DBAs are known for asking for more memory, but often can’t say what’s in memory. While I agree that many database servers can use more memory, I’m a firm believer in knowing how you’re using your resources before asking for more. The script below allows me to do just that.
What It Returns
This will return every index that is using at least 1 MB of memory, for every database on your server. It also returns the memory tied to unallocated space in your tables, which shows up as NULL in every column except the size of the space and the table name.
I’ll warn you now that the unallocated space can be surprisingly high for TempDB, and I talk about that in TempDB Memory Leak?. Hopefully we can get a good comment thread going on that post to talk through what we’re seeing and how common the issue really is.
IF OBJECT_ID('TempDB..#BufferSummary') IS NOT NULL BEGIN
    DROP TABLE #BufferSummary
END

IF OBJECT_ID('TempDB..#BufferPool') IS NOT NULL BEGIN
    DROP TABLE #BufferPool
END

CREATE TABLE #BufferPool
(
    Cached_MB Int
    , Database_Name SysName
    , Schema_Name SysName NULL
    , Object_Name SysName NULL
    , Index_ID Int NULL
    , Index_Name SysName NULL
    , Used_MB Int NULL
    , Used_InRow_MB Int NULL
    , Row_Count BigInt NULL
)

SELECT Pages = COUNT(1)
    , allocation_unit_id
    , database_id
INTO #BufferSummary
FROM sys.dm_os_buffer_descriptors
GROUP BY allocation_unit_id, database_id

DECLARE @DateAdded SmallDateTime
SELECT @DateAdded = GETDATE()

DECLARE @SQL NVarChar(4000)
SELECT @SQL = ' USE [?]
INSERT INTO #BufferPool (
    Cached_MB
    , Database_Name
    , Schema_Name
    , Object_Name
    , Index_ID
    , Index_Name
    , Used_MB
    , Used_InRow_MB
    , Row_Count
    )
SELECT SUM(bd.Pages)/128
    , DB_Name(bd.database_id)
    , Schema_Name(o.schema_id)
    , o.name
    , p.index_id
    , ix.Name
    , i.Used_MB
    , i.Used_InRow_MB
    , i.Row_Count
FROM #BufferSummary AS bd
    LEFT JOIN sys.allocation_units au ON bd.allocation_unit_id = au.allocation_unit_id
    LEFT JOIN sys.partitions p ON (au.container_id = p.hobt_id AND au.type IN (1,3))
        OR (au.container_id = p.partition_id AND au.type = 2)
    LEFT JOIN (
        SELECT PS.object_id
            , PS.index_id
            , Used_MB = SUM(PS.used_page_count) / 128
            , Used_InRow_MB = SUM(PS.in_row_used_page_count) / 128
            , Used_LOB_MB = SUM(PS.lob_used_page_count) / 128
            , Reserved_MB = SUM(PS.reserved_page_count) / 128
            , Row_Count = SUM(row_count)
        FROM sys.dm_db_partition_stats PS
        GROUP BY PS.object_id
            , PS.index_id
    ) i ON p.object_id = i.object_id AND p.index_id = i.index_id
    LEFT JOIN sys.indexes ix ON i.object_id = ix.object_id AND i.index_id = ix.index_id
    LEFT JOIN sys.objects o ON p.object_id = o.object_id
WHERE database_id = DB_ID()
GROUP BY bd.database_id
    , o.schema_id
    , o.name
    , p.index_id
    , ix.Name
    , i.Used_MB
    , i.Used_InRow_MB
    , i.Row_Count
HAVING SUM(bd.pages) > 128
ORDER BY 1 DESC;'

EXEC sp_MSforeachdb @SQL

SELECT Cached_MB
    , Pct_of_Cache = CAST(Cached_MB * 100.0 / SUM(Cached_MB) OVER () AS Dec(20,3))
    , Pct_Index_in_Cache = CAST(Cached_MB * 100.0 / CASE Used_MB WHEN 0 THEN 0.001 ELSE Used_MB END AS Dec(20,3))
    , Database_Name
    , Schema_Name
    , Object_Name
    , Index_ID
    , Index_Name
    , Used_MB
    , Used_InRow_MB
    , Row_Count
FROM #BufferPool
ORDER BY Cached_MB DESC
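Since #BufferPool is a temp table that stays around for your session, you can slice the results further after the script runs. As a quick sketch (assuming the script above has already populated #BufferPool), here's a rollup by database:

```sql
-- Optional follow-up: roll the index-level results up by database.
-- Assumes #BufferPool from the script above is still populated in this session.
SELECT Database_Name
    , Cached_MB = SUM(Cached_MB)
    , Pct_of_Cache = CAST(SUM(Cached_MB) * 100.0 / SUM(SUM(Cached_MB)) OVER () AS Dec(20,3))
FROM #BufferPool
GROUP BY Database_Name
ORDER BY SUM(Cached_MB) DESC;
```

That's often the first view I'd show a manager, before drilling into the individual indexes.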
Where’d the Script Come From
I’ve had a script similar to this one around for a while. It’s based on Jonathan Kehayias’s script from his post Finding What Queries in the Plan Cache Use a Specific Index, and I couldn’t have done this without his script as a starting point.
I originally posted a version of this script in my post Cleaning Up the Buffer Pool to Increase PLE, and it was great for seeing the index usage in a single database. That version runs slower, only returns one database at a time, and doesn’t show unallocated space in memory. The improvements warranted either an update to that post or a completely new post…I opted for the latter.
What It Means
Now you can see what’s in your memory. Hopefully you’ll see one or two things that stand out on here that don’t make sense; those are your easy tuning opportunities.
If an index is 100% in cache, you’re almost certainly scanning it, and that may be an issue. Yes, you can find when you did scans on indexes using the scripts in my Indexes – Unused and Duplicates post, but it helps to have another view of what that means in your memory.
One thing the index monitoring scripts in the post I just mentioned can’t do is tell you when you’re doing large seeks as opposed to small seeks. With the typical phone book example, you could ask for all the names in the phone book where the last name begins with anything from A to Y, giving you 98% of the phone book as results. Index usage stats will show you did a seek, which sounds efficient. The script on this post will show that you have 98% of your index in cache immediately after running the query, and that gives you the opportunity to find the issue.
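To make the phone book example concrete, here's a hedged sketch; the table and columns are hypothetical, but the pattern is the point:

```sql
-- Hypothetical table and index: an index on LastName satisfies this
-- predicate with a range seek, yet it returns nearly the whole table.
SELECT LastName, FirstName, PhoneNumber
FROM dbo.PhoneBook
WHERE LastName >= 'A' AND LastName < 'Z';
-- sys.dm_db_index_usage_stats counts this as a single user_seek, which
-- sounds efficient; the buffer pool script above is what reveals that
-- almost the entire index was pulled into cache to answer it.
```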
When you see an index that looks out of place here, dive back into the scripts on Cleaning Up the Buffer Pool to Increase PLE to see what’s in cache using that index. If the query isn’t in cache for any reason, you may be able to look at the last time the index had a scan or seek against it in sys.dm_db_index_usage_stats and compare that to results from an Extended Events session you had running to see what it could have been.
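If you want to check that last-activity angle directly, a minimal sketch against sys.dm_db_index_usage_stats looks like this (run it in the database in question; counters reset on restart, and on some versions when an index is rebuilt):

```sql
-- Last recorded seek/scan/lookup activity per index in the current database.
SELECT o.name AS Object_Name
    , ix.name AS Index_Name
    , us.user_seeks
    , us.user_scans
    , us.user_lookups
    , us.last_user_seek
    , us.last_user_scan
    , us.last_user_lookup
FROM sys.dm_db_index_usage_stats us
    JOIN sys.indexes ix ON us.object_id = ix.object_id AND us.index_id = ix.index_id
    JOIN sys.objects o ON ix.object_id = o.object_id
WHERE us.database_id = DB_ID()
ORDER BY us.last_user_seek DESC;
```

Comparing those timestamps to your Extended Events session narrows down which query touched the index and when.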
The main point is that you have something to get you started. You have specific indexes that are in memory, and you can hunt down when and why those indexes are being used that way. It’s not always going to be easy, but you have a start.
We’re All On a Budget
It’s not too uncommon for this process to end in asking for more memory, and I view memory like being on a budget. The amount of memory you have right now is your current budget. Asking for more memory should be viewed like asking for more money in a financial budget. For a financial budget increase, here are the questions I’d be prepared to answer:
- What did you spend the money we already gave you on?
- Did you spend that money as efficiently as possible?
- What else do you want to spend money on?
Now you can answer these questions in database form:
- Here’s what I have in cache at multiple times, specifically right after PLE dropped.
- I went through the queries that pulled the data into cache and tuned what I could.
- When I checked what’s in cache multiple times, these indexes fluctuated a lot in how much was in there. I believe adding more memory would allow them to stay in cache instead of kicking each other out to make room.
Whether the environment is virtual or physical, there’s only so much memory that can be made available to us. We’re on budgets of how much memory the host has, how many memory slots the server has, and how large the memory chips are that those slots can handle. Prove you’re using what you have well, and it’s a lot harder to say no to you.
I have an odd habit of getting the memory I ask for because I answer these questions up front in the initial request for memory.