We have come to the fourth day in this mini-series. So far I have laid down some under-carriage maintenance for one of our favorite databases: the msdb database. The maintenance discussed so far has covered a few different tables and processes to help keep things tidy (at a base level).
Today I want to discuss some more under-carriage work for the database with database_id 4. Yep, that’s right, we will talk some more about the msdb database today. What we discuss today, though, may be applicable beyond just the msdb database. First, let’s recap the previous three days.
- Backup, Job and Mail History Cleanup – 3rd Day
- Service Broker Out of Control – 2nd Day
- Maint Plan Logs – 1st Day
On the Fourth Day of pre-Christmas…
Somehow, that seems like it would be more appealing on bacon. But the amount of fat shown here is just nasty. And that is what I think of a table that consumes multiple gigabytes in a database while holding 0 records.
Just deleting the records we have spoken of over the past three days is not the end of the road for msdb maintenance. Oftentimes the pages allocated to these tables are not deallocated right away. Sometimes, the pages are not deallocated for quite some time.
When these pages are cleaned up depends in large part on the ghost cleanup process. Of course, the ghost cleanup is not going to do anything on the tables where records were removed unless it knows that there are some ghost records (records that have been deleted) in the first place. This doesn’t happen until a scan operation occurs. You can read a much more detailed explanation on the ghost cleanup process from Paul Randal, here.
Because of this, you can try running a query against the table to help move the process along. But what if you can’t get a scan operation to occur against the table?
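As a starting point, you can check whether ghost records are still lingering and then try to nudge the cleanup along with a full scan. This is a minimal sketch; `dbo.backupset` is just an example table — substitute whichever msdb table you purged.

```sql
-- Sketch: check for lingering ghost records, then force a scan.
-- dbo.backupset is an example table name, not prescribed by this article.
USE msdb;
GO

-- How many ghost (deleted-but-not-cleaned) records remain on the pages?
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.index_id,
       ips.ghost_record_count,
       ips.version_ghost_record_count
FROM sys.dm_db_index_physical_stats(
         DB_ID('msdb'), OBJECT_ID('dbo.backupset'),
         NULL, NULL, 'DETAILED') AS ips;

-- A full scan can mark deleted rows as ghosts so the cleanup task visits them.
SELECT COUNT(*) FROM dbo.backupset;
```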
Could you try updating statistics? Maybe try running DBCC UPDATEUSAGE? How about forcing a rebuild of the clustered index? Or running an index reorganize with LOB_COMPACTION on? Or even creating a clustered index where one doesn’t exist? Maybe you could try DBCC CLEANTABLE and see if that works. Or you could run DBCC CHECKDB against the msdb database and try to get all of the pages read.
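For reference, here is roughly what each of those attempts looks like in T-SQL. This is a hedged sketch, not a recipe: `dbo.backupset` is a stand-in table name, and you should test each command in a maintenance window first.

```sql
-- Sketches of the techniques listed above; dbo.backupset is an example.
USE msdb;
GO

UPDATE STATISTICS dbo.backupset WITH FULLSCAN;   -- update stats via a full scan

DBCC UPDATEUSAGE (msdb, 'dbo.backupset');        -- correct page/row count metadata

ALTER INDEX ALL ON dbo.backupset REBUILD;        -- rebuild the clustered (and other) indexes

ALTER INDEX ALL ON dbo.backupset
    REORGANIZE WITH (LOB_COMPACTION = ON);       -- compact LOB (BLOB) data

DBCC CLEANTABLE (msdb, 'dbo.backupset');         -- reclaim space from dropped
                                                 -- variable-length columns

DBCC CHECKDB (msdb) WITH NO_INFOMSGS;            -- reads all allocated pages
```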
While working to reduce the size of the tables in the following image, I tried several different approaches to force a scan of the pages so I could reduce the table sizes (0 records really should not consume several GB of space).
Even after letting things simmer for a few days to see if backup operations could help, I still see the following.
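If you want to check for this symptom in your own msdb database, a query along these lines will surface tables whose reserved space is out of proportion to their row counts. This is a sketch of one way to do it, not the exact query behind the image above.

```sql
-- Sketch: reserved space vs. row count for user-visible tables in msdb.
-- Large reserved_mb with a row_count near 0 is the symptom discussed here.
USE msdb;
GO

SELECT t.name AS table_name,
       SUM(ps.row_count) AS row_count,
       SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t
    ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY reserved_mb DESC;
```

Note that hidden system tables such as sysxmitqueue will not appear in sys.tables from a normal connection; inspecting those requires the DAC, as discussed next.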
This is better than it was, but it still seems a bit over the top. Something that both of these tables have in common is that they have BLOB columns. To see that for the sysxmitqueue table, you would need to connect to the server using a DAC (dedicated administrator) connection. If you did, you would find that the msgbody column is a varbinary(max).
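From a DAC connection (prefix the server name with `admin:` when connecting), something like the following lets you inspect the hidden table's column definitions. This is a sketch under the assumption you are connected via the DAC; the query returns nothing useful from a regular connection.

```sql
-- Sketch: list column definitions for the hidden sysxmitqueue table.
-- Requires a DAC connection (connect to "admin:YourServerName").
USE msdb;
GO

SELECT c.name AS column_name,
       TYPE_NAME(c.user_type_id) AS data_type,
       c.max_length          -- -1 indicates (max) types such as varbinary(max)
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID('sys.sysxmitqueue');
```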
What I found to work for this particular case was DBCC SHRINKFILE. As a one-time (hopefully) maintenance task, it should be OK. I would never recommend it as a regular maintenance task. The used space for these tables does not decrease until DBCC SHRINKFILE reaches the DbccFilesCompact phase. This is the second phase of SHRINKFILE and may take some time to complete. You can read more on SHRINKFILE here.
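A one-off shrink would look something like this. The logical file name `MSDBData` is the default for msdb's data file and the 512 MB target is purely illustrative — check your own file names and pick a sensible target before running anything.

```sql
-- Sketch: one-off shrink of msdb's data file. Not a regular maintenance task.
USE msdb;
GO

-- First, confirm the logical file name and current size.
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- Then shrink toward a target size in MB (512 here is just an example).
-- MSDBData is the default logical name; yours may differ.
DBCC SHRINKFILE (MSDBData, 512);
```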
Is it necessary to go to these lengths to reclaim space in your msdb database from tables that are excessively large despite holding very few records? That is up to you and your environment. In many cases, I would say it is a nice-to-have, but not a necessity.
If you do go to these lengths to reduce space in msdb, then you need to understand that there are costs involved. There will be additional IO, as well as index fragmentation, that you should be prepared to handle as a consequence of shrinking.
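After a shrink, it is worth measuring that fragmentation so you know what to address. One way to sketch that check (the 30% threshold is a common rule of thumb, not a mandate):

```sql
-- Sketch: find fragmented indexes in msdb after a shrink.
USE msdb;
GO

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID('msdb'), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30   -- rule-of-thumb rebuild threshold
ORDER BY ips.avg_fragmentation_in_percent DESC;
```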
Stay tuned for the next installment. We will be taking a look at some internals related to compression.