Your production SQL Server database's disk space has fallen below the 10% safety margin, and you have to find more space before business stops. Your CFO has declared there is no budget for more drives (although there is plenty of budget for Finance projects). To make matters worse, the CTO has stated that whatever solution is proposed must have no business impact. How do you keep both the CFO and CTO happy? How do you find more disk space with no business impact? Sometimes being a DBA is quite difficult; fortunately, in this case there is a proven solution.
When a database runs out of space, all inserts fail, so a method to keep database size in check is an essential DBA tool. Fortunately, there is a way to squeeze space out of disks without business impact or spending money. This article describes a technique to shrink a data file and reclaim disk space with almost no business contention. Let's examine how it works.
In this example I've inherited the over-allocated 128 GB Analytics database, contained in the default filegroup, Primary. Use sp_helpdb to display a database's files and filegroups, as in figure 1.
sp_helpdb reports the db_size as 128 GB, the logical file name 'Tracking', and the physical file name 'D:\MSSQL\DATA\Analytics.mdf'. The technique in this article moves tables and indexes, so the fewer tables we have to move, the faster it runs. The following query reports the size of each table:
SELECT CAST(OBJECT_NAME(id) AS varchar(50)) AS [name]
    ,SUM(CASE WHEN indid < 2 THEN CONVERT(bigint, [rows]) END) AS [rows]
    ,SUM(CONVERT(bigint, reserved)) * 8 AS reserved
    ,SUM(CONVERT(bigint, dpages)) * 8 AS data
    ,SUM(CONVERT(bigint, used) - CONVERT(bigint, dpages)) * 8 AS index_size
    ,SUM(CONVERT(bigint, reserved) - CONVERT(bigint, used)) * 8 AS unused
FROM sysindexes WITH (NOLOCK)
WHERE sysindexes.indid IN (0, 1, 255)
    AND sysindexes.id > 100
    AND OBJECT_NAME(sysindexes.id) <> 'dtproperties'
GROUP BY sysindexes.id WITH ROLLUP
ORDER BY reserved DESC
TableName   Rows      Reserved   Data      Index Size   Unused
NULL        46,169K   12,692K    11,149K   1,456K       86K
Table1      39,912K   10,620K    9,391K    1,161K       67K
Table2      9,980K    1,464K     1,306K    154K         4K
The total reserved database space is 12,692K; the sum of Table1 and Table2 is 12,084K. That works out to 12084 / 12692, or 95%. This technique does not require moving every table, only most of the space. I have found that moving 90% or more of the reserved space is sufficient to allow shrinkfile to run quickly. A word of caution: running shrinkfile without first moving the tables can result in a long-running shrink statement that holds locks. Don't let this happen to you.
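Note that sysindexes is a deprecated compatibility view. On SQL Server 2005 and later, a similar per-table size report can be built from sys.dm_db_partition_stats; the query below is a sketch of that alternative, not part of the original script.

```sql
-- Sketch: per-table reserved/used space from the modern DMV
-- (sizes in KB; pages are 8 KB each).
SELECT OBJECT_NAME(object_id) AS [name]
    ,SUM(CASE WHEN index_id IN (0, 1) THEN row_count END) AS [rows]
    ,SUM(reserved_page_count) * 8 AS reserved_kb
    ,SUM(used_page_count) * 8 AS used_kb
FROM sys.dm_db_partition_stats
WHERE OBJECTPROPERTY(object_id, 'IsUserTable') = 1
GROUP BY object_id
ORDER BY reserved_kb DESC;
```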
Now, just how much space is available to reclaim? Navigate to Management Studio > Database Name > Tasks > Shrink > Files. I always select shrinkfile and avoid shrink database, to prevent accidentally shrinking the entire database.
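If you prefer a query to the GUI, the same free-space figures can be pulled from sys.database_files with FILEPROPERTY. This is a sketch, assuming the database context is already set to Analytics.

```sql
-- Sketch: size and free space per database file, in MB
-- (size is stored in 8 KB pages, so divide by 128 for MB).
SELECT name
    ,size / 128 AS size_mb
    ,size / 128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS free_mb
FROM sys.database_files;
```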
There is treasure in figure 3: 90% available free space is a DBA's gold mine. This is clearly an over-allocated database, so how do we get the space back? The least impactful command is the truncateonly approach shown below.
USE [Analytics]
GO
DBCC SHRINKFILE (N'Tracking', 0, TRUNCATEONLY)
GO
Unfortunately, this command won't do much, as it only frees empty pages from the end of the database file. Run sp_helpdb again to verify how little space is reclaimed.
As the size column in figure 4 shows, only about 11 MB (128698368 - 128687296 KB) were freed. We need a more aggressive solution.
The shrinkfile command moves pages, holding locks as it does so. If there are more than 50,000 pages to move, you can't run it in production without heavy contention for 10 to 30 minutes or more; other transactions will time out while shrinkfile is running. My advice: never run shrinkfile in production without following this technique.
Instead, move the tables and indexes to a new filegroup, then run shrinkfile. Shrinkfile will have very few pages to move and will run amazingly quickly, reclaiming the over-allocated pages in the process. So how do we move the tables and indexes to a new filegroup?
Create a new filegroup large enough to contain the reserved space, roughly 10% of the original file (128 GB * 0.10), or about 13 GB. It is best to add a buffer for expected growth, say 7 GB, rounding the new file size to 20 GB. That is still 108 GB of space saved (128 - 20).
ALTER DATABASE [Analytics] ADD FILEGROUP FG_Data2
GO
--Add new file.
ALTER DATABASE [Analytics]
ADD FILE (
    NAME = FG_Data2,
    FILENAME = 'I:\MSSQL\DATA\Analytics_2.ndf',
    SIZE = 50MB,
    MAXSIZE = 20000MB,
    FILEGROWTH = 50MB
) TO FILEGROUP FG_Data2;
You could skip the 7 GB of growth space, but it is more efficient to allocate the extra space now than to let the database file add extents later.
Moving the tables and indexes to a new filegroup is straightforward: create a copy of the table, changing the ON clause to the new filegroup FG_Data2.
CREATE TABLE [dbo].[Table1_Copy](
    [Id] [int] NOT NULL,
    [col1] [varchar](50) NOT NULL,
    [col2] [bigint] NOT NULL,
    [col3] [datetime] NOT NULL,
    CONSTRAINT [CUPK_Table1_Id] PRIMARY KEY CLUSTERED ([Id] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [FG_Data2]
) ON [FG_Data2]
In a similar manner, create the secondary indexes in the new filegroup. As a good practice I usually create a separate index filegroup, but this is optional.
ALTER DATABASE [Analytics] ADD FILEGROUP FG_Index1
GO
--Add new file.
ALTER DATABASE [Analytics]
ADD FILE (
    NAME = FG_Index1,
    FILENAME = 'I:\MSSQL\INDEX\Analytics_Index_1.ndf',
    SIZE = 50MB,
    MAXSIZE = 10000MB,
    FILEGROWTH = 50MB
) TO FILEGROUP FG_Index1;
GO
CREATE NONCLUSTERED INDEX [IX_Table1_Col1] ON [dbo].[Table1_Copy] ([col1] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = ON,
          DROP_EXISTING = OFF, ONLINE = ON, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
    ON [FG_Index1]
GO
Use a script like this to copy the rows into the new tables/filegroups:
INSERT INTO Table1_Copy
SELECT * FROM Table1
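On a busy system, a single INSERT...SELECT of tens of millions of rows can bloat the transaction log and hold locks for a long stretch. A batched copy keyed on the clustered primary key is gentler; the following is a sketch, and the 50,000-row batch size is an assumption you would tune for your system.

```sql
-- Sketch: copy rows in keyed batches to limit log growth and lock duration.
DECLARE @lastId int = 0;

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.Table1_Copy (Id, col1, col2, col3)
    SELECT TOP (50000) Id, col1, col2, col3
    FROM dbo.Table1
    WHERE Id > @lastId
    ORDER BY Id;

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to copy

    -- Advance the high-water mark to the last key copied.
    SELECT @lastId = MAX(Id) FROM dbo.Table1_Copy;
END
```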
Now drop the original table and rename Table1_Copy to Table1. Rerun sp_helpdb and you'll notice that while used pages are under control, the critical current size has grown: reserved space cannot be reclaimed just by moving tables to new filegroups. Repeat the data move for the other large tables and indexes.
Database Name      Data MB      Data Used MB   Data Free MB
Analytics Before   125,921.19   12,587.56      113,333.63
Analytics After    136,671.19   12,592.69      124,078.50
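The drop-and-rename step above can be sketched as follows. Before dropping, check for foreign keys, triggers, and permissions that reference the original table, and note that sp_rename leaves constraint and index names (for example CUPK_Table1_Id) untouched.

```sql
-- Sketch: swap the copy in for the original table.
BEGIN TRANSACTION;
DROP TABLE dbo.Table1;
EXEC sp_rename 'dbo.Table1_Copy', 'Table1';
COMMIT;
```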
With the largest tables now safely in new filegroups, you can shrink the original filegroup, Primary. This can be accomplished via script or in Management Studio.
USE [Analytics]
GO
DBCC SHRINKFILE (N'Tracking', 5000)
GO
Amazingly, this shrinkfile to 5 GB took only 53 seconds. It would have run for hours and caused serious production contention if the tables and indexes had not first been moved to a new filegroup.
Run sp_helpdb again and notice the file 'Tracking' has shrunk from 128 GB to 5 GB, and the total db_size has dropped from 128 GB to 32 GB, a savings of 96 GB. Using this technique you can reclaim costly disk space with almost no contention.
You can shrink databases and reclaim reserved space quickly, with minimal business impact, by using secondary filegroups. Your CFO will be thankful no money was spent on additional disks, but don't expect a bonus. Remember, you're in IT, not Finance.