DBCC SHRINKDATABASE

  • When you use the NOTRUNCATE option with DBCC SHRINKDATABASE, does it ignore the target_percent for the free space that should remain? I ask because my book says that the free space "reclaimed" is actually retained in the data files.

    Secondly, if you use the TRUNCATEONLY option, the data is not compacted to the front of the file. But wouldn't more space be released, and the high-water mark (the last allocated extent) lowered, if the data were compacted? Why doesn't SQL Server compact the data? Please clarify. Thank you.

  • When I need to shrink the database I always do it on a file basis first.  This way the data is compacted before the database is shrunk.  The code I use is as follows:

    -- Compact the data in each file first
    DBCC SHRINKFILE (N'filename');

    -- Then release the remaining free space at the end of the files to the OS
    DBCC SHRINKDATABASE (N'databasename', TRUNCATEONLY);

  • Thanks a lot. The question that popped into my mind is: when you shrink a file with the SHRINKFILE command, does it compact the data to the front of the file automatically, or do you have to use the NOTRUNCATE option as well?

  • The basic shrink operation moves data below the shrink threshold you've set and then releases the free portion of the file to the OS.

    The NOTRUNCATE option simply doesn't release this free portion to the OS after compacting the data.

    The TRUNCATEONLY option doesn't do any compacting and just releases any free portion of the file to the OS. It doesn't do the compacting because that's what the option says - only truncate.
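    To make the three modes concrete, here is a minimal sketch (MyDB is a placeholder database name):

    -- Compact data toward the front of the files AND release the freed
    -- space to the OS, aiming to leave 10 percent free space
    DBCC SHRINKDATABASE (N'MyDB', 10);

    -- Compact only: move data toward the front, but keep the freed space
    -- inside the files (nothing is released to the OS)
    DBCC SHRINKDATABASE (N'MyDB', 10, NOTRUNCATE);

    -- Truncate only: no data movement; just release the free space at the
    -- end of the files to the OS (any target percent is ignored)
    DBCC SHRINKDATABASE (N'MyDB', TRUNCATEONLY);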

    Why are you shrinking your database in the first place? It's usually a pretty bad practice for most databases. The database needs a certain amount of free space to function under regular DML, so removing all the free space is just going to cause it to grow again. Also, the data-move operation does not 'intelligently' place the data lower in the file; it just moves it to the first free space it can find - this causes index fragmentation and can lower performance.
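    You can see the fragmentation effect for yourself by checking an index before and after a shrink. A minimal sketch (MyTable is a placeholder name; the DMV form is 2005-only):

    -- SQL Server 2000 and 2005: fragmentation report for one table
    DBCC SHOWCONTIG ('MyTable');

    -- SQL Server 2005: fragmentation for every index in the current database
    SELECT object_id, index_id, avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED');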

    I've documented all these reasons in the BOL for SQL Server 2005. Let me know if you have any further shrink questions.

    Regards

    Paul Randal
    CEO, SQLskills.com: Check out SQLskills online training!
    Blog: www.SQLskills.com/blogs/paul Twitter: @PaulRandal
    SQL MVP, Microsoft RD, Contributing Editor of TechNet Magazine
    Author of DBCC CHECKDB/repair (and other Storage Engine) code of SQL Server 2005

  • Thanks a lot, Paul. I really appreciate that you took the time to clarify this.

  • I'm not sure that I should post it here, but it's somewhat related to SHRINKDATABASE...

    The problem I have is that I save PDFs into the database in IMAGE columns. The database is growing so fast that I've decided to store them compressed. I've written a program that reads the data, compresses it, and saves it back into the database (doing an UPDATE, not a DELETE-INSERT).

    The curious thing is that, on NTFS, the compressed files shrink to about 65% of their original size (looking at the file sizes in the folder), but the table where the PDFs are stored (measured by the table's used space) only shrinks to about 80%.

    So I wonder where this extra 15% of compression is "lost". The only explanation I have in mind is that the free space is not contiguous: it's scattered across data pages that are not completely full, and those pages are not refilled by DBCC SHRINKDATABASE, as Paul Randal said in a previous post.

    If it were a "normal" data type (not IMAGE or TEXT), I would rebuild the primary key index, but I don't know what I can do with the IMAGE data type.

    Thank you in advance, and Merry Xmas!!

    Josep

  • Are you taking into account the overhead of storing the files in a database? Storing, say, 8000 bytes of compressed data in an IMAGE field requires an 8192 byte database page to store the IMAGE column in, plus 16 bytes in the data record to point to the location of the IMAGE data. In addition to that, your hypothesis about some wasted space due to your updates is probably also contributing.
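    If you want to check, sp_spaceused breaks a table's reserved space into data, index, and unused portions; a quick sketch (dbo.PdfTable is a placeholder name):

    -- Reports rows, reserved, data, index_size and unused for the table
    EXEC sp_spaceused N'dbo.PdfTable';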

    Remember that storing file data in a database is not necessarily more space-efficient or faster than storing it in the file-system. Filestream goes some way towards addressing this in SQL Server 2008.

    Hope this helps.

    Paul Randal
    CEO, SQLskills.com: Check out SQLskills online training!
    Blog: www.SQLskills.com/blogs/paul Twitter: @PaulRandal
    SQL MVP, Microsoft RD, Contributing Editor of TechNet Magazine
    Author of DBCC CHECKDB/repair (and other Storage Engine) code of SQL Server 2005

  • Thank you, Paul, for your answer. I didn't know that 8192 bytes were needed for each 8000 bytes of data. I'm going to try to find out why this is...

    I need these PDFs in the database because I'm using Merge Replication and they must be present on all the subscribers.

    After your post saying that my hypothesis could be contributing, I ran a test. Before, when compressing the PDFs, I was just doing an UPDATE, but I've changed it to a DELETE followed by an INSERT.
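    The change is essentially this pattern (just a sketch - the table and column names here are made up, and the program binds @id and the compressed blobs as parameters):

    -- Instead of updating the IMAGE columns in place, delete the row and
    -- re-insert it so the compressed values are written to fresh pages
    BEGIN TRANSACTION;
    DELETE FROM dbo.PdfTable WHERE id = @id;
    INSERT INTO dbo.PdfTable (id, pdf1, pdf2)
    VALUES (@id, @compressedPdf1, @compressedPdf2);
    COMMIT TRANSACTION;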

    The results are quite surprising (the table has 339 rows, and each row has 2 PDFs):

    Original table size: 21.12 MB

    PDFs compressed using UPDATE: 20.16 MB

    PDFs compressed using DELETE/INSERT: 17.29 MB

    So, is there any kind of command to rearrange the BLOBs, or is the only solution to DELETE/INSERT the rows?

    Thank you and have a Happy New Year!!

    Josep

    PS: I'm not posting the table schema because it's mainly just the two IMAGE columns, one for each of the two PDFs per row.

  • My 8000-byte example was just that - an example. It doesn't need 8192 bytes for *each* 8000-byte chunk of a larger value: 8040 bytes of data can be stored on each text page, meaning you have 152 bytes of unusable space (used for page and record headers, etc.).
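    As a rough illustration with those numbers: a 40,000-byte compressed PDF needs ceiling(40000 / 8040) = 5 text pages, which occupy 5 x 8192 = 40,960 bytes of file space - only about 2.4% overhead, plus the 16-byte pointer in the data record.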

    The only way to rearrange blob data in SQL 7.0 and 2000 is to export to another table. In 2005 there is LOB compaction as part of ALTER INDEX REORGANIZE and shrink.
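    On 2005, the LOB compaction looks like this (dbo.PdfTable is a placeholder table name):

    -- Reorganize all indexes on the table and compact LOB data in place
    ALTER INDEX ALL ON dbo.PdfTable
    REORGANIZE WITH (LOB_COMPACTION = ON);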

    Thanks

    Paul Randal
    CEO, SQLskills.com: Check out SQLskills online training!
    Blog: www.SQLskills.com/blogs/paul Twitter: @PaulRandal
    SQL MVP, Microsoft RD, Contributing Editor of TechNet Magazine
    Author of DBCC CHECKDB/repair (and other Storage Engine) code of SQL Server 2005

  • Thanks again for your answer!!

    This solves our problems 🙂

    Josep

    PS: Sorry for the delay; I've been on holiday.

  • Hi Josep, I have a similar situation and came across this post. I have a few questions I'm hoping you can help answer. I'm running SQL 2000 with merge replication and have a photo table with a BLOB column; the database size is out of control and reached 36 GB. After some research, I decided I needed to do something similar to what you have done: a program that goes through each row and adjusts the quality/compression settings of the .jpg images stored there. By changing the quality ratio I was able to get, for example, a 300 KB photo down to 30 KB, and the program reported savings of about 19 GB just from changing the image property. The program saved each image to disk first as a backup, created the new format, and then deleted/inserted the new image.

    My question now is: what is the next step to free up the saved space, and can I run these steps while the database is available to users? I cannot take the database down.

    Will DBCC SHRINKFILE do the trick?

    Will it release the space to the file system and compact the database?

    Any thoughts/suggestions are appreciated.

  • Please post new questions in a new thread.

    Thank you

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
