DBCC SHRINKDATABASE TRUNCATEONLY

  • Hi,

    I need help deciding whether to use DBCC SHRINKDATABASE with TRUNCATEONLY. I have read a lot of articles advising against this option; however, I am running out of options for freeing up space to the OS.

    I inherited this database. It has four filegroups plus the log and is very large, approx. 600 GB when a full backup is created.

    I'm currently archiving, and in the process deleting a large portion of the data in the tables. But deleting does not release the space to the OS. What steps should I take to free up space?

    Thank you.

    Regards

    By the way, nobody has been maintaining this database except for a daily job that takes a full backup.

    First question - why do you want to release space to the OS? Free space in a database file is not a problem; in fact, SQL Server needs some free space in database files to work effectively.

    Only full backups? I hope this DB is in Simple Recovery. If not, please read through this: Managing Transaction Logs

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thank you for your quick response.

    I need to release free space to the OS, or free up space in simple terms, because I am losing disk space. This server has an allocation of 1.75 GB for data, and thus we are only able to keep one copy of a full backup of the database. I'm currently archiving data, and due to the additional copy of the full backup, I'm at 45 GB out of 1.75 GB. After deleting a lot of entries in some tables, I want to free this space again, as every time we do a backup, this unused space is included in the backup, which I believe isn't best practice.

    I actually want to shrink the database, as no maintenance has been performed on it. My plan, before applying any daily maintenance plan, is to make sure the database and log are as small as possible.

    Oh, and also, I wrote "only full backups" because I know that there are other maintenance techniques that can be applied besides just backing up the database.

    What I actually plan to do is one full backup every Sunday, and then a transaction log backup twice a day.

  • spaceinvaders (11/20/2011)


    I need to release free space to the OS, or free up space in simple terms, because I am losing disk space.

    That's pretty normal, databases do grow.

    This server has an allocation of 1.75 GB for data, and thus we are only able to keep one copy of a full backup of the database. I'm currently archiving data, and due to the additional copy of the full backup, I'm at 45 GB out of 1.75 GB.

    That doesn't quite make sense...

    How much free space have you managed to get within the DB? How fast does the DB grow? (in other words, how long will it take the DB to reuse that free space)

    After deleting a lot of entries in some tables, I want to free this space again, as every time we do a backup, this unused space is included in the backup, which I believe isn't best practice.

    Unused space is never included in backups, and shrinking databases regularly is very, very bad practice.

    I actually want to shrink the database, as no maintenance has been performed on it. My plan, before applying any daily maintenance plan, is to make sure the database and log are as small as possible.

    Again, that's among the worst practices ever. Shrinking databases is something that should only be done after some unusual operation has resulted in lots of free space that won't be reused.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • spaceinvaders (11/20/2011)


    What I actually plan to do is one full backup every Sunday, and then a transaction log backup twice a day.

    Twice a day for a tran log backup? Is that enough? That means you could lose up to 12 hours of data in a disaster. If that's acceptable, switch to simple recovery and run a full backup daily and a diff 12 hours after the full.

    Log backup frequencies are more commonly hourly, every half hour, or every 15 minutes or less.
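
    As a rough illustration only (the database name and backup path below are placeholders, not anything from this thread), a log backup that a SQL Agent job could run every 15-60 minutes is just:

    -- Placeholder names; in practice use a unique file name per run (e.g. with a timestamp).
    BACKUP LOG YourDB
        TO DISK = N'D:\Backups\YourDB_log.trn'
        WITH NOINIT;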

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
    Sorry, I think I should paint a clearer picture.

    I just inherited this SQL Server, which has never actually been maintained since it was upgraded to SQL 2005. The only thing our infrastructure team does with this database is take a backup once a week and store it to tape. The current database size is 600 GB; however, I do not believe that all of this space is actually being used or will be used in the near future. This specific database has four filegroups (four files) of approximately 150 GB each. Yes, the database grows normally, but I do not believe its current size reflects the actual amount of data inside it.

    This weekend, I was assigned to archive data and remove it from this database. After deleting a chunk of data, the database still remains at 600 GB. My struggle now is: how do I release the free space to the OS, since I know it will not be used by the database?

    I do not plan to enable auto-shrink or add a shrink database task in my maintenance plan wizard. My maintenance plan is to take one full backup every Sunday and a transaction log backup twice a day during the week. A twice-daily transaction log backup is enough because the only real write activity in this database is a nightly process. During the day only reports are run, and no inserts, updates, or deletes are made.

    The only reason I'm considering DBCC SHRINKDATABASE with TRUNCATEONLY is to release the free space that should have been freed after deleting that chunk of data. By the way, the chunk I just deleted is approximately four years of data, and every year we will archive so that only three years of data stay live in production. So I believe 600 GB isn't the appropriate size for this database.

    First, how do I determine how much of the database actually contains data? And second, how do I free up the disk space? I assume the actual data is around 300 GB, give or take, and will grow to about 400 GB but stay around there.

  • EXEC sp_spaceused

    That'll give you an idea of how much free space there is in the DB. If there's more than 10-20% free, consider a once-off shrink. You'll probably need to do ShrinkDatabase with a target size, not truncate only. Target a size 10-20% larger than the data within the DB.

    Once done, rebuild all the indexes (they'll have been fragmented by the shrink).

    Once-off, not repeated.
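
    As a rough sketch only (the database and table names below are placeholders, and the target is something you'd work out from the sp_spaceused output):

    -- How big is the database and how much of it is unallocated?
    EXEC sp_spaceused;

    -- Once-off shrink; the second argument is the percentage of free space
    -- to leave in the files after the shrink (here 15%).
    DBCC SHRINKDATABASE (N'YourDB', 15);

    -- Then rebuild the indexes the shrink will have fragmented, table by table.
    ALTER INDEX ALL ON dbo.YourLargeTable REBUILD;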

    I do not plan to enable auto-shrink or add a shrink database task in my maintenance plan wizard. My maintenance plan is to take one full backup every Sunday and a transaction log backup twice a day during the week. A twice-daily transaction log backup is enough because the only real write activity in this database is a nightly process. During the day only reports are run, and no inserts, updates, or deletes are made.

    With that setup, I'd probably choose simple recovery, a full backup weekly, and a differential after the nightly load (or just a full backup after each nightly load).

    That would probably also keep the log file smaller.
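
    A minimal sketch of that setup, again with placeholder names and paths rather than anything from your environment:

    -- Simple recovery: the log truncates on checkpoint and no log backups are taken.
    ALTER DATABASE YourDB SET RECOVERY SIMPLE;

    -- Weekly full backup, e.g. Sunday night.
    BACKUP DATABASE YourDB TO DISK = N'D:\Backups\YourDB_full.bak' WITH INIT;

    -- Differential after each nightly load; a restore needs the last full plus the latest diff.
    BACKUP DATABASE YourDB TO DISK = N'D:\Backups\YourDB_diff.bak' WITH DIFFERENTIAL, INIT;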

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
    Thank you for sharing EXEC sp_spaceused. It showed me where my data is: 130 GB is in indexes alone. I double-checked all my indexes and a lot have AVG_FRAGMENTATION_IN_PERCENT greater than 10%, so now I have to research rebuilding these indexes online. It also showed me that my free space is actually only 9%.
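
    This is roughly what I'm looking at for checking fragmentation and rebuilding online (the table and index names below are just examples, and I understand ONLINE = ON needs Enterprise Edition):

    -- Fragmentation per index in the current database (LIMITED scan mode).
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id
       AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 10;

    -- Example online rebuild of a single index.
    ALTER INDEX IX_Example ON dbo.SomeLargeTable REBUILD WITH (ONLINE = ON);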

    By the way, thank you for sharing this with me. I will consider moving in this direction with my maintenance plan.

    GilaMonster (11/20/2011)

    With that setup, I'd probably choose simple recovery, a full backup weekly, and a differential after the nightly load (or just a full backup after each nightly load). That would probably also keep the log file smaller.

  • At only 9% free I wouldn't waste time shrinking. Little gain for lots of cost.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
    I have a slightly different scenario: SCOM informed me that the user database volume on one of our test environment SQL Servers is down to 8 GB of free space. Upon investigating, I discovered that the database is 100 GB larger than its counterpart on the production server.

    The database in question is 529 GB in total size, but sp_spaceused shows that 193 GB of that is unallocated space.

    I generally don't like to shrink databases, but in this case I like going to the SAN guys to extend the volume even less, anticipating the obvious question of why the database is so much larger in dev than in production.

    Is there any reason why I shouldn't run a one-off shrink and index defrag on this database in the above scenario? Is there anything I should watch out for?

    Thanks,

    Andre Ranieri

    Before you do a shrink and an index defrag, ask yourself what has to happen while the index rebuild is running. Work through in your own mind what space is used within the database, and how previously used space might get reused.

    If you still want to go ahead with the shrink and defrag, then fine, but do not be surprised if you end up with much the same used and free space as before.

    The question of why the database is larger in test than in production needs to be answered. Are you copying production to test, and if so, how do you deal with data protection issues?

    It is very seldom that the most effective way to test is with a copy of production data. While production data may have a wide range of values, most of it is not beneficial to testing and often just slows the test process down. It also will not contain the edge cases that you need to include in your testing.

    These issues cannot be solved overnight, but if you do not start planning how to deal with them, then buggy code will find its way to production and you will end up dedicating a few hundred GB more disk space to testing than is needed.

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
