Auto Shrink on live DB

  • Hello,

    On one of our customers' live servers there are both live and UAT databases. The latter is occasionally refreshed from the live DB at the customer's request, and all of this is done and maintained by the DBAs. When such a copy is generated, I am normally required to truncate several of our biggest tables (some over 100 MB), which are not needed for user acceptance testing, because the server's resources are limited. Once the tables are emptied, the DBAs shrink the DB to release the space.

    I would like to avoid a discussion of whether this is a good approach or not, but rather to ask about their reply to my suggestion to set Auto Shrink to true for the UAT DB only, to ensure that all the import/export tests will not create big transaction logs: "As DBA best practice we could not set Auto Shrink value true because its running on background and resource intensive". Is it correct that Auto Shrink consumes significantly more resources than keeping an extensive transaction log, please?
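    For illustration, the cleanup looks roughly like this (UAT_DB and the table names are placeholders, not our actual schema):

        USE UAT_DB;  -- placeholder database name
        GO

        -- Empty the big tables that UAT does not need; TRUNCATE is
        -- minimally logged, unlike DELETE.
        TRUNCATE TABLE dbo.AuditTrail;     -- placeholder table names
        TRUNCATE TABLE dbo.ImportStaging;
        GO

        -- Check whether Auto Shrink is currently on (1 = on, 0 = off).
        SELECT DATABASEPROPERTYEX('UAT_DB', 'IsAutoShrink') AS IsAutoShrink;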

  • Actually, auto shrink would cause greater transaction log growth, not less: shrinking the files is a logged activity.

    I'd just set it to simple recovery and leave auto shrink off. The DB will be faster, generate fewer logs, and use fewer resources.
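    As a sketch, assuming the UAT database is called UAT_DB:

        -- Simple recovery: log space is reused at checkpoints, so the log
        -- stays small without log backups.
        ALTER DATABASE UAT_DB SET RECOVERY SIMPLE;

        -- AUTO_SHRINK is off by default; shown here only to be explicit.
        ALTER DATABASE UAT_DB SET AUTO_SHRINK OFF;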

  • Yes - by all means do a one-off shrink after you've truncated all those tables (a sketch follows below), but don't set auto-shrink on, otherwise you'll be wasting resources shrinking and growing, shrinking and growing in perpetuity.

    John
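    A one-off shrink along those lines might look like this; UAT_DB and the logical file name UAT_DB_Data are placeholders:

        USE UAT_DB;  -- placeholder name
        GO
        -- Shrink the whole database once, leaving 10 percent free space.
        DBCC SHRINKDATABASE (UAT_DB, 10);
        GO
        -- Or target one data file (size in MB); logical file names are
        -- listed in sys.database_files.
        DBCC SHRINKFILE (UAT_DB_Data, 500);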

  • +1 for simple mode

    Thank you for the reply. I always set my test DBs to simple recovery mode, yet I had mistakenly thought that setting Auto Shrink to true would be a better option for a DB which will mainly be used for query tests. In which cases would one set Auto Shrink to true, please?

  • Autoshrink - never. I mean, I've struggled to think of a scenario where this makes sense. Paul Randal, who managed and wrote code for the storage engine, thinks the option should be removed.

    Maybe for client machines that run an Express edition and are really space-sensitive, but really, no. Just don't.

  • The only case I have ever seen where autoshrink MIGHT be reasonable is on a busy data warehouse, where extreme amounts of logging happen during the load, and then the database is done writing until the next day.

    One of my data warehouse servers is like this. It runs an incredibly complex job each day, with about 400 ETLs that use C#, stored procedures and native SSIS transforms to construct OLAP data sources from several remote, and generally very dirty, data sources. The job cleans the data up, transforms it, then stages it out before loading it into the data warehouse. That really blows the log up, even though only a small amount of data that has actually changed (still tens of thousands of rows) gets loaded; the job then rebuilds all the indexes and finally shrinks. The log files will typically be at least 10-12 times larger than the data file by the time it is done.

    More development time could probably reduce some of the logging, however. Even then, once I get some of my SQL environment consolidated and can reclaim some SSD, I'll probably just give the server more SSD and have the developer turn that shrink step off.
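    That final step could be a log-only shrink along these lines; DW_DB and the logical log name DW_Log are placeholders:

        USE DW_DB;  -- placeholder warehouse database name
        GO
        -- After the nightly load, shrink just the transaction log back to a
        -- working size (target in MB), rather than leaving Auto Shrink on.
        DBCC SHRINKFILE (DW_Log, 2048);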
