Database defragmentation and autogrowth settings

  • We have a maintenance plan for our SQL Server 2008 R2 Express instance.

    Every month we defragment the database if any table has a page count of more than 50 and average fragmentation of more than 20.

    If the database log size is greater than 2 MB, the recovery model is switched to SIMPLE, the log is shrunk, and the recovery model is then set back to FULL.

    If page_count > 50 and avg_fragmentation_in_percent > 30, the index is rebuilt (REBUILD).

    If page_count > 50 and avg_fragmentation_in_percent is between 5 and 30, the index is reorganized (REORGANIZE). A sketch of this check is shown below.
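
    Expressed as T-SQL, the check above would look roughly like this (a sketch only; the commented ALTER INDEX lines use placeholder table and index names):

    SELECT OBJECT_NAME(ips.object_id)           AS table_name,
           i.name                               AS index_name,
           ips.page_count,
           ips.avg_fragmentation_in_percent,
           CASE
               WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
               WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
               ELSE 'NONE'
           END                                  AS suggested_action
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.page_count > 50
      AND ips.avg_fragmentation_in_percent > 5
      AND ips.index_id > 0;   -- skip heaps

    -- The matching maintenance statements (placeholder names):
    -- ALTER INDEX [IX_SomeIndex] ON [dbo].[SomeTable] REBUILD;     -- fragmentation > 30
    -- ALTER INDEX [IX_SomeIndex] ON [dbo].[SomeTable] REORGANIZE;  -- fragmentation 5-30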

    This is what we have been doing so far. But we found that autogrowth events are resource intensive and should not happen repeatedly. Currently, autogrowth for every database is set to 1 MB for the mdf file and 10% for the ldf file, which is the default when creating a new database. We are planning to increase the autogrowth values depending on how much each database grows every day, but I want to know how many autogrowth events are ideal for a database. Should I set autogrowth so that it happens only once a day, week, or month? Please help me choose the autogrowth value for my database.

    There is another problem as well. The monthly defragmentation job also shrinks the files, so afterwards every database that was shrunk has to autogrow again as soon as new data is written to it. That means there will be many autogrowth events. Will this be a problem?

    Please tell me a solution.

  • The general suggestion for autogrowth seems to be: don't. Ideally, you either size your DBs large enough to encompass all reasonably possible data growth, or you monitor your DB size and, when it needs more room, manually grow the file during a maintenance window.

    As for the rate of growth, I think most will agree that a percentage is less than optimal. Better to set a fixed size in MB (example below).
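
    For example, assuming a database called MyDb whose logical file names are MyDb and MyDb_log (check sys.database_files for the real names; the increments here are only illustrative), a fixed growth size can be set with:

    ALTER DATABASE [MyDb] MODIFY FILE (NAME = N'MyDb',     FILEGROWTH = 64MB);  -- data file
    ALTER DATABASE [MyDb] MODIFY FILE (NAME = N'MyDb_log', FILEGROWTH = 32MB);  -- log file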

    The best suggestion I can make is to determine what works best in your environment. Can you monitor the free space in the DB (lots of queries on the 'net for doing this, just Google) or not? It may be better to leave autogrowth on, but tune the settings so the files grow by enough that they don't need to grow often, yet not by so much that the growth ties up the disk with I/O (slowing the DB response and aggravating the users). One such free-space query is sketched below.
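
    A simple version of that check, run in the context of the database you care about (data files only; sizes come back in 8 KB pages, so dividing by 128 gives MB):

    SELECT name                                            AS logical_name,
           size / 128                                      AS size_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128           AS used_mb,
           (size - FILEPROPERTY(name, 'SpaceUsed')) / 128  AS free_mb
    FROM sys.database_files
    WHERE type_desc = 'ROWS';   -- data files; see the log-space check further down for the ldf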

    Now, on shrinking the DB, the universal answer is: this should not be a "normal" thing. Doing it as a one-off because a metric ton of data was deleted and you need the space back is one thing (been there), but doing it weekly or monthly? No. Also, defragmenting the DB (I'm presuming you mean the indexes) won't shrink the DB. Truthfully, if you let your DB grow, it will eventually reach a size where it will stay, presuming users are both inserting and deleting information, with only slow growth after that.

    If you're shrinking your transaction log, that's also bad. Better to control the size of the log through regular transaction log backups (sample below). It sounds more like you're controlling the log by kicking it to SIMPLE recovery, then back, so you can shrink it. If you don't need point-in-time recovery, switch it to SIMPLE and leave it there, which would also help control the size of the log file.
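
    A regular log backup is just a scheduled statement along these lines (the database name and path are placeholders; Express has no SQL Server Agent, so you would schedule it with Windows Task Scheduler and sqlcmd, and in practice generate a unique file name per backup rather than overwrite):

    -- Run on a schedule (e.g. every 15-30 minutes) while the DB is in FULL recovery.
    BACKUP LOG [MyDb]
    TO DISK = N'D:\Backups\MyDb_log.trn'
    WITH INIT;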

    See this series: Stairway to Transaction Log Management in SQL Server

    Jason

  • How do I know what size to set for the log file? Is there any method by which I can work out the ideal size for my ldf file?

  • The rule of thumb would be to set it large enough that it doesn't need to grow, with a little headroom to handle "bursts" of transactions.

    It's the sort of thing where you either try to estimate a size at the start, then monitor it for a while, or let it grow and see where it "settles down."

    If you use Simple Recovery (no point-in-time recovery needed or wanted), the log should reach a size where it will stop growing. If you're using Full Recovery, then you need to make sure you're taking regular log backups, or it *WILL* grow until you run out of disk space (have had that happen to me with a SharePoint back-end SQL, thankfully it was after hours).
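
    An easy way to watch where the log settles is to capture the built-in log-space report every so often:

    -- Reports log size (MB) and the percentage currently in use
    -- for every database on the instance.
    DBCC SQLPERF(LOGSPACE);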

    Jason

  • Monitor your log file over the next month and do not shrink it. You should always set your log file to be the size it needs to be at its largest. Otherwise you run the risk of, in some cases, thousands of VLFs being created, which is bad for performance.

    I'm not sure if it's the best method, but if I inherit a system I monitor the growth of the log over a month, let it grow as big as it needs to, and take note of the size. Then I shrink the log file to remove all of the VLFs and manually extend the file back to that size (sketched below).
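
    A sketch of that sequence on 2008 R2 (the database name, logical log file name, and 4 GB target are placeholders; take a log backup first, as noted in the next post):

    -- DBCC LOGINFO returns one row per VLF in the current database,
    -- so the row count is the VLF count.
    DBCC LOGINFO;

    -- Once you know the size the log actually needs, shrink it once and grow it
    -- back in a single step so it is rebuilt with a sensible number of VLFs.
    DBCC SHRINKFILE (N'MyDb_log', 1);                      -- shrink to (near) minimum
    ALTER DATABASE [MyDb]
        MODIFY FILE (NAME = N'MyDb_log', SIZE = 4096MB);   -- regrow to the observed size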

  • My post was written with the understanding that you would take a backup before shrinking the log file 😉

  • If the VLF count is high (say, more than about 100), is shrinking necessary?
