Database growth calculations & Shrinking DBs

  • Does anyone have any recommendations on the best methods for figuring out proper database growth percentages (or MBs)?

    Lately it seems many of our databases are growing "extra large," ending up with 25%+ free space in them. Because of this, when we restore them down to our non-prod environments, which have limited space available, we're having to do massive shrink operations before we can use them in non-prod. So I'm trying to figure out a way around this to give the databases the space they need without giving them the entire ranch, if you know what I mean.

    We have a lot of systems (one vended). I don't want to fool around too much with production and end up causing issues for all our customers while I figure this out (I don't want to just do "trial and error" growth settings). Does anyone know of a decent monitoring tool or article that I can read up on?

    Given corporate oversight, I can't just purchase a third party tool. So it's better if there's something I can build myself. Any links and thoughts would be appreciated.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Don't use % growth 🙂

    This could well be causing the large amount of free space you're seeing. Index maintenance, archiving, and data removal can also leave large amounts of free space that you're not expecting.

    I use a PowerShell script (happy to share if you want to try it) to grab weekly db/file sizes and store them in a db table, but obviously you'd need a few weeks' data to start estimating growth rates. Suppose you could do daily if you needed some quick numbers.
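The capture side can also be sketched in plain T-SQL, assuming a logging table you create yourself (the table and column names below are illustrative, not from the script mentioned above):

```sql
-- Hypothetical logging table; names are illustrative.
CREATE TABLE dbo.FileSizeHistory
(
    CaptureDate  date    NOT NULL,
    DatabaseName sysname NOT NULL,
    LogicalName  sysname NOT NULL,
    FileType     tinyint NOT NULL,  -- 0 = data, 1 = log
    SizeMB       bigint  NOT NULL
);

-- Weekly snapshot: sys.master_files reports size in 8 KB pages.
INSERT INTO dbo.FileSizeHistory (CaptureDate, DatabaseName, LogicalName, FileType, SizeMB)
SELECT  CAST(GETDATE() AS date),
        DB_NAME(mf.database_id),
        mf.name,
        mf.type,
        mf.size * 8 / 1024
FROM    sys.master_files AS mf;
```

Scheduled weekly (or daily) via a SQL Agent job, this gives the same raw history the PowerShell approach would.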

    Of course, the actual answer is that autogrowth shouldn't happen at all: size your DBs to incorporate a suitable number of months' growth and review often, although that would conflict with your need to keep sizes down for non-prod.
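Switching a file from percent-based to fixed-increment growth is one ALTER DATABASE per file. A minimal sketch, where the database and logical file names are placeholders for your own:

```sql
-- 'MyDB' and 'MyDB_Data' are placeholders; 256 MB is an example increment.
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Data, FILEGROWTH = 256MB);

-- To find logical file names and current growth settings:
SELECT name,
       size * 8 / 1024 AS SizeMB,
       growth,
       is_percent_growth
FROM   MyDB.sys.database_files;
```

A fixed increment keeps growth events predictable as the file gets larger, whereas 10% of a 500 GB file is a very different event from 10% of a 5 GB file.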

    Gaz

  • Oh, forgot - autogrowth events can be found in the default trace, so you might be able to get some quick info on growth rates & frequency there, or set up your own server-side trace.
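Reading those events out of the default trace is a short query; event classes 92 and 93 are Data File Auto Grow and Log File Auto Grow respectively:

```sql
-- Locate the default trace file and pull autogrowth events from it.
DECLARE @path nvarchar(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT  te.name                  AS EventName,
        t.DatabaseName,
        t.FileName,
        t.StartTime,
        t.Duration / 1000        AS DurationMs,   -- Duration is in microseconds
        t.IntegerData * 8 / 1024 AS GrowthMB      -- IntegerData = pages grown
FROM    sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN    sys.trace_events AS te
        ON te.trace_event_id = t.EventClass
WHERE   t.EventClass IN (92, 93)                  -- data / log file auto grow
ORDER BY t.StartTime DESC;
```

Bear in mind the default trace rolls over, so it only holds recent history; frequent growth events there are a good sign the increments are too small.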

  • Gazareth (1/20/2015)


    I use a powershell script (happy to share if you want to try it) to grab weekly db/filesizes and store them in a db table but obviously you'd need a few week's data to start estimating growth rates. Suppose you could do daily if you needed some quick numbers.

    I would love to take a look at the script if you don't mind. It would also assist us with budgeting and end-of-server-life planning.


  • Gazareth (1/20/2015)


    Oh, forgot - autogrowth events can be found in the default trace...

    Which sounds familiar and makes me think this might have been the thing I was racking my brain for this morning.


  • Brandie Tarvin (1/20/2015)


    Does anyone have any recommendations on the best methods for figuring out proper database growth percentages (or MBs)?

    Lately it seems many of our databases are growing "extra large" ending up with 25% + free space in them. Because of this, when we restore them down to our non-prod environments, which have limited space available, we're having to do massive shrink operations before we can use them in non-prod. So I'm trying to figure out a way around this to give the databases the space they need without giving them the entire ranch, if you know what I mean.

    We have a lot of systems (one vended). I don't want to fool around too much with production and end up causing issues for all our customers while I figure this out (I don't want to just do "trial and error" growth settings). Does anyone know of a decent monitoring tool or article that I can read up on?

    Given corporate oversight, I can't just purchase a third party tool. So it's better if there's something I can build myself. Any links and thoughts would be appreciated.

    The large amount of "free space" is likely caused by the rebuilding of indexes. For any index over 128 extents (that's just 8MB), the old index is not dropped until the new index is in play and committed.

    One trick around that is to disable a non-clustered index before rebuilding it. Of course, that's an OFFLINE thing to do, but it will also run faster and do a better job of defragging than rebuilding an NCI online. You also have to be careful to either not disable UNIQUE indexes, because they are likely the target of an FK, or be prepared to also rebuild the FKs. And, of course, you might want to shift to the BULK_LOGGED recovery model during index rebuilds so the log file doesn't explode. REORGANIZE is always fully logged no matter the recovery model; REBUILD is normally minimally logged in BULK_LOGGED or SIMPLE (don't go there), but it's fully logged in the FULL recovery model.

    And don't ever disable a clustered index because that will make the whole table unavailable instead of just some index.
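The disable-then-rebuild trick described above looks like this in T-SQL; the database, table, and index names are placeholders:

```sql
-- Placeholder names throughout. Disabling takes the index offline,
-- and never disable the clustered index (that takes the table offline).
ALTER INDEX IX_MyTable_Col1 ON dbo.MyTable DISABLE;

-- Optionally switch recovery model first so the rebuild is minimally logged:
-- ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;

-- Rebuilding a disabled index re-enables it.
ALTER INDEX IX_MyTable_Col1 ON dbo.MyTable REBUILD;

-- ALTER DATABASE MyDB SET RECOVERY FULL;
-- ...and take a log backup afterwards to restore the log chain's point-in-time coverage.
```

The space saving comes from the disabled index releasing its pages before the rebuild, so old and new copies never coexist.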

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Brandie Tarvin (1/20/2015)


    Gazareth (1/20/2015)


    I use a powershell script (happy to share if you want to try it) to grab weekly db/filesizes and store them in a db table but obviously you'd need a few week's data to start estimating growth rates. Suppose you could do daily if you needed some quick numbers.

    I would love to take a look at the script if you don't mind. It would also assist us around budget / end of server life time.

    Attached - I've added some comments so hopefully it's vaguely understandable! Feel free to ask any questions.

    Once the data is stored in the database, I had a proc that would work out past/current growth and predict the next 12 months. Unfortunately I don't have that to hand. You could export to Excel/CSV if there's no scope for using a db.

    Not quick to set up I'm afraid, but mighty useful.
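A rough stand-in for that prediction proc could be a simple linear extrapolation over the captured history. This sketch assumes a hypothetical history table `dbo.FileSizeHistory(CaptureDate, DatabaseName, SizeMB)` populated weekly (names are illustrative, and this is not the original proc):

```sql
-- Simple linear projection per database: (latest - earliest) / days elapsed,
-- extended 12 months forward. Assumes size grows over the sampled window.
WITH perdb AS
(
    SELECT  DatabaseName,
            MIN(CaptureDate) AS FirstDate,
            MAX(CaptureDate) AS LastDate,
            MIN(SizeMB)      AS FirstMB,   -- rough: assumes monotonic growth
            MAX(SizeMB)      AS LastMB
    FROM    dbo.FileSizeHistory
    GROUP BY DatabaseName
)
SELECT  DatabaseName,
        LastMB AS CurrentMB,
        (LastMB - FirstMB) * 1.0
            / NULLIF(DATEDIFF(day, FirstDate, LastDate), 0)        AS MBPerDay,
        LastMB + 365.0 * (LastMB - FirstMB)
            / NULLIF(DATEDIFF(day, FirstDate, LastDate), 0)        AS ProjectedMBIn12Months
FROM    perdb;
```

The 12-month figure then feeds directly into both the initial-sizing discussion above and capacity budgeting.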
