Managing Disk Space

  • Comments posted to this topic are about the item Managing Disk Space

  • Steve,

    I am guessing you have a relatively small enterprise where you can manage growth manually on a periodic basis. In our organisation we have 100+ SQL instances comprising 300+ databases, so we have become somewhat reliant on autogrow to manage our growth, with disk space monitoring alerting us when space is getting tight.

    I suspect we could target the top 10-20 databases and manage their growth more proactively, as you do.

    Regards

    Gary

  • I guess I've never had to manage a really high-transaction environment, so I've always allowed autogrow to manage database size; the performance hit was always acceptable to me. In recent years, though, I have become more proactive about minimizing autogrow by properly sizing my database and transaction log to start with and setting the autogrow increment to a size that minimizes the number of autogrowth events.
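
    For example (the database name, file names, and sizes here are made up, not a recommendation):

        -- Pre-size the files and use a fixed growth increment instead of
        -- the 10% default, so growth events are predictable and infrequent.
        ALTER DATABASE MyDB
        MODIFY FILE (NAME = MyDB_Data, SIZE = 10240MB, FILEGROWTH = 512MB);

        ALTER DATABASE MyDB
        MODIFY FILE (NAME = MyDB_Log, SIZE = 2048MB, FILEGROWTH = 256MB);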

  • A couple of years ago, we suffered a big performance hit any time a particular database performed an autogrow. I implemented a report that tells me each day whether any databases are approaching the autogrow point. If so, I increase the size.

    It's just part of my daily routine and it has worked well.

  • Scott,

    Care to share the details of your report with a newbie?

    Do you use a script?

    Thanks

    Mike

  • We auto-grow our smaller databases, but for our larger ones we monitor space and grow them manually when necessary. They were initially set to automatic, but we found that the performance hits were not acceptable to our users. I've heard that they've done a much better job with auto-growth in 2005. Is this true? We're bound to 2000 by our vendor app. 😛

  • Scott,

    I'm assuming the database was quite large if you were experiencing such performance issues across the enterprise. I'm also assuming your data files all reside on the same drive(s)?

    For smaller databases I have no problem allowing them to autogrow at the default rate of 10%. It really is situational with each database, and oftentimes you will have to do some historical analysis to determine the best way to manage your growth options. There is no silver bullet here. The idea is not to over-allocate (wasting resources), but also not to under-allocate, which results in frequent autogrowths (and the performance issues noted by others).

    We've ponied up the money where I work for some great monitoring tools. Quest Spotlight is great for real-time monitoring of potential space issues: it will warn you in advance how many growths you have remaining for the databases on a particular drive, and it can also warn when drives drop below a free-space threshold you configure.

    We also use Quest's Capacity Manager, which will do the forecasting for you based on recent growth rates.

    If you spend any significant amount of time doing this yourself, you should be able to justify the cost of these tools. They save us tons of time.

  • Mike -

    My daily report is a work in progress. 98% of the report comes from ideas that I have received right here on SQLServerCentral.com. The company that I work for has always been more open to spending time developing a solution than purchasing one.

    So, each time that my development group (or I) find a need, I look for solutions here.

    I took a quick peek inside my sproc to see where I found the code, but for some reason the source comment wasn't there.

    I think that this article by Steve Jones was probably the basis for my code:

    http://www.sqlservercentral.com/articles/Administering/20010422115756/86/

    Basically, I just log the database size and the amount of space used for data, then calculate the free space left. If it is less than 5%, the database shows up on my report. I probably don't need to watch the smaller databases, since they don't have the same impact when an autogrow occurs, but the report includes them all.
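
    The core of it looks something like this (a sketch, not my actual sproc; dbo.DatabaseSizeLog is a table you would create yourself):

        -- Snapshot size and data-space-used for every database.
        -- sp_MSforeachdb is undocumented but widely used; status & 64
        -- filters log files out of sysfiles.
        EXEC sp_MSforeachdb '
        USE [?];
        INSERT INTO master.dbo.DatabaseSizeLog (log_date, db_name, size_mb, used_mb)
        SELECT GETDATE(), DB_NAME(),
               SUM(size) / 128,
               SUM(FILEPROPERTY(name, ''SpaceUsed'')) / 128
        FROM sysfiles
        WHERE (status & 64) = 0;';

        -- Anything with less than 5% free shows up on the report.
        SELECT db_name, size_mb, used_mb
        FROM master.dbo.DatabaseSizeLog
        WHERE log_date >= DATEADD(hour, -1, GETDATE())
          AND (size_mb - used_mb) < 0.05 * size_mb;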

    Scott

  • One of my customers is a food manufacturer. Frozen meat mostly. Four items in a box and each box is tracked. About 12K boxes per day per plant. Then there are the seasonal spurts where the calculations go right out the window. They add a couple of new products, or product variants, a month. A new user or two every 90 days or so.

    Having a good handle on the business processes is a really good starting point for growth estimates. Where are the new data elements (the things that generate rows) coming from, and at what rate? You can then do the math on row sizes. Then observe the growth in test and see if your predictions are close. Then monitor production to see if you are still right.
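
    To make that concrete with made-up numbers: 12,000 boxes a day at, say, 200 bytes a row comes to about 2.4 MB of new data a day, or roughly 70 MB a month per plant, before indexes. If test shows something very different, your row-size assumptions are off.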

    Now, as to log sizes: they produce product according to manufacturing schedules, and there is a production counter column in each schedule. Every item produced gets weighed, marked, and counted. That's a few hundred thousand updates every day, all in the SQL log space.

    Another customer is a food supplier to restaurants and industrial kitchens. Those pies and chips come from somewhere, you know. Tons of food get received all morning long. Tons get shipped out every night. Every box, in or out, gets accounted for. These blokes also want every move that every fork-truck makes tracked and recorded. That's not just the ins and outs but also the restacking and staging.

    ATBCharles Kincaid

  • I used to do something probably like what Scott does, and I had hundreds of databases (over a hundred servers, actually) to monitor. The report was generated for auditing purposes (ISO 9000), but the only things I checked were the flagged items: backups not done, space low, config changes, etc.

  • Thanks so much Scott and Steve!

    I'll take a look at Steve's article.

    And a special thanks to SQLServerCentral and all its contributors, absolutely the best resource I've seen for SQL DBAs.

    Mike

  • From my experience, the best approach, as previously stated in this forum, is to monitor the system with a good tool that alerts you when the server is overloaded or running low on free space.

    In general I've seen that the default configuration in SQL Server (10% growth) is enough, but for me the worst part is what to do with the transaction log, mainly because after a few weeks it grows dramatically (all our databases use the full recovery model). So I decided to shrink it weekly, while of course backing up the databases daily.

    Speaking of Steve's article, after seeing the comment "how do I shrink a database (you shouldn't)": is it good practice to shrink the transaction log? To reduce the performance impact I shrink it to a default size (not the minimum), but of course I need free space on my server, so I have to do it sooner or later...

  • It is never a good idea to shrink either the database or the transaction log on a production system. You manage the size of the transaction log by running frequent log backups. The frequency of the transaction log backups will be determined by the business requirements (how much data can you afford to lose in a disaster?).

    For production systems, I generally run transaction log backups every fifteen minutes. Even with that, I have a transaction log file that is 50GB with a database that is 250GB. The reason the log file is so large is because of weekly batch processes that contain very large transactions. So, I keep the log file at the largest size needed and leave it alone.
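
    The backup itself is a one-liner you schedule (through SQL Agent or whatever you use); the database name and path here are just placeholders:

        -- Transaction log backup; run every fifteen minutes on a schedule.
        BACKUP LOG [MyDatabase]
        TO DISK = N'X:\LogBackups\MyDatabase_log.trn';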

    I see a lot of people relying on autogrowth, and that is fine - if you never rebuild or reorganize any indexes in your database. If you do, then you need at least 20% free space available in the data file(s) to allow SQL Server the space to rebuild the indexes (well, actually - you only need enough space available for the largest table being rebuilt, but 20% is a good round number).

    SQL Server 2005 helps with this a bit - since you can use tempdb to rebuild, but you still need space available in the data file, just not necessarily as much.
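
    To see how much headroom a rebuild actually needs, you can check the largest table's footprint; something like this works (it reads the 2000-era sysindexes view, whose numbers can drift - see the note below):

        -- Rough footprint of the largest user table: the free space
        -- a rebuild of that table will need, give or take.
        SELECT TOP 1 OBJECT_NAME(id) AS table_name,
               reserved / 128 AS reserved_mb
        FROM sysindexes
        WHERE indid IN (0, 1)
          AND OBJECTPROPERTY(id, 'IsUserTable') = 1
        ORDER BY reserved DESC;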

    I built a script, run in PowerShell, that uses SMO to access all of my servers and databases (160+) and pulls down a lot of information, including DataSpaceUsage, IndexSpaceUsage, Size, and SpaceAvailable (which is actually wrong). I use this information to generate reports that show how much space is available in the data files for each database.
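
    If you don't want to go through SMO, the same numbers are available straight from T-SQL, run in the context of each database:

        -- Size vs. used space per file; /128 converts 8 KB pages to MB.
        SELECT name,
               size / 128 AS size_mb,
               FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
               (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
        FROM sysfiles;  -- sys.database_files on 2005 and later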

    Jeff

    Note: DataSpaceUsage and IndexSpaceUsage can be wrong on SQL Server 2000, but they generally should be updated and maintained appropriately on SQL Server 2005. In my script, I run an update usage if I detect that the numbers are incorrect, but I can only detect that when the numbers go negative, which is not always the case. In general, this should not be a problem, because all we are using the numbers for is trending - and as long as you can see the trend, you can anticipate the growth.
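
    The fix when the numbers have drifted is a one-liner (0 means the current database):

        -- Correct the page and row counts in the catalog; mainly a 2000 issue.
        DBCC UPDATEUSAGE (0);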

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • We have 150 SQL Servers & 3000+ databases. Autogrow is the standard & we reindex weekly at minimum. We use home-grown code & batch jobs to retrieve database size statistics daily & store them in our own database. One of our DBAs wrote a web app that displays this info graphically, making it easy to see past growth over the last several years & determine future requirements.

    Another job runs twice daily, checking free space on all disks on all SQL Servers, and sends us a warning email if any drive has less than 1 GB free.
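
    The check itself can be as simple as this (the 1 GB threshold matches ours; the warning email would hang off xp_sendmail or the job's own alert):

        -- xp_fixeddrives returns the free megabytes on each local drive.
        CREATE TABLE #drives (drive CHAR(1), mb_free INT);
        INSERT INTO #drives EXEC master.dbo.xp_fixeddrives;

        -- Anything under 1 GB free triggers the warning.
        SELECT drive, mb_free
        FROM #drives
        WHERE mb_free < 1024;

        DROP TABLE #drives;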

  • jeff.williams3188 (5/4/2008)


    ...It is never a good idea to shrink either the database or the transaction log on a production system. You manage the size of the transaction log by running frequent log backups. The frequency of the transaction log backups will be determined by the business requirements (how much data can you afford to lose in a disaster?). ...

    Heartily seconded! For our ERP system I do transaction log backups every ten minutes, then copy the backup to my SAN box. I do differential database backups at noon and copy those to the SAN box as well, so I always have a copy on the server itself and on my SAN server. The servers are backed up nightly, the SAN box multiple times per day.
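
    For anyone new to it, the differential is the same BACKUP command with one extra option (the database name and path here are made up):

        -- Differential backup: only the pages changed since the last full backup.
        BACKUP DATABASE [ERP]
        TO DISK = N'X:\Backups\ERP_diff.bak'
        WITH DIFFERENTIAL;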

    Like Jeff says regarding the log backup interval: how much data are you willing to lose? I've got users in probably ten locations throughout the city, and I think ten minutes is an acceptable amount of data loss. Our ERP manager was surprised: the system did t-log backups hourly when I started here, and that's flat-out unacceptable.

    We do have the advantage of not being a 24/7 shop (city gov't), and we also have enough disk space that we can back up any given database without space issues. That said, our network guy went before the city council just hours ago to ask for two whopping big storage arrays.

    Currently I let all of my databases autogrow. I was fanatical about tracking database growth back in the v6.5 days, but our drives were so much smaller then that you had to. Disk is cheap, but you still shouldn't waste it too frivolously.

    -----
    Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it. --Samuel Johnson

