Inherited HUGE db with limited disk space

  • I'm the new DBA on a project with an enormous database (850 GB data, 20 GB log) on a LUN with only 1.75 TB available. I've been told new storage may be available in March 2016(!). The problem is that the db has started growing 80 GB per day. I have full backups running every day and transaction log backups running every 3 hours. This db is hit constantly by four sets of data import feeds that cannot be taken offline for more than 2 hours, and I also have 30 users performing large queries against the same db. I should also mention that the feeds fire triggers that insert, update, and delete from tables that average 35 GB each. The db has only 1 data file (like I said, I didn't design it, I inherited it).

    The statistics for this db show 67% free in the db file, so I'm unsure why it continues to expand. We need to reclaim this space (as mentioned, no expansion is possible for several months) or the instance will crash due to lack of disk space.

    I know that shrinking the file is not the best option, but is it my only option given the circumstances? Any thoughts (other than "get more space")?

    v/r

    Mary

  • Is it auto growing? What are the auto growth settings? What's running to cause the growth event?

    Sounds like there may be staging tables that inflate, hit the growth threshold, then deflate via truncating.

    There's probably a lot of space that can be reclaimed by some process/structure redesigns. I've seen several databases go from the 1 TB range down to 40 GB just by properly normalizing the data.

  • Understood, but as I stated, we have four data import feeds (no staging tables) loading data directly into the tables mentioned (30+ GB each). The import fires the triggers that massage the data (insert, delete, etc.), and we also have a rolling delete that runs every 10 minutes to remove data from three tables (30+ GB each) for a specified timeframe (i.e., the data is no longer relevant). Bottom line: an enormous amount of data moves in and out of the db every hour.

    The data is normalized as best as possible -- no restructure is authorized until 2016.

    M.

  • Are any of the objects compressed? Compression might buy you some breathing room...


    And then again, I might be wrong ...
    David Webb
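
    If you're on Enterprise Edition, you can check whether compression is worth it before touching anything. A minimal sketch, assuming a placeholder schema/table name (swap in one of the 35 GB tables):

    ```sql
    -- Estimate what PAGE compression would save on one of the large tables.
    -- 'dbo' / 'YourLargeTable' are placeholders, not names from this thread.
    EXEC sys.sp_estimate_data_compression_savings
        @schema_name      = 'dbo',
        @object_name      = 'YourLargeTable',
        @index_id         = NULL,    -- NULL = all indexes on the table
        @partition_number = NULL,    -- NULL = all partitions
        @data_compression = 'PAGE';  -- or 'ROW' for a cheaper-CPU option
    ```

    It samples the table into tempdb and reports current vs. estimated compressed size, so it's read-only with respect to the real data.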

  • Compression is an excellent feature, but it's Enterprise Edition only.

    If you:

    - don't have / can't get Enterprise Edition
    - can't change any code
    - can't add any storage
    - can't archive or delete old data

    your options are limited. I think you need to detail the problem and the options available, present them to someone who can make decisions, and then work with them to determine the urgency and the resolution.

    Get it all in writing.

  • So on the autogrowth stuff.

    You can check for growth events really easily through the GUI. There are a million other ways to do this, but this is pretty easy to walk through.

    In Management Studio open the server, then expand the Databases node.

    Right-click the database and select Reports > Standard Reports > Disk Usage.

    It should list your auto growth events. What's that look like?

    Also right-click your database and select Properties.

    Then click the Files page; that lists your autogrowth settings. What are they set to?
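
    If you'd rather skip the GUI, the same information is in the default trace (enabled out of the box). A sketch, assuming the default trace hasn't been turned off:

    ```sql
    -- Find the default trace file, then pull the auto-grow events from it.
    DECLARE @trace nvarchar(260);

    SELECT @trace = REVERSE(SUBSTRING(REVERSE(path),
                     CHARINDEX(N'\', REVERSE(path)), 260)) + N'log.trc'
    FROM sys.traces
    WHERE is_default = 1;

    SELECT te.name                   AS event_name,   -- Data/Log File Auto Grow
           t.DatabaseName,
           t.FileName,
           t.StartTime,
           t.IntegerData * 8 / 1024  AS growth_mb,    -- IntegerData is in 8 KB pages
           t.Duration / 1000         AS duration_ms   -- Duration is in microseconds
    FROM sys.fn_trace_gettable(@trace, DEFAULT) AS t
    JOIN sys.trace_events AS te
      ON t.EventClass = te.trace_event_id
    WHERE te.name IN (N'Data File Auto Grow', N'Log File Auto Grow')
    ORDER BY t.StartTime DESC;
    ```

    The timestamps will tell you whether the growth lines up with the import feeds or the trigger activity.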

  • Other things to look at are to:

    1. increase the frequency of the log backups to at least every 15 minutes and automate moving the log backups to another server. This should allow you to reduce the size of the log file and free up some space.

    2. work out which extremely large transactions cause the DB to grow even though there is 67% free space. If these could be done in smaller batches, it should stop the DB from growing and might even allow you to safely reduce the size of the main DB.
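
    On point 2, the usual pattern is to break one huge DELETE (or UPDATE) into a loop of small batches so no single transaction balloons the log. A sketch, with placeholder table/column names:

    ```sql
    -- Batched delete: each iteration is its own small transaction, so the
    -- log can be reused between batches instead of growing to hold one
    -- giant delete. dbo.YourLargeTable / RelevantUntil are placeholders.
    DECLARE @rows int = 1;

    WHILE @rows > 0
    BEGIN
        DELETE TOP (10000)
        FROM dbo.YourLargeTable
        WHERE RelevantUntil < DATEADD(HOUR, -2, SYSDATETIME());

        SET @rows = @@ROWCOUNT;  -- loop ends when nothing left to delete
    END;
    ```

    Tune the batch size to whatever keeps lock duration and log use acceptable; 10,000 here is just an illustrative starting point.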

  • The autogrowth of the mdf is 1 MB -- not out of the ordinary. The log backups already go to a different LUN. The data imports are the processes that take the majority of the resources. We move, on average, 200 GB each day, with a rolling delete that executes every 10 minutes to stay ahead of the curve (approx. 175 GB deleted). Notes from my predecessor indicate that the log backups were set to every 3 hours due to the drain on the system and deadlocks -- he had tried every hour, but it impacted the data migration too much. Same for data compression -- too much of a drain.

    What I was hoping to find was an alternative to DBCC SHRINKFILE on this monster (since that will take longer than the "acceptable" downtime for the users). But it appears that may be my only option to keep it going until we can get extra storage in the spring.

    I do appreciate the input!
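
    One lighter-weight option worth knowing about before a full shrink: `TRUNCATEONLY` only releases free space at the *end* of the file back to the OS, without moving any pages, so it's fast and doesn't fragment indexes. It may reclaim little if the free space isn't at the tail of the file, but it costs almost nothing to try. Database and logical file names below are placeholders:

    ```sql
    -- Release unused space at the end of the data file without relocating
    -- pages. Much cheaper than a full SHRINKFILE; no index fragmentation.
    USE YourBigDb;
    DBCC SHRINKFILE (N'YourBigDb_Data', TRUNCATEONLY);
    ```

    Check the logical file name first with `SELECT name, type_desc FROM sys.database_files;`.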

  • Well, the thing about shrinking, even if it worked really well (which it doesn't), is that the file will end up growing to the same size again. Only now you also pay the disk overhead of the growth events, and you run the same risk as if you hadn't shrunk: it's going to grow to the size it needs to be.

  • Don't shrink it. It'll just make matters worse... MUCH worse, and you'll still have the same disk space problem.

    What's your maintenance for indexes, and do you have any HEAP tables that are being updated/deleted from?
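
    The heap question matters here because deletes from a heap can leave empty pages allocated to the table (they aren't always deallocated the way they are under a clustered index), which quietly eats space in a delete-heavy workload like this. A quick way to list the heaps:

    ```sql
    -- List all heaps (tables with no clustered index) in the current db.
    -- index_id = 0 in sys.indexes means the table is stored as a heap.
    SELECT s.name AS schema_name,
           t.name AS table_name
    FROM sys.tables  AS t
    JOIN sys.schemas AS s ON t.schema_id  = s.schema_id
    JOIN sys.indexes AS i ON i.object_id  = t.object_id
    WHERE i.index_id = 0
    ORDER BY s.name, t.name;
    ```

    If any of the big churn tables show up, adding a clustered index (or rebuilding the heap) can recover the trapped space.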

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
