The Case of the Shrinking CFO, err Database

  • Comments posted to this topic are about the item The Case of the Shrinking CFO, err Database

  • Thanks for sharing.

    "Create a new file group large enough to contain 90% of the original file (128 GB * 0.90), or 13GB"

    10%, I guess: 128 GB * 0.10 is roughly 13 GB.

  • Interesting technique, this might prove useful thanks.

    I feel that the number of situations in which this will help is quite limited though... How many people have 120 GB databases that really only need 20 GB of space?

    In many cases, databases have auto-grown to such sizes, and analysis of the growth is essential. If I may be so bold, you would do well to highlight the need for an in-depth analysis of the database's size before recommending people jump in and shrink things, as this is, in my experience, only occasionally the best course 🙂

    If the database was over-provisioned, it would be using the provisioned space until it needed to grow, so sysadmins would not have noticed space 'running out' because it wasn't growing (unless it was a shared server and other databases' growth was causing the problem; in that case, picking a database at random to shrink as a sort of sacrificial lamb is a dangerous proposition, since the developers/vendors may well have had a damned good reason to ask for it to be provisioned at that size).

    So the question people should be asking, before they dive in and shrink what they think is an over-provisioned database, is: why has a database file with only a few gigabytes in it grown to 125,682 MB? (OK, I understand that in this case, if the DBA literally specified "128GB", it might have created an odd-looking size like that when displayed in MB.)

    If this file has actually grown to this size in spite of only containing a small amount of data, we need to ask if it is still growing and if it is, why?

    In my experience, stuff like this is often caused by something like analytics stored procedures that create local tables instead of using tempdb when generating reports and metrics. As the end-of-day, end-of-week, or end-of-month reports run, loads of new tables (and probably indexes) are created in the PRIMARY filegroup by some sprocs; data is mashed, reports are created, and then the new tables are 'tidied up' and deleted afterwards.

    So some DBAs will go to all this effort to shrink the database, and then on the following day, week, or month, the database will grow right back up to 125,000+ MB, and they will be back to square one, having wasted time and slowed down the reports because the database had to auto-grow again to accommodate them.
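    Before going to that effort, a quick look at what is actually consuming the space can settle the question. A minimal sketch (run in the database in question; assumes SQL Server 2005 or later):

    ```sql
    -- Top space consumers, so you can see whether a few report/work tables
    -- account for the growth (is_ms_shipped filters out system objects)
    SELECT TOP (20)
        s.name AS schema_name,
        o.name AS table_name,
        SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.objects AS o ON o.object_id = ps.object_id
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE o.is_ms_shipped = 0
    GROUP BY s.name, o.name
    ORDER BY reserved_mb DESC;
    ```

    Run this just before and just after the report window and you will see whether transient report tables are driving the growth.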

    I apologise if this seems overly critical; that is not the intent. It's an interesting technique for shrinking a database file while reducing the impact on a production environment, and one I may well use in future. I'm just concerned that a database or file shrink is not very often an advisable practice!

    Best regards,

    Ben


    ^ That's me!

    ----------------------------------------
    01010111011010000110000101110100 01100001 0110001101101111011011010111000001101100011001010111010001100101 01110100011010010110110101100101 011101110110000101110011011101000110010101110010
    ----------------------------------------

  • I have a better question.

    What if you go through all of this trouble and still need more space?

    Let's say you know what you are doing, and before putting in the request for new drive(s) you went through all the hoopla of cleaning out the basement 🙂 and attic, and the kitchen sink, and yet you still need more space.

    Yet you are told by your CFO etc. that there's no way you have the budget. Then what?

  • Vlad-207446 (8/17/2015)


    I have a better question.

    What if you go through all of this trouble and still need more space?

    Let's say you know what you are doing, and before putting in the request for new drive(s) you went through all the hoopla of cleaning out the basement 🙂 and attic, and the kitchen sink, and yet you still need more space.

    Yet you are told by your CFO etc. that there's no way you have the budget. Then what?

    Good one 🙂

    I'd ask him which database he wants me to take offline and delete 😉

    Ben


  • Vlad-207446 (8/17/2015)


    I have a better question.

    What if you go through all of this trouble and still need more space?

    Let's say you know what you are doing, and before putting in the request for new drive(s) you went through all the hoopla of cleaning out the basement 🙂 and attic, and the kitchen sink, and yet you still need more space.

    Yet you are told by your CFO etc. that there's no way you have the budget. Then what?

    You explain the issue and provide evidence and facts to back your point up. Don't be snobbish or upset about it.

    You acknowledge the cost requirements, but diminish them with your tone and word choices. Make cost comparisons to things that are not of critical business continuity.

    Finally, point out that the business would benefit and continue to succeed if more data storage was purchased.

    Present the facts.

    Acknowledge and diminish.

    Offer an amicable solution.

    If that fails, hold them hostage with an ultimatum. "If we fail to provide this extra storage, our LOB application will cease to function and we'll all be laid off."

  • Vlad-207446 (8/17/2015)


    I have a better question.

    What if you go through all of this trouble and still need more space?

    Let's say you know what you are doing, and before putting in the request for new drive(s) you went through all the hoopla of cleaning out the basement 🙂 and attic, and the kitchen sink, and yet you still need more space.

    Yet you are told by your CFO etc. that there's no way you have the budget. Then what?

    Also, what happens if you do squeeze 'enough' free space out of the system?

    Six months down the line, when you've run out of space again, this time with no quick wins, the CFO says: "No, we don't have budget for a new SAN. You should have noticed this six months ago!"

    Or, if he knows about what happened six months ago: "No, I'm not buying you any more computer toys. You managed to deal with it last time, so you can make do again. Just do that shrinky thing and make some space."

    There's really no substitute for clear and open communication with the stakeholders right from the outset of a project.

    Ben


  • Useful technique that I have used myself, even to re-org a DB. One does have to question why there is this over-allocation of space in the DBs to begin with...

  • Not sure about this...

    If there's less than 10% free on the disk, is there even space for a new 13 GB data file?

    "No business disruption" is specified, but then the solution is to copy entire tables out into others?

    This will result in a table lock (significantly more contention than the shrink command) and possibly missing or incorrect data unless the table is quiescent between being loaded and renamed. You will also need to handle any foreign key constraints referencing the table, and deal with a load of plan recompilations after the rename.

    "running shrinkfile without moving tables can result in long running shrink statement which holds locks"

    As opposed to long-running insert statements, which hold locks? SHRINKFILE is designed to run transparently, with the possible exception of an EMPTYFILE operation.

    "This would have run for hours and caused serious production contention if the tables and indexes were moved to a new filegroup"

    But that's exactly what you have done?

    I do agree with using shrinkfile over shrinkdatabase and leaving suitable room for growth in your target file size though.
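    For completeness, a sketch of that more conservative route: check free space per file first, then shrink a single file to a target that leaves growth room (the file name and the 20 GB target below are hypothetical):

    ```sql
    -- Free vs. used space per data file in the current database
    SELECT name,
           size / 128.0 AS size_mb,
           size / 128.0 - FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS free_mb
    FROM sys.database_files;

    -- Shrink one file rather than the whole database; target size is in MB
    DBCC SHRINKFILE (N'MyDatabase_Data', 20480);
    ```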

  • As pointed out above, using SELECT INTO will not copy over any of your indexes, keys, or permissions. You'll need to capture all of them and recreate them on the new tables. You also need to make sure that table inserts/updates are not being performed between your table copy and the rename; otherwise, you've lost data.

    Honestly, I would be very hesitant to run any process that drops all of my tables without very good error handling. You'd want to confirm that the new table was created successfully and that the record counts match before dropping the originals. This whole process is not something you should run during production hours, nor without extensive testing. A typo in your scripting could have you drop a table that you never made a copy of. Very risky.
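    A sketch of that kind of safety check, assuming hypothetical table names (dbo.Orders and its copy dbo.Orders_New):

    ```sql
    -- Only swap the tables once the row counts agree; keep the original until verified
    IF (SELECT COUNT_BIG(*) FROM dbo.Orders_New) = (SELECT COUNT_BIG(*) FROM dbo.Orders)
    BEGIN
        EXEC sp_rename 'dbo.Orders', 'Orders_Old';
        EXEC sp_rename 'dbo.Orders_New', 'Orders';
        -- DROP TABLE dbo.Orders_Old;  -- only after the application has been verified
    END
    ELSE
        RAISERROR('Row counts do not match; aborting swap.', 16, 1);
    ```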

    MWise

  • MWise (8/17/2015)


    nor permissions

    Great point, missed that one.

  • It's interesting to find places where space isn't really needed, to buy some time. Free space in any read-only DB is one possibility, as are indexes with a fill factor set below a given threshold, and leftover audit/history/backup/just-in-case tables you can usually find if you look.
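    The fill-factor candidates can be found with a catalog query; a hedged sketch (the 80% threshold is an arbitrary example):

    ```sql
    -- Indexes built with a low fill factor (0 means the server default of 100)
    SELECT o.name AS table_name, i.name AS index_name, i.fill_factor
    FROM sys.indexes AS i
    JOIN sys.objects AS o ON o.object_id = i.object_id
    WHERE i.fill_factor > 0
      AND i.fill_factor < 80
      AND o.is_ms_shipped = 0;
    ```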

  • As usual, from my 42 years of experience, I have seen things like this many times. The first thing you need to do is exactly what this process does: get the figures together in black and white. The second thing you need to do is regularly send this information, in a pure and simple, understandable format, to the CFO, the COO, the president, and all interested and affected parties, a process otherwise known as CYA.

    This is critical to making it obvious that you understand the business effects and risks of not handling this issue. I know this sounds like the 'I told you so' syndrome, and that is exactly what it is. You need all these people in the boat with you so you don't go down alone. Captain last one in? H-e-l-l no.

    Rick
    Disaster Recovery = Backup ( Backup ( Your Backup ) )

  • "To make matters worse, the CTO has stated there must be no business impact on whatever solution is proposed."

    How exactly is making a copy, with a rename afterwards, not a business impact?

  • You may also free up some space with a more aggressive archiving approach to your data, or by introducing archiving in the first place.

    I've seen a number of legacy systems keep up to 10 years' worth of data. This may take a bit longer, but it will make the system run faster with less space required. I used to start with the question of whether we needed invoices from the previous decade readily available. The reality is that an invoice is often not needed after 15 months (comparing this quarter to last quarter).

    These solutions will take more time to put in place than the one you proposed, but they will help with long-term strategy.
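    The 15-month cutoff above could look something like this (table, schema, and column names are hypothetical; on large tables, batch the delete):

    ```sql
    -- Move old invoices to an archive table, then remove them from the live table
    INSERT INTO archive.Invoices
    SELECT * FROM dbo.Invoices
    WHERE InvoiceDate < DATEADD(MONTH, -15, GETDATE());

    DELETE FROM dbo.Invoices
    WHERE InvoiceDate < DATEADD(MONTH, -15, GETDATE());
    ```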

