Any way to speed up a shrink DB operation?

  • I'm going to be taking a backup of a DB tomorrow, then restoring a copy of it for some testing by the Dev and Customers. The server this DB lives on has somewhat limited space (and no way to quickly add more), so I'm planning to shrink the DB down close to the used space of the file. The copy will be Read-Only, so I'm not worried about it growing (and will in fact disable auto-growth while I'm at it). The DB is in Simple Recovery and will be staying that way (and the log will also be getting shrunk).

    I tried doing this once already with a copy on a QA server; after 2 hours it was maybe 70% done. Is there any way to speed up this process?

    I know when I restore the backup, it's going to size both the MDF and LDF the same as they were when the backup was taken, so pre-creating a smaller file won't work...

    Thanks,

    Jason

    (PS, not too hopeful there is a way to do this...)
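    For reference, here's a minimal T-SQL sketch of what I'm planning, assuming the copy is restored as [MyDb_Copy] with logical file names MyDb_Data and MyDb_Log (all placeholders); the actual shrink targets would come from the FILEPROPERTY check:

    ```sql
    -- Sketch only: shrink the copy's files close to used size, stop auto-growth,
    -- and mark it read-only.  All names and sizes below are placeholders.
    USE [MyDb_Copy];
    GO

    -- Current vs. used size (in 8 KB pages) for each file, to pick shrink targets.
    SELECT name,
           size                            AS size_pages,
           FILEPROPERTY(name, 'SpaceUsed') AS used_pages
    FROM sys.database_files;

    -- Shrink each file to a target size in MB, leaving a little headroom.
    DBCC SHRINKFILE (N'MyDb_Data', 12500);
    DBCC SHRINKFILE (N'MyDb_Log', 512);

    -- Disable auto-growth on the copy, then make it read-only.
    ALTER DATABASE [MyDb_Copy] MODIFY FILE (NAME = N'MyDb_Data', FILEGROWTH = 0);
    ALTER DATABASE [MyDb_Copy] MODIFY FILE (NAME = N'MyDb_Log',  FILEGROWTH = 0);
    ALTER DATABASE [MyDb_Copy] SET READ_ONLY WITH NO_WAIT;
    ```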

  • It's difficult to say. There's no magic option to speed up the shrink command.

    You could see whether you have any unused or unrequired indexes and tables first. Another option might be to create a new filegroup, rebuild the tables into that, and then shrink the primary files, which could be quicker (see the sketch below). It's another "it depends" situation.
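    A rough sketch of both ideas, with all object names made up ([MyDb], FG_Secondary, dbo.BigTable); keep in mind that sys.dm_db_index_usage_stats only reflects activity since the last instance restart:

    ```sql
    -- Indexes with no recorded reads: possible candidates to drop before shrinking.
    SELECT o.name AS table_name,
           i.name AS index_name,
           s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
    FROM sys.indexes AS i
    JOIN sys.objects AS o
      ON o.object_id = i.object_id
    LEFT JOIN sys.dm_db_index_usage_stats AS s
      ON s.object_id   = i.object_id
     AND s.index_id    = i.index_id
     AND s.database_id = DB_ID()
    WHERE o.is_ms_shipped = 0
      AND i.type > 0   -- skip heaps
      AND ISNULL(s.user_seeks + s.user_scans + s.user_lookups, 0) = 0;

    -- Rebuilding a table's clustered index onto a new filegroup moves its data there,
    -- leaving the primary file with less to shuffle around during the shrink.
    ALTER DATABASE [MyDb] ADD FILEGROUP [FG_Secondary];
    ALTER DATABASE [MyDb] ADD FILE
        (NAME = N'MyDb_Data2', FILENAME = N'D:\Data\MyDb_Data2.ndf')
        TO FILEGROUP [FG_Secondary];

    -- Assumes a plain (non-constraint) clustered index named CIX_BigTable already exists.
    CREATE CLUSTERED INDEX CIX_BigTable
        ON dbo.BigTable (BigTableID)
        WITH (DROP_EXISTING = ON)
        ON [FG_Secondary];
    ```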

  • That's what I figured, but it didn't hurt to ask.

    IIRC, the tables in this DB don't have a lot of indexes, which is actually kind of a good thing. There's a data load operation that runs every morning and dumps almost everything. I was going to snapshot the DB, but after monitoring it for about 24 hours last week, it jumped from a couple dozen MB in the evening to around 12 GB the next morning from this load operation...

    Nothing can be changed in that operation, either; the data changes are getting "pushed" from another system we have no control over...

  • I highly recommend putting the DB into single-user mode to avoid any concurrent usage getting in the way. You'll want to take the DB basically offline while you do this, particularly if you're trying to recover BLOB extents.

    Yes, my DBA life has been particularly painful sometimes. 😉
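    Something along these lines (database and file names are placeholders):

    ```sql
    -- Take the database to single-user for the shrink, then open it back up.
    ALTER DATABASE [MyDb_Copy] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    DBCC SHRINKFILE (N'MyDb_Data', 12500);   -- target size in MB (placeholder)

    ALTER DATABASE [MyDb_Copy] SET MULTI_USER;
    ```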


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • jasona.work (12/30/2013)


    I'm going to be taking a backup of a DB tomorrow, then restoring a copy of it for some testing by the Dev and Customers. The server this DB lives on has somewhat limited space (and no way to quickly add more), so I'm planning to shrink the DB down close to the used space of the file. The copy will be Read-Only, so I'm not worried about it growing (and will in fact disable auto-growth while I'm at it). The DB is in Simple Recovery and will be staying that way (and the log will also be getting shrunk).

    I tried doing this once already with a copy on a QA server; after 2 hours it was maybe 70% done. Is there any way to speed up this process?

    I know when I restore the backup, it's going to size both the MDF and LDF the same as they were when the backup was taken, so pre-creating a smaller file won't work...

    Thanks,

    Jason

    (PS, not too hopeful there is a way to do this...)

    Have you got any multi-year audit tables that you could kill all but the last 3 months of data in?
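    If there is such a table, a batched purge along these lines keeps each transaction small under simple recovery while it frees up space (table and column names are made up):

    ```sql
    -- Hypothetical audit table: delete everything older than 3 months in batches
    -- so each transaction stays small and the log doesn't balloon.
    DECLARE @rows int = 1;

    WHILE @rows > 0
    BEGIN
        DELETE TOP (50000)
        FROM dbo.AuditLog
        WHERE LoggedAt < DATEADD(MONTH, -3, GETDATE());

        SET @rows = @@ROWCOUNT;
    END;
    ```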

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Craig: I don't have to worry about the scheduled data loads hitting the copy DB, thankfully, so I won't need to get too fancy when I shrink it. The "live" DB will still be getting used by the application; the copy won't be pointed to by anything for at least a couple of days.

    Jeff: My understanding from the Dev of the application is that the data in these tables generally only lasts a day or so before it gets deleted. The DB hasn't grown at all since I've been here.

    What's going on is that a group here and another group at a different facility are going to be verifying that the data loads are working correctly, not just into this DB, but also into the DB(s) at the other facility. Apparently this application and its DB(s) are eventually going to be migrated to that other facility.

    Thanks guys!
