Speed up a database shrink

  • Paul White NZ (4/3/2010)


    SQL Server 2008 compression has taken away my reasons for using a third-party tool anyway - and apparently R2 extends backup compression to Standard Edition, which is very welcome.

    Wahoo! That is great news. I'm using an SSIS package to call 7z to compress backups right now, so that just made my day.
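
    For anyone curious, the native version is just another WITH option on the backup statement, so the external 7z step goes away entirely - the database name and path below are only placeholders:

    BACKUP DATABASE MyDB
    TO DISK = N'D:\Backups\MyDB.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;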

    Paul White NZ (4/3/2010)


    doesn't the existing arrangement require you to write a script to 'filter' rows for new objects anyway?

    That's a fair point; I would have to write something for the tables that end up with a material amount of data in them.

    Paul White NZ (4/3/2010)


    another alternative (using a similar approach to snapshot replication) would be to set up an SSIS package. This could be made to iterate over a list of tables and filter conditions to push the selected rows to the target databases.

    This retains the advantage of skipping the delete and shrink operations, and does not need to be set up on stage each time. The SSIS approach can take advantage of minimally-logged bulk copy, but would require that scripts be run on the target servers to reflect any DDL changes.

    Hmm... maybe the same sort of setup there, then - do the restore and truncate everything, then dynamically feed the SSIS package the list of tables so it doesn't need to be updated. I haven't done something like that with SSIS before, but I seem to remember someone throwing a dynamic table list into the loop, so I'll put that on my list too. I'm a little stronger in SSIS than replication, so I might try that route first.
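
    For the dynamic list, something like this could populate an object variable feeding a Foreach Loop container - just a sketch, and the filter_condition column is a made-up placeholder for whatever per-table filters end up being needed:

    -- Every user table, with room for an optional per-table filter
    SELECT s.name AS schema_name,
           t.name AS table_name,
           CAST(NULL AS nvarchar(4000)) AS filter_condition  -- hypothetical placeholder
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE t.is_ms_shipped = 0
    ORDER BY s.name, t.name;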

    It will take me some time to iterate through all the suggestions. I'm hoping to write up and test at least one every day or two in between other tasks, but I will post back as I make progress. Keep 'em coming and I'll run through them.

    Thanks,

    Chad

  • CirquedeSQLeil (4/2/2010)


    George, I think you would be OK with Red Gate Backup. It's not like most other backup tools for SQL Server. It integrates nicely with SQL Server and, IIRC, it permits the use of standard commands. It is pretty straightforward and simple to use. I think that is why the two are able to merge.

    We have Red Gate backup compression on some servers I recently inherited. It's nowhere near as seamless as HyperBac. As a DBA I don't need it to be straightforward and simple to use (i.e. have a GUI); I need it not to interfere with my standard backup/recovery strategy. I see no way to use standard commands with Red Gate: their site says you can use a command-line interface with similar commands, but that's not the same as the exact same T-SQL in an SSMS query window or SQL Agent jobs.

    Red Gate is a good tool in that it efficiently does what it says on the tin, and it would suit a non-expert who needs compression/encryption, but IMHO the basic implementation of HyperBac is preferable for my needs and I would hate for it to be lost. We will have to see what the plans are following this merger.
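
    For what it's worth, the reason HyperBac stays invisible is (as I understand it) that it keys off the backup file's extension, so the exact same native T-SQL keeps working - the database name and path below are just examples, and .hbc is, if I recall correctly, the extension it compresses:

    BACKUP DATABASE MyDB
    TO DISK = N'D:\Backups\MyDB.hbc'  -- HyperBac intercepts the file I/O by extension
    WITH CHECKSUM, STATS = 10;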

    ---------------------------------------------------------------------

  • Chad Crawford (4/3/2010)


    I'm a little stronger in SSIS than replication, so I might try that route first.

    Well I hope you get enough time to try both - at least as a very broad concept. Replication may sound kinda scary if you haven't done much with it, but something as simple as this is pretty trivial to get going, just with the Wizards (which are pretty reasonable, for a change).

    Anyway, good luck, and have fun.

  • george sibbald (4/2/2010)


    Sorry Chad, getting completely off subject, but I know these companies pick up on internet entries, so I will post this here:

    HyperBac - please don't move away from the ease-of-use principle of your backup compression tool: the ability to keep using SQL native code and utilities, and thereby not have to change existing practices. I am a DBA; I don't need another GUI front end, I have SSMS and T-SQL, and I don't want to have to learn a new set of commands to do backup and restore (my base, most important function), or learn how to interpret another set of error messages and numbers.

    My 2 cents worth.

    I think HyperBac really understands what a special product they have, George. And the new functionality in no way detracts from the ease of use their current products provide!

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on Google's mail service

  • OK, I know this is an old post, BUT:

    I've got a database file which is 400 GB, and I move archive data out of it to a different file to keep the daily database file smaller. I could never get the 400 GB file to shrink down to the roughly 200 GB it needs to be - until today!

    I was reading about the slow process of shrinking the DB and the various options, and was thinking about shrinking it in stages instead of running a single shrink command: DBCC SHRINKFILE (N'DBFile', 200000)

    I tried it in stages, i.e.:

    First:

    DBCC SHRINKFILE (N'DBFile', 0, TRUNCATEONLY)

    then in stages:

    DBCC SHRINKFILE (N'DBFile', 390000)

    DBCC SHRINKFILE (N'DBFile', 380000)

    DBCC SHRINKFILE (N'DBFile', 370000)

    DBCC SHRINKFILE (N'DBFile', 360000)

    Using this I've already gained back 70 GB of space. It's slowed down a bit now, but it's still much better than before, when I tried it in one go and stopped it after 2 days...
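
    If anyone wants to script the stages instead of running them by hand, a loop along these lines should do the same thing in 10 GB steps - the file name, target and step size are placeholders, and the command is built dynamically so the size can come from a variable:

    DECLARE @target int = 200000;  -- final target size in MB
    DECLARE @step   int = 10000;   -- shrink 10 GB (10000 MB) at a time
    DECLARE @size   int, @sql nvarchar(200);

    -- current file size in MB (sys.database_files.size is in 8 KB pages)
    SELECT @size = size / 128 FROM sys.database_files WHERE name = N'DBFile';

    WHILE @size > @target
    BEGIN
        SET @size = CASE WHEN @size - @step > @target THEN @size - @step ELSE @target END;
        SET @sql  = N'DBCC SHRINKFILE (N''DBFile'', ' + CAST(@size AS nvarchar(10)) + N');';
        EXEC (@sql);
    END;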

  • Please post new questions in a new thread. Thanks

    LOB data makes shrinks very slow, as do heaps. Shrink in smaller chunks, not all at once.
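
    To check whether that is what's happening here, a quick look at the catalog views in the affected database will list the heaps and the tables actually holding LOB pages - purely a diagnostic sketch:

    -- Heaps: tables with no clustered index
    SELECT s.name AS schema_name, t.name AS table_name
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    JOIN sys.indexes AS i ON i.object_id = t.object_id AND i.index_id = 0;

    -- Tables with LOB allocation units that actually contain pages
    SELECT DISTINCT OBJECT_SCHEMA_NAME(p.object_id) AS schema_name,
           OBJECT_NAME(p.object_id) AS table_name
    FROM sys.partitions AS p
    JOIN sys.allocation_units AS au ON au.container_id = p.partition_id
    WHERE au.type_desc = 'LOB_DATA'
      AND au.total_pages > 0
      AND OBJECTPROPERTY(p.object_id, 'IsMSShipped') = 0;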

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
