Taking too long to shrink a huge database, HELP!!!!

  • I have a db with 13.5 GB of data but a file allocation of 29 GB, and autoshrink is off.

    I am running the following command to try to shrink the db, but it has been running for over 2 hours now; last week I tried this for over 3 hours and had to stop it.

    use [MSQLDB]
    DBCC SHRINKFILE (N'MSQLDB_DATA', 4000)

    Is there any other way to shrink this big db down? I can't leave it at 29 GB; it is taking up too much disk space on my server.

    Please advise.

    Thanks.

    Sin

  • I know it seems drastic, but I would try the following (on a test database first; rough sketch below):

    1. Back up the database (just in case)

    2. Detach the database

    3. Delete the log file (this is the step I usually do)

    4. Reattach the database (since the log is new, it will have your default size allocation)
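
    A rough sketch of those steps, assuming SQL Server 2005 or later and a hypothetical file path. Test database only, and only after a verified backup: deleting the log file is only safe if the database was cleanly detached.

    USE master;
    GO
    -- step 2: detach (requires exclusive access to the database)
    EXEC sp_detach_db @dbname = N'MSQLDB';
    GO
    -- step 3: delete or rename the .ldf file in the file system, then:
    -- step 4: reattach; ATTACH_REBUILD_LOG creates a fresh log file
    CREATE DATABASE [MSQLDB]
        ON (FILENAME = N'C:\Data\MSQLDB_DATA.mdf')   -- hypothetical path
        FOR ATTACH_REBUILD_LOG;
    GO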


  • Use the script below to shrink the log file, then modify it to shrink the db file (a sketch of the data-file variant follows the script).

    use Your_Database

    -- first pass: shrink the log file (file id 2) down to 2 MB
    dbcc shrinkfile (2,2)

    checkpoint

    -- truncate the log so the next shrink can release space (SQL 2000/2005 syntax)
    backup log Your_Database with truncate_only

    checkpoint

    -- second pass now that the log has been truncated
    dbcc shrinkfile (2,2)

    checkpoint

    go

    use master

    go

    -- check the resulting file sizes
    sp_helpdb

    go
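
    A sketch of the data-file variant, using the file name from the original post (the data file is usually file id 1; sp_helpfile lists the ids and logical names):

    use MSQLDB
    go
    -- confirm file ids and logical names first
    sp_helpfile
    go
    -- then shrink the data file to a target size in MB (4000 here, as in the original post)
    dbcc shrinkfile (N'MSQLDB_DATA', 4000)
    go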

  • In addition to the other replies:

    - If you know you'll need 13.5 GB, why try to shrink to 4 GB?

      This way SQL Server will just have to allocate extra extents again (cf. autogrow). Something like this instead:

        use [MSQLDB]
        DBCC SHRINKFILE (N'MSQLDB_DATA', 14000)

    - Is your database in use (locking / blocking)? See the sketch below for a quick way to check both points.
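
    A rough sketch for checking both points (how much space is really used, and whether anything is blocking the shrink), assuming SQL Server 2005 or later; sizes come back in 8 KB pages, so divide by 128 for MB:

    USE [MSQLDB]
    GO
    -- allocated vs actually used space per file, to pick a realistic target size
    SELECT name,
           size / 128 AS allocated_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
    FROM sys.database_files;
    GO
    -- anything blocking the shrink right now? (SHRINKFILE shows up as DbccFilesCompact)
    SELECT session_id, command, blocking_session_id, wait_type
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0
       OR command = N'DbccFilesCompact';
    GO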


    Johan


    Repeated shrink and growth of a database will always lead to bad NTFS file fragmentation, which will give your database ongoing poor performance.

    The only way to fix NTFS fragmentation is to stop SQL Server and run an NT defrag.

    Good practice for shrinking a database is: do not do it. The only exception is if you expect a permanent reduction in the database size. If you expect the space gained from a shrink to be reclaimed within a month, then you are causing more problems than you solve by shrinking it.

    If you are disk space constrained, then get more disk space so you can avoid a shrink.
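
    If the file genuinely has to stay near its current size, a fixed, sensible growth increment (rather than the default autogrow plus repeated shrinks) at least limits the grow/shrink churn. A minimal sketch with hypothetical values:

    ALTER DATABASE [MSQLDB]
    MODIFY FILE (NAME = N'MSQLDB_DATA', FILEGROWTH = 512MB);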

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

  • I'm starting a system where I'm using SQL for temporary storage of images (the plan is for about 500 MB-1 GB / day), so reclaiming space from deleted records is going to be a big deal for me. Before anyone suggests storing the images as files with their names in SQL, I have some configuration problems that are going to make that difficult to do.

    Are there any good options besides using shrink?

  • It is important to not confuse reclaiming space within the filesystem with reclaiming space within the database.

    SHRINK is used to reclaim space within the file system.  After running a SHRINK, your database file should be smaller than it was before.  The big problem with this is that repeated SHRINK and growth will fragment the NTFS file system and harm your database performance.

    INDEXDEFRAG and INDEXREBUILD are designed to defragment indexes, but can also reclaim space within the database.  After running one of these commands, your database will remain the same size as before (unless the fill factor in the index causes more space to be needed) but the free space within the database files will be in fewer fragments.  This can help database performance.  However, even without one of these operations, SQL is very good at reusing space released by deleted records.

    If you want to store images in your database (better in the DB than in the file system!), then you should read BOL and look at TechNet for how SQL handles images.  It may be worthwhile to store the images in a separate filegroup from the rest of your table data.  If you do this, then SQL logs far less information on I/U/D operations that include an image.

    Space for images is allocated in extents, and released in extents.  SQL is very good at reusing free extents when you want to store a new image, so you do not need to worry about reclaiming space.  However, a large image could be fragmented over a number of extents, and you may want to look at some form of maintenance to reduce this fragmentation.  (However, a SHRINK may not help for this!)
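
    A sketch of the index maintenance mentioned above, with hypothetical table and index names (DBCC INDEXDEFRAG is the older form; on SQL Server 2005 and later, ALTER INDEX REORGANIZE / REBUILD is the replacement):

    -- online defragmentation of a single index
    DBCC INDEXDEFRAG (MSQLDB, 'dbo.Images', PK_Images);

    -- SQL Server 2005+ equivalent for all indexes on the table
    ALTER INDEX ALL ON dbo.Images REORGANIZE;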


  • EdVassie

    "If you want to store images in your database (better in the DB than in the file system!), then you should read BOL and look at Technet about how SQL handles images."

     

    I am curious as to why you think it is better to store images inside the SQL database. I have always been on the other side of the argument. Please explain why this is better as I would like to learn if there is a better way.

     

    Thanks,

    Chris

  • Several reasons.  My application is being defined for a rather specialized vertical market with unique requirements.

    1.  The application has two parts.  Images are loaded into the DB by an ASP.NET page.  They are retrieved and deleted by a VB.NET Windows app at a remote location.  By loading into SQL, I can do all image manipulation using standard ADO.NET objects.  Otherwise, I would have to code a web service to retrieve/delete the images stored as files on the server.  As the number of remote locations is going to be between 100 and 400, this is a real problem.

    2.  Due to the level of traffic (if all goes well, 10 GB+/day), I'm going to be separating SQL Server from the web server onto different machines.  If I stored the images as files, it would be difficult to separate data and web.  By storing in SQL, all I have to do is change the IP in web.config and the data is moved.

    3.  I'm somewhat concerned about keeping the file structure of the images and the pointers to the images consistent.  I'll probably be storing on average about 5000 images.  If I store them in the DB, then this is a non-issue that I don't have to code.

    The application is not completely set in stone, but with my requirements, I'm not sure there is a method that has fewer management/coding issues.  As a one-man programming team, ease of code at the cost of a little performance is an easy trade-off.

  • Tom,

    I was not actually talking about your requirements. I understand that in some cases business or technical requirements overrule best(?) practices.

    I was asking EdVassie why he would rather have the images in the DB. I have always thought it would be better for long-term management of the system to keep the images separate, but I would always like to hear about someone else's experience and why they think another way works better.

     

    Thanks,

    Chris

  • Some reasons for storing images in a DB:

    a) Data integrity.  Most applications I have known that use a mix of DB and non-DB data storage have had problems with integrity on the non-DB storage.  It is too easy to rename/move/delete individual files, either accidentally, during troubleshooting, or even maliciously.  Most of these problems are far simpler if all the data is stored in a DB.

    b) Archiving.  It is normally far easier to ensure the integrity of the archived set of data by deleting rows instead of keeping track of individual files.

    Also, with SQL Server, it is easy to move the archived rows to another database and then archive that database on optical disks.  You can then use a jukebox and SQL queries to retrieve the data when needed.  This is often far easier than access to flat-file archived data.

    c) It is the future.  Longhorn will use SQL Server technology to store all data.


  • EdVassie,

    Thanks for the response.

    Chris

  • Sinlam,

    As far as the shrink of the data files goes, in my experience it does take a while. I had a 27 GB data file where only 17 GB was in use on one of my servers. The shrink took 3-4 hours to finish, and this was on a box with four 2.8 GHz CPUs and 8 GB RAM, going to RAID 5 and RAID 1 volumes.

    What you described does not sound like a bad thing yet. Depending on your hardware it may take a while. You may also want to take heed of what others have said about shrinking; I learned something from them myself.
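
    For what it's worth, a long-running shrink can be watched from another session; a rough sketch, assuming SQL Server 2005 or later (SHRINKFILE shows up as DbccFilesCompact):

    SELECT session_id, command, percent_complete, estimated_completion_time
    FROM sys.dm_exec_requests
    WHERE command = N'DbccFilesCompact';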

    Chris

  • I know this is an old post, but I found it when I had the same issue.

    I inherited a database that was 150 GB with 148 GB of free space (the import routines were written by someone with no experience and had bloated the database massively).

    I couldn't get the database to shrink, left it running for 6 hours and it wasn't doing anything.

    In the end I suspected it was because none of the tables had a clustered index on them. I added one to each table, and then the shrink took less than a minute.

    This is worth checking if you Google this problem and get this page as the first result, as I did.
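
    If anyone else hits this, a quick way to list the heaps (tables with no clustered index), assuming SQL Server 2005 or later:

    SELECT SCHEMA_NAME(o.schema_id) AS schema_name,
           o.name AS table_name
    FROM sys.indexes AS i
    JOIN sys.objects AS o ON o.object_id = i.object_id
    WHERE i.type = 0        -- 0 = heap
      AND o.type = 'U';     -- user tables only
    -- then add a clustered index to each, e.g. (hypothetical names):
    -- CREATE CLUSTERED INDEX CIX_MyTable ON dbo.MyTable (Id);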
