Shrink runs forever

  • Hi all,

    We have dropped a huge table from one of our databases and I want to get some of that space back. However, the shrink of the data file takes forever, and it burns extremely high amounts of CPU cycles and reads. So I cancelled it and tried shrinking by just 5GB - same thing... I cancelled it after 8 hours and noticed high resource usage. So I tried shrinking by 5MB and the same thing happened! I tried shrinking other databases on the same server by 5MB and the same thing happened with those as well. Does anyone know what is going on? I have never seen this behavior before.

  • Shrinks SUCK!!! They will no matter what you try to do - unless you can do a truncate_only shrink and get back the space you need, but how often does THAT happen!?! :hehe:
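
    For reference, a truncate-only shrink is a one-liner (the database and file names here are hypothetical examples; it only releases unused space at the end of the file and moves no pages, which is why it's so cheap when it works):

    USE MyDB;
    -- Release free space at the tail of the data file without moving any pages
    DBCC SHRINKFILE (N'MyDB_Data', TRUNCATEONLY);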

    More than a few times I have actually migrated the entire database to a new one to get around the severe issues you can have with shrink. Obviously this is an outage event but there are a few tricks you can do to minimize actual down time. Still a PITA though.

    The BIG question is: do you REALLY REALLY need that space back? Will the database grow large again, and if so, in how much time? If it will, the best option is to leave things as they are.

    Oh, and since you have run multiple shrinks already, you MUST check your index fragmentation!! Even running a shrink for short periods of time can cause issues there.
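
    A quick way to check is the standard fragmentation DMV (a sketch, not something from this thread):

    -- Average fragmentation per index in the current database
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id
       AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 10
    ORDER BY ips.avg_fragmentation_in_percent DESC;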

    Oh, one more thing - did you do a file IO stall and wait stats analysis while the shrink was running? Maybe your server/IO just can't keep up?
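
    File IO stalls can be pulled from the virtual file stats DMV (again a sketch using standard DMVs, not the poster's script):

    -- Cumulative read/write stall time per database file since instance start
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.name AS logical_file_name,
           vfs.io_stall_read_ms,
           vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
        ON mf.database_id = vfs.database_id
       AND mf.file_id = vfs.file_id
    ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;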

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Yes, shrinking is not advisable, but I can see the need in some cases...

    The performance of a shrink depends on the allocation within the file itself. It's beyond my knowledge how to query that information, so when I needed to shrink (only occasionally) I started with small amounts. If a step ran rather fast, I increased the amount, up to a maximum of 5GB per step.

    And I always performed the shrink on a specific file and never on the whole database.
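
    As a sketch, a stepped file-level shrink looks like this (database and file names are hypothetical, and note that DBCC SHRINKFILE target sizes are given in MB):

    USE MyDB;
    -- Step a 500GB data file down 5GB at a time
    DBCC SHRINKFILE (N'MyDB_Data', 506880);   -- target 495GB (in MB)
    DBCC SHRINKFILE (N'MyDB_Data', 501760);   -- target 490GB (in MB)
    -- ...repeat, lowering the target each pass, until the desired size is reached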

    Duration of some shrinks I've executed:

    500GB database file down to 300GB (in steps of 5GB): 70+ hours

    1500GB database file down to 750GB (in one giant step; the database had just been compressed and was not in use): 15 minutes

    ** Don't mistake the ‘stupidity of the crowd’ for the ‘wisdom of the group’! **
  • My approach is similar to Hansi's. If you have enough disk space and the shrink would only reclaim 5 GB, just let it be.

    ---------------------------------------------------
    "Thare are only 10 types of people in the world:
    Those who understand binary, and those who don't."

  • shahgols (6/5/2014)


    Hi all,

    We have dropped a huge table from one of our databases and I want to get some of that space back. However, the shrink of the data file takes forever, and it burns extremely high amounts of CPU cycles and reads. So I cancelled it and tried shrinking by just 5GB - same thing... I cancelled it after 8 hours and noticed high resource usage. So I tried shrinking by 5MB and the same thing happened! I tried shrinking other databases on the same server by 5MB and the same thing happened with those as well. Does anyone know what is going on? I have never seen this behavior before.

    Are you trying during off-peak hours? Shrinking is also a highly I/O-intensive activity. If you try after business hours, you may find that the same task takes less time to complete, depending on how busy the server is at that time.

    By the way, be sure to defragment your indexes afterwards. Shrinking creates a lot of fragmentation inside the database, so it is a good idea to rebuild indexes once the shrink is done.
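
    For example (the table name is a made-up placeholder; keep in mind a rebuild needs free space to work in, so it will grow the file again somewhat):

    -- Rebuild every index on a table once the shrink has finished
    ALTER INDEX ALL ON dbo.MyBigTable REBUILD;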

  • Hi all, thanks for the replies. Yes, I am aware of the issues with shrinking, but this is one of those cases where if I don't shrink the database, 30GB of space will go to waste year after year. I ended up moving the data to a new database and switching databases.

    I just found it odd that a 5MB shrink would take hours. Especially on a database that has no writes to it.

    I checked wait stats; nothing alarming there. I didn't check the IO - I didn't think moving 5MB would cause much IO.

    I just think something is not right; I just don't know what, or how to go about finding it.

  • Wow - all that for 30GB of space?!? Sad to be worried about so little in today's age! :w00t:

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • TheSQLGuru (6/6/2014)


    Wow - all that for 30GB of space?!? Sad to be worried about so little in today's age! :w00t:

    I've found that, especially in small to midsized companies, 30GB can be a lot of wasted space and can mean the difference between something working or not. I've also found that the adage of "Disk space is cheap" isn't always true. "It Depends". 🙂

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (6/6/2014)


    TheSQLGuru (6/6/2014)


    Wow - all that for 30GB of space?!? Sad to be worried about so little in today's age! :w00t:

    I've found that, especially in small to midsized companies, 30GB can be a lot of wasted space and can mean the difference between something working or not. I've also found that the adage of "Disk space is cheap" isn't always true. "It Depends". 🙂

    That's very true!

  • I can just hear our storage guy shouting "YOU WANT AN EXTRA 30GB???!!!" 🙂

    'Only he who wanders finds new paths'

  • Yeah, shrinks suck. We have the fun task coming up of pulling all of our image blob data out to disk for a few clients. For the most recent client this was around 500GB out of their 570GB database. It's kind of pointless for us to do it without shrinking the data file to reclaim the space, and I might give some consideration to simply copying the database out to a new one after reading some of the replies here. It might save us a reasonable amount of time.

    While it's not exactly related to shrinking (the shrink took around 8 hours in the scenario above), we have found a way to reclaim the database space used by the blobs without churning large amounts of log or relying on GHOST CLEANUP.

    Without posting our scripts, it basically goes like this (a rough sketch follows the list):

    - Replicate the source table structure in a new table

    - Insert the contents of the source into the secondary table.

    - In our case we NULL the blob data going into the second table.

    - Drop the original table

    - Rename the new table/keys

    - Create index(es) on the new table
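
    A sketch of those steps (table, column, and index names are made-up placeholders, not our actual scripts):

    -- 1) Replicate the source table structure in a new table
    CREATE TABLE dbo.Documents_new (
        id         int            NOT NULL,
        name       nvarchar(200)  NOT NULL,
        image_blob varbinary(max) NULL
    );

    -- 2) Insert the contents of the source, NULLing the blob data on the way in
    INSERT INTO dbo.Documents_new (id, name, image_blob)
    SELECT id, name, NULL
    FROM dbo.Documents;

    -- 3) Drop the original table
    DROP TABLE dbo.Documents;

    -- 4) Rename the new table to take over the old name
    EXEC sp_rename 'dbo.Documents_new', 'Documents';

    -- 5) Re-create index(es) on the new table
    CREATE UNIQUE CLUSTERED INDEX PK_Documents ON dbo.Documents (id);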

  • Now check your indexes for fragmentation.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Backup the DB,

    Change the DB recovery model to simple,

    Alter the DB to single_user mode,

    Shrink in 5 GB "chunks".
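
    In T-SQL that comes out roughly as follows (database, file, and path names are hypothetical):

    BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB.bak';
    ALTER DATABASE MyDB SET RECOVERY SIMPLE;
    ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    USE MyDB;
    -- Shrink in 5GB chunks (targets are in MB); repeat, lowering the target each pass
    DBCC SHRINKFILE (N'MyDB_Data', 506880);

    ALTER DATABASE MyDB SET MULTI_USER;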

  • TheSQLGuru (6/6/2014)


    Wow - all that for 30GB of space?!? Sad to be worried about so little in today's age! :w00t:

    I had to fight the storage guys for 2 weeks to get 10GB for my tempdb drive. Ridiculous.

  • For 30 GB, there's no way a shrink should take that long. :cool:

    I suspect something is blocking the shrink from completing.

    I would suggest checking for anything blocking the shrink with the query below:

    select
        t1.resource_type as [lock type],
        db_name(t1.resource_database_id) as [database],
        t1.resource_associated_entity_id as [blk object],
        t1.request_mode as [lock req],
        t1.request_session_id as [waiter sid],
        t2.wait_duration_ms as [wait time],
        (select text
         from sys.dm_exec_requests as r
         cross apply sys.dm_exec_sql_text(r.sql_handle)
         where r.session_id = t1.request_session_id) as waiter_batch,
        (select substring(qt.text, r.statement_start_offset/2,
                    (case when r.statement_end_offset = -1
                          then len(convert(nvarchar(max), qt.text)) * 2
                          else r.statement_end_offset
                     end - r.statement_start_offset)/2)
         from sys.dm_exec_requests as r
         cross apply sys.dm_exec_sql_text(r.sql_handle) as qt
         where r.session_id = t1.request_session_id) as waiter_stmt,
        t2.blocking_session_id as [blocker sid],
        (select text
         from sys.sysprocesses as p
         cross apply sys.dm_exec_sql_text(p.sql_handle)
         where p.spid = t2.blocking_session_id) as blocker_stmt
    from sys.dm_tran_locks as t1
    join sys.dm_os_waiting_tasks as t2
        on t1.lock_owner_address = t2.resource_address;

    Also, instead of shrinking at the database level, I would suggest doing it at the file level.

    Regards

    Hema.
