Defragmenting an MDF file and increasing allocated free space

  • My boss decided that one of our databases, which was performing fine, was taking up too much space. He ran a shrink database command and then reduced the allocated free space by 50%.

    The database is a reporting mart which gets loaded with approx. 200,000 records every night. Before the changes above, this load had run for 2 years and used to take 2 hours. Since the changes I have noticed the MDF file is badly fragmented and the load now takes between 8 and 10 hours.

    How can I get back the allocated free space and defragment the MDF file? And do you think this will help with the performance issue?

  • Is the file having to grow in order to get the load done? That's the most likely cause of the slow-down.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • The default trace should be able to help you find out if there are file growths (event class 92, Data File Auto Grow, and event class 93, Log File Auto Grow) during the data loads.
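    The check above can be sketched as a query against the default trace. This is a sketch, assuming SQL Server 2005 or later with the default trace enabled (it is by default); the trace path is looked up from `sys.traces` rather than hard-coded:

    ```sql
    -- Find autogrow events (92 = data file, 93 = log file) in the default trace.
    DECLARE @tracefile NVARCHAR(260);

    SELECT @tracefile = path
    FROM sys.traces
    WHERE is_default = 1;

    SELECT te.name            AS event_name,   -- Data File Auto Grow / Log File Auto Grow
           t.DatabaseName,
           t.FileName,
           t.StartTime,
           t.Duration / 1000  AS duration_ms   -- Duration is reported in microseconds
    FROM sys.fn_trace_gettable(@tracefile, DEFAULT) AS t
    JOIN sys.trace_events AS te
      ON t.EventClass = te.trace_event_id
    WHERE t.EventClass IN (92, 93)
    ORDER BY t.StartTime;
    ```

    If this returns rows with StartTimes inside the nightly load window, the file growths are happening mid-load.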

  • Thanks, I'll see if I can find the trace.

    It's not something I have used before.

  • To find out whether or not the data file is growing (autogrowths), you can use the Disk Usage report. Right-click on the database, go to Reports and select the Disk Usage report. If you are not on SP2 or greater, the reports are in a different location but are still accessible.

    When the database was shrunk, all of your indexes were fragmented. I would schedule an index rebuild when you have the time, and make sure you have free space in the data file of at least 1.5x the size of the largest table. If not, manually grow the data file so you do.
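    A sketch of how you might find the fragmented indexes before rebuilding, assuming SQL Server 2005 or later (the rebuild statement at the end uses placeholder table and index names):

    ```sql
    -- List indexes with significant fragmentation in the current database.
    SELECT OBJECT_NAME(ips.object_id)        AS table_name,
           i.name                            AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30
      AND ips.page_count > 1000              -- ignore tiny indexes; their stats are noise
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    -- Rebuild one index (placeholder names):
    ALTER INDEX IX_YourIndex ON dbo.YourTable REBUILD;
    ```

    The rebuild needs that 1.5x free space because it builds the new copy of the index before dropping the old one.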

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Darren.Siegert (3/26/2009)


    The database is a reporting mart which gets loaded with approx 200,000 records every night.

    With that volume of inserts I would drop the indexes before the load and recreate them afterwards. Check the fragmentation levels after the inserts, and then you can decide whether to rebuild or to drop and recreate the indexes.
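    The drop-and-recreate pattern around the nightly load might look like the following sketch. The table and index names are placeholders, not anything from the original post:

    ```sql
    -- Drop nonclustered indexes before the bulk load so inserts
    -- don't have to maintain them row by row.
    DROP INDEX IX_Fact_ReportDate ON dbo.FactReport;

    -- ... run the nightly load here (BULK INSERT, SSIS package, etc.) ...

    -- Recreate the index afterwards; it comes back fully defragmented.
    CREATE NONCLUSTERED INDEX IX_Fact_ReportDate
        ON dbo.FactReport (ReportDate)
        WITH (FILLFACTOR = 90);
    ```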

  • Darren.Siegert (3/26/2009)


    His action was to run a shrink database command and then reduce the allocated free space by 50%.

    Rebuild all of your indexes (perhaps after growing the file by 20% or so).

    Shrinking causes massive fragmentation and will just result in the data file growing again next time data gets added. When that happens, the entire system will slow down as the file is expanded. Also repeated shrinks and grows will cause fragmentation at the file-system level, which is hard to fix.
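    Pre-growing the file manually might be sketched as below; the database name, logical file name, and sizes are placeholders you would replace with your own (the logical name is visible in sys.database_files):

    ```sql
    -- Grow the data file once, up front, so the load never waits on autogrow.
    ALTER DATABASE ReportingMart
    MODIFY FILE (NAME = ReportingMart_Data, SIZE = 24GB);

    -- Set a sensible fixed growth increment in case it does still need to grow:
    ALTER DATABASE ReportingMart
    MODIFY FILE (NAME = ReportingMart_Data, FILEGROWTH = 512MB);
    ```

    A fixed-size increment avoids the percentage-growth behaviour where each grow gets larger and slower as the file grows.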

    See - http://sqlinthewild.co.za/index.php/2007/09/08/shrinking-databases/

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
