Defragging a Database Server - Good or Bad???

  • Hi,

    In Oracle, it is frowned upon to defrag a database at the operating system level. There are different methods/strategies to reclaim unused space and keep row data together. Is this the same for SQL Server?

    Many thanks. Jeff

  • In Windows, fragmentation is more of an issue. For SQL Server, defragmenting the data files or log files can help, but I wouldn't recommend doing it unless the server is at low load.

    You also need to defragment the tables, since the allocations within the data files can become fragmented.

  • Defragmenting within the database (i.e. at the table level) has almost always solved our performance issues, so we never had to go to the extent of OS defragmentation. One reason was that the database had to be shut down to unlock the files, and nobody really knew how long the defrag would take. But I believe defragmenting at the DB level should yield most of the benefits.

  • Defragging can give a performance benefit if your drives are badly fragmented, but it requires the databases to be offline.

    If you've got a SAN environment, a large cache helps to alleviate the effects felt by badly fragmented drives.

    I remember reading something (on sqlserverfaq, I think) about how NTFS stores its allocation table in the middle of a volume. This can affect how you create tables.

    ---------------------------------------------
    If data can screw you, it will; never assume the data are correct.

  •  

    Defragmenting at the logical level is always useful (e.g. rebuild indexes), but fragmentation at the physical level can also be important.

    Due to the underlying dynamics of disk head movement, any database system (Oracle, SQL Server, DB2, etc.) will suffer a performance loss if the files containing the database are badly fragmented. This applies regardless of operating system (Windows, Unix, Linux, etc.). The use of disk systems with a large cache, such as SANs, will hide most of the impact of file fragmentation.

    Some SANs have no concept of a contiguous file at the physical level. These treat the disk tracks as a giant car park and put each track of data in the first free slot. However, even with these systems, if the file appears fragmented at the operating system level there is some CPU overhead in chaining through a large number of extents.

    Regular growth and shrinkage of database files is a good way to fragment them.  For this reason Autoshrink should be turned off, and SHRINKFILE should only be used if a permanent reduction in space usage is expected.
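    To illustrate the point above, turning off autoshrink is a single database-level setting (the database and file names below are placeholders, not from this thread):

    ```sql
    -- Prevent the periodic shrink/regrow cycle that fragments the physical files.
    ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF;

    -- If a permanent reduction in space usage really is expected, shrink the
    -- specific file once, explicitly, rather than letting autoshrink run:
    -- DBCC SHRINKFILE (MyDatabase_Data, 10240);  -- target size in MB
    ```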

    Typically, defragmentation at the physical file level requires the database file to be offline, because the defrag program will ignore any files that are in use. Often this means the database manager must be stopped, as defragmenting one file or one database at a time is far less efficient and slower than defragmenting the whole disk. Because it is hard to predict how long a disk defrag will take, very few sites do regular file defrags, due to the database outage required.

     

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

  • Previously I always applied defrag at the OS level. As I didn't use standard allocation blocks I couldn't use any of the usual tools, but I found a utility on the Sysinternals web site which will report fragmentation. As I had plenty of disk space and arrays, I just moved the mdf files from one array to another and back to remove the fragmentation. I did this monthly.

    ( My arrays were 12 and 14 disk raid 10 )
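    The Sysinternals utility referred to is most likely Contig, which can report fragmentation without changing anything (this is an assumption; the file path below is an example only):

    ```bat
    REM Analyse (without defragmenting) how many fragments a data file occupies.
    contig.exe -a D:\SQLData\mydb.mdf
    ```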

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Something you can consider for larger databases is the use of raw partitions.  Raw partitions are probably easier to manage within a SAN environment than with DAS. 

    Using raw partitions would eliminate any possibility of physical file fragmentation. Because you would then only get logical fragmentation within SQL Server, you can cure this just by using standard SQL Server tools (index defrag or rebuild). You would also lose the overhead of NTFS processing.

    If you do use raw partitions, I suggest you only consider this for filegroups over 4GB in size. (A single GAM page can address just over 3.9GB of space.) You could then size each partition making up the filegroup at 3.9GB, to minimise any GAM page contention. As the database increases in size, simply add an additional 3.9GB partition to the filegroup. If you use multiple partitions in this way you probably need to be in a SAN environment to cope with the I/O load.

     


  • Although I believe there are practical benefits to defragging the server's file system, I suggest doing it only when you have CPU cycles to spare, and be aware that if a file is locked it may not be defragmented until you can shut down SQL Server, or at least take the database offline for a period of time. The big thing is that the hard drive I/O will go up on an active system, and this may cause timeouts on some connections when trying to access data. Get a feel for when your database's I/O will be as low as possible, or be prepared for slowness.

  • Just to reiterate the value of defragging even in a RAID environment:

    http://www.raxco.com/products/perfectdisk2k/whitepapers/pd_raid.pdf

     

  • Kevin,

    That's not true. DBCC DBREINDEX (which rebuilds the indexes) takes exclusive locks on the tables while it runs, so it is effectively an offline operation.

    DBCC INDEXDEFRAG (which defrags the indexes) is an online operation. I use it regularly (every four to six months).
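    For example, the online defrag can run per index while users stay connected (the database, table, and index names here are placeholders):

    ```sql
    -- Online: compacts the leaf level of one index without long-held locks.
    DBCC INDEXDEFRAG (MyDatabase, 'dbo.Orders', 'IX_Orders_CustomerId');

    -- Offline-style alternative: a full rebuild, which takes exclusive table locks.
    -- DBCC DBREINDEX ('dbo.Orders', 'IX_Orders_CustomerId', 90);  -- 90 = fillfactor
    ```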

    -SQLBill

  • I believe that with newer, faster disks you are not going to see a huge difference from defragging at the OS level, unless your DB is constantly growing and shrinking. Keeping your indexes defragmented is the key, and you should do this on a regular basis. Run DBCC SHOWCONTIG to check fragmentation levels and, accordingly, use DBREINDEX or INDEXDEFRAG.
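    A sketch of that workflow (the table name is illustrative, and the 30% threshold is a common rule of thumb rather than something from this thread):

    ```sql
    -- Report fragmentation for one table, one row per index.
    DBCC SHOWCONTIG ('dbo.Orders') WITH TABLERESULTS;

    -- Rule of thumb: if logical scan fragmentation is modest, use the online
    -- DBCC INDEXDEFRAG; if it is heavy (e.g. over 30%), schedule an offline
    -- DBCC DBREINDEX during a quiet window instead.
    ```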

  • I'm pretty sure he was referring to file system fragmentation rather than the logical fragmentation within the SQL Server data files, which is what the DBCC commands address.

    /*****************

    If most people are not willing to see the difficulty, this is mainly because, consciously or unconsciously, they assume that it will be they who will settle these questions for the others, and because they are convinced of their own capacity to do this. -Friedrich August von Hayek

    *****************/

  • The way most DBMSs (including SQL Server) allocate space by grabbing contiguous extents is intended to minimize the effects of file system fragmentation.

    Also, since data access in OLTP environments tends to be very random, fragmentation has little or no negative impact on performance. Index maintenance is still important to maintain an appropriate amount of free space, but fragmentation doesn't play a big role there.

    In data warehousing situations the opposite is true: you want very little free space and minimum fragmentation, because you are generally reading large chunks of data that tend to be in a range. Having a high scan density can vastly improve performance there.

