Indexes and Fragmentation in SQL Server 2000 Part 1

  • Comments posted here are about the content posted at temp

  • Joe,

Thank you for the excellent article.

When I wrote my article about fragmentation a year ago, this is exactly what I had in mind - but since I am not a SQL expert, I could not do it myself.

    Please read my article at

and look at where I present the fragmentation measurement model. This model has been working for more than 100,000 users and has proved to be accurate. If you want to join forces and give the audience of this forum a solution, I am ready.

    Koby Biller

Thanks for the article; can't wait for the next instalment. One thing that has confused me over the years is how SQL Server stores and organises text data. You do not include this type of data field in your table; is there a reason for this? If you can, please explain about text data, as I find your explanations so far very easy to understand.

  • Hi Joe

I really enjoyed this article. I am not a SQL expert in any way, and found the step-by-step approach excellent. I really feel I have learned something, and the time spent here was very worthwhile. I look forward to the next article - particularly with a view towards indexes.

    Many thanks for the time you spent on this.


  • Very nice - I have learned a lot, cannot wait till the next article.

    -- Cory

Good article, to the point. Cleared up the basics! Keep up the good work.

  • How this relates to the real world is this...

1) No clustered index = a heap.

2) Add some non-clustered indexes to a table and you still have a heap.

3) Add more data to the table with those non-clustered indexes and your heaped table will GROW by leaps and bounds!

4) You want to make the table smaller, BUT a heap cannot be defragged! (Nothing works - DBCC INDEXDEFRAG, etc.)

5) You MUST create a clustered index to force defragmentation.

Moral - NEVER create non-clustered indexes without creating a clustered index first!

I came across this knowledge while evaluating a legacy database which was massive for no apparent reason. I could not shrink its size no matter what I did. Only after much reading did I come across this "heap gem" of knowledge, and I was able to cut the DB down to 40% of its original size by adding, yes adding, clustered indexes to the top 30 largest tables. There were hundreds of tables; I don't know what would have happened by adding a clustered index to all appropriate tables.
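A minimal sketch of the fix described above, using SQL Server 2000 syntax. The table and column names are hypothetical; the point is that creating a clustered index rewrites every data page in key order, which is the only way to compact a heap:

```sql
-- Hypothetical heap table; check its physical state first (SQL Server 2000):
DBCC SHOWCONTIG ('dbo.BigLegacyTable');

-- Creating a clustered index rebuilds the data pages in key order,
-- reclaiming the wasted space a heap cannot release on its own.
CREATE CLUSTERED INDEX CIX_BigLegacyTable
    ON dbo.BigLegacyTable (ID);
```

Best practice is simply to keep the clustered index afterwards; as the post notes, the non-clustered indexes should be created only once the clustered index exists.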

    - B

  • Hi Billy

    Great additional help to Joe's article.

    Thanks for this.


  • Great article Joe.  Thanks a bunch.  Your take on extent switching was new to me.  I'm in hopes that the rest of your series will help clear up a mystery for us.

Billy, I agree with you. This goes to our mystery. One table with fewer than 100 thousand rows had sporadic inserts take MINUTES to complete. The majority of inserts take fractions of a second. All DBCC indications were clean. A re-index had been done and still it happened. A specific insert would hang. If you dropped the row and re-ran that specific insert, it would still take a long time.

In frustration I advised the guy working the problem to drop all the indexes and recreate them. Most are non-clustered, and the clustered index is the primary key on the identity column. Dropping the clustered index took 20 minutes. Recreating the indexes seems to have fixed the problem.

    What the ...?
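For reference, the drop-and-recreate step described above might look like the sketch below. All object names are hypothetical, since the actual schema isn't shown; the comment also explains why dropping the clustered index was so slow:

```sql
-- Dropping a clustered PK forces every non-clustered index to be
-- rebuilt as well (their row bookmarks change) - hence the 20 minutes.
ALTER TABLE dbo.Orders DROP CONSTRAINT PK_Orders;

-- Recreate the clustered primary key on the identity column...
ALTER TABLE dbo.Orders
    ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID);

-- ...then recreate the non-clustered indexes.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID);
```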


ATB, Charles Kincaid

  • Mike,

    I'll make sure information on all data types is included within this series. Thanks for your input.

    Thanks to everyone who has contributed and shown an interest.


You're not totally correct concerning a heap. Some RDBMSs allow the table and index structures to be defined (not all B-trees by default), and heaps do not always fragment. If the table is not subject to updates which cause deferred updates, and there are no deletes, then your heap will not fragment: all new data arrives on top of the heap - hence the term. Secondary indexes on a heap can be defragmented without issue. Technically, if your heap is FIFO (first in, first out), then even deletes would not fragment the table structure.

I agree that it's best practice to have a clustered index, yet in fact most tables I see are still heaps. Also agreed that the leaf or data fragmentation in a table cannot be removed without a clustered index; however, not all heaps fragment.

The GrumpyOldDBA

Nice article. I've been doing lots of digging around with data pages/indexes/heaps of late, as it's a hot topic where I work. Looking forward to seeing the next article.

In the days before good databases were available for PCs, I had to retrieve data from random files in an ordered manner. We used binary files to store the data and then built an index for the ordering (MySQL, anyone?). We used a tree - not exactly a B-tree.

First, we did not give a rip about balancing. The ordering of blocks within the index file was of almost no concern. Even at 233 MHz clock speeds we were able to fetch the first record in an ordering in well under a second.

    Performance suffered when we had to add all of the overhead of making it multi-user capable.  This was in the days when the only networking that we had was shared file access.

ATB, Charles Kincaid

Thanks for the article, very nice, and I look forward to the continuation of this subject.



    Thomas LeBlanc, MVP Data Platform Consultant

Looking forward to someone being able to explain why extent scan fragmentation shows in the high eighties or nineties AFTER turning OFF the SQL Server services... No change whatsoever...
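One plausible explanation, hedged as a sketch rather than a definitive answer: extent scan fragmentation describes how the table's extents are physically laid out on disk, not anything held in memory, so stopping and restarting the service cannot change the number. Only rewriting the pages does. The table name below is hypothetical:

```sql
-- Extent scan fragmentation is physical on-disk layout,
-- so it survives a service restart unchanged.
DBCC SHOWCONTIG ('dbo.MyTable');   -- reports "Extent Scan Fragmentation"

-- Only rewriting the pages moves the number (SQL Server 2000):
DBCC DBREINDEX ('dbo.MyTable');            -- full rebuild of all indexes
-- or, online and less intrusive:
DBCC INDEXDEFRAG (0, 'dbo.MyTable', 1);    -- 0 = current db, 1 = clustered index
```

Note also that extent scan fragmentation is documented as unreliable for tables spanning multiple files, which can make a high number misleading on its own.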
