SQLServerCentral is supported by Red Gate Software Ltd.
 

Index rebuild and fragmentation level
Posted Thursday, November 6, 2008 8:49 PM


SSCertifiable


Group: General Forum Members
Last Login: Yesterday @ 1:01 PM
Points: 7,118, Visits: 15,000
1. If I'm reading your results correctly, your index is only 233 pages big. So while the table might be large, your index apparently is not.
2. I said that if it's below 1,000 pages, it would not DEfragment, meaning it will ALWAYS show as fragmented. The way the algorithm works (according to Paul again), it figures out that there would be no benefit in defragmenting it, so it won't.
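For what it's worth, that rule of thumb is easy to bake into a check; the sketch below (against the current database, with the 1000-page cutoff as an assumption from this thread, not a hard limit) lists only the index levels large enough to be worth defragmenting:

```sql
-- List fragmentation per index level, but only for levels of at least
-- 1000 pages -- smaller ones won't benefit from defragmenting anyway.
-- DETAILED mode scans every level, so run it off-hours on big databases.
SELECT OBJECT_NAME(ips.object_id)       AS table_name,
       i.name                           AS index_name,
       ips.index_level,
       ips.page_count,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.page_count >= 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;
```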


----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
Post #598648
Posted Friday, November 7, 2008 6:52 AM
SSC Rookie


Group: General Forum Members
Last Login: Thursday, February 5, 2009 8:37 AM
Points: 27, Visits: 45
Hello,
The index is not small, and not just 233 pages. Take a look: only level 2 of the index has 233 pages. Level 3 has 38237 pages. So it is a large index (because it has several levels). Take a look at the output pasted below.

OK, so you are saying that when the number of pages is less than 1000, the algorithm does not even bother defragmenting.

Fine! Well then, before I ran the ALTER INDEX command, why did it show a value between 0 and 100 (can't remember the exact value)?





Here is the output:

index_id  avg_fragmentation  page_count  index_level
--------  -----------------  ----------  -----------
2         4.33088369903497   38237       0
2         100                233         1
2         0                  1           2

(3 row(s) affected)


Post #598894
Posted Friday, November 7, 2008 7:07 AM


SSC-Enthusiastic


Group: General Forum Members
Last Login: Wednesday, July 16, 2014 4:08 AM
Points: 175, Visits: 742
Hi,

I can see how it would be true for a handful of pages (fewer than 8), but since the index is created from scratch, I for one cannot see why the pages would be allocated from all over the place.

You mentioned it happens for ALL tables (with 3 or more levels), and always at index level 2. Is it so that if there are more levels, it is still just level 2? I suppose some of them have clustered indexes and some are heaps, or?

I am still very puzzled. Do you get the same result if you do CREATE INDEX ... WITH DROP_EXISTING?

Which service pack level are you on?

/Elisabeth


elisabeth@sqlserverland.com
MCITP | MCT
http://sqlblog.com/blogs/elisabeth_redei/
http://linkedin.com/in/elisabethredei
Post #598906
Posted Friday, November 7, 2008 7:43 AM
SSC Rookie


Group: General Forum Members
Last Login: Thursday, February 5, 2009 8:37 AM
Points: 27, Visits: 45
Hello,
This is a production database; I am not allowed to drop indexes and create them from scratch.

I guess the previous reply is correct. He said that when the number of pages is less than 1000, SQL Server does not bother to defragment the index (I mean the part of the index that sits on level 2).

I am going to do a few more index rebuilds today and find out whether the results are consistent with what he says.
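(A single rebuild-and-recheck along those lines might look like the sketch below; the table and index names are placeholders, not the poster's actual objects:)

```sql
-- Rebuild one nonclustered index, then re-read its per-level fragmentation.
ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD;

SELECT index_level, page_count, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(),
         OBJECT_ID('dbo.SomeTable'),
         INDEXPROPERTY(OBJECT_ID('dbo.SomeTable'), 'IX_SomeIndex', 'IndexID'),
         NULL,
         'DETAILED');   -- DETAILED returns one row per index level
```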



Post #598949
Posted Friday, November 7, 2008 8:17 AM
SSC Eights!


Group: General Forum Members
Last Login: Friday, March 14, 2014 3:15 PM
Points: 804, Visits: 935
Look below for the difference in the way you use the DBCC SHOWCONTIG command:

--DBCC SHOWCONTIG(TABLE_PROD_RES)

DBCC SHOWCONTIG scanning 'TABLE_PROD_RES' table...
Table: 'TABLE_PROD_RES' (1241211622); index ID: 1, database ID: 5
TABLE level scan performed.
- Pages Scanned................................: 748041
- Extents Scanned..............................: 93513
- Extent Switches..............................: 93526
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.98% [93506:93527]
- Logical Scan Fragmentation ..................: 0.01%
- Extent Scan Fragmentation ...................: 10.43%
- Avg. Bytes Free per Page.....................: 351.9
- Avg. Page Density (full).....................: 95.65%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

--DBCC SHOWCONTIG(TABLE_PROD_RES) WITH FAST

DBCC SHOWCONTIG scanning 'TABLE_PROD_RES' table...
Table: 'TABLE_PROD_RES' (1241211622); index ID: 1, database ID: 5
TABLE level scan performed.
- Pages Scanned................................: 748041
- Extent Switches..............................: 93526
- Scan Density [Best Count:Actual Count].......: 99.98% [93506:93527]
- Logical Scan Fragmentation ..................: 0.01%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.


WITH FAST doesn't report Avg. Page Density, which you need if you want to rebuild with a FILL FACTOR other than 100%; that applies if your table is updated quite often and is not static.

If not, then WITH FAST is much faster. Thanks for showing this quick-check kind of thing. :)


The_SQL_DBA
MCTS

"Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."
Post #598983
Posted Friday, November 7, 2008 8:42 AM


SSCertifiable


Group: General Forum Members
Last Login: Yesterday @ 1:01 PM
Points: 7,118, Visits: 15,000
Elisabeth Rédei (11/7/2008)
Hi,

I can see how it would be true for a handful of pages (fewer than 8), but since the index is created from scratch, I for one cannot see why the pages would be allocated from all over the place.

You mentioned it happens for ALL tables (with 3 or more levels), and always at index level 2. Is it so that if there are more levels, it is still just level 2? I suppose some of them have clustered indexes and some are heaps, or?

I am still very puzzled. Do you get the same result if you do CREATE INDEX ... WITH DROP_EXISTING?

Which service pack level are you on?

/Elisabeth


I know - it was a bit surprising for me too. But Paul Randal was once in charge of the team that wrote all of this stuff (as in - the DB engine portion of SQL Server), so he's been a good resource. His point was - they're so small, the fragmentation doesn't affect performance (keep in mind 1,000 pages = 8K * 1000 or roughly 8MB, so we are talking small).

Also - from the "Reorganizing and rebuilding indexes" topic in BOL:

In general, fragmentation on small indexes is often not controllable. The pages of small indexes are stored on mixed extents. Mixed extents are shared by up to eight objects, so the fragmentation in a small index might not be reduced after reorganizing or rebuilding the index. For more information about mixed extents, see Understanding Pages and Extents.


(the link to that is http://msdn.microsoft.com/en-us/library/ms189858.aspx)

Still - the only way to really know is to try it for yourself.


----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
Post #599015
Posted Friday, November 7, 2008 8:47 AM
SSC Rookie


Group: General Forum Members
Last Login: Thursday, February 5, 2009 8:37 AM
Points: 27, Visits: 45
Here is a question for The_SQL_DBA, who showed us how to run the DBCC command.

As you know, there are two types of fragmentation:
1. External - results when SQL Server has to produce a new page and store it somewhere other than adjacent to the existing pages.

2. Internal - pages are not full, as a result of page splitting.

Now, if I run the DBCC command, which figure represents which?
I am not sure what logical scan fragmentation is. Can you explain?
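(For reference, the two types line up with separate columns of sys.dm_db_index_physical_stats, the newer alternative to DBCC SHOWCONTIG; the table name below is a placeholder:)

```sql
-- avg_fragmentation_in_percent   : external (logical) fragmentation --
--     out-of-order pages at each level.
-- avg_page_space_used_in_percent : internal fragmentation --
--     how full the pages are (NULL unless SAMPLED or DETAILED mode).
SELECT index_id,
       index_level,
       avg_fragmentation_in_percent,
       avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.SomeTable'), NULL, NULL, 'DETAILED');
```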
Post #599020
Posted Friday, November 7, 2008 10:27 AM


SSC-Enthusiastic


Group: General Forum Members
Last Login: Wednesday, July 16, 2014 4:08 AM
Points: 175, Visits: 742


OK, let's find out what it really looks like; you can run DBCC IND on the index in question:

DBCC IND ( { 'dbname' | dbid }, { 'objname' | objid }, { nonclustered indid | 1 | 0 | -1 | -2 } [, partition_number] )

There will be a row for every page in the index, so you might not want to take your largest index.

Also, even though PAD_INDEX depends on FILLFACTOR (and that doesn't seem to be set), check anyway:

SELECT OBJECT_NAME(object_id), index_id, fill_factor, is_padded
FROM sys.indexes
WHERE object_id = OBJECT_ID('sometable')
AND index_id = someindex


/Elisabeth


elisabeth@sqlserverland.com
MCITP | MCT
http://sqlblog.com/blogs/elisabeth_redei/
http://linkedin.com/in/elisabethredei
Post #599129
Posted Friday, November 7, 2008 1:04 PM
SSC Eights!


Group: General Forum Members
Last Login: Friday, March 14, 2014 3:15 PM
Points: 804, Visits: 935
LOGICAL FRAGMENTATION: the level of fragmentation in your leaf nodes. Why does this happen?

Whenever you insert or delete a row, the index allocates space wherever it is available. If a page is full (fill factor 100%, scan density 100%), every new row has to go on another page, so page splits occur, and that kills performance. Also:
- The upper levels of a well-used index with a decent fanout are typically memory resident and so do not incur IO costs; the IO cost comes from reading the leaf-level pages.
- Having interleaved index and data pages in extents will impact the ability of the read-ahead code to generate multi-page contiguous IOs, but again, that's a negligible cause compared to page fragmentation.
- Read-ahead is driven from the level above the leaf level.

EXTENT FRAGMENTATION: this refers to fragmentation of the actual physical location of extents/pages within the SQL Server data file(s). It's typically called extent fragmentation because extents are the primary allocation unit for SQL Server data, and the majority of the time, allocation of space within a file occurs as a full extent, which is made up of 8 contiguous pages. So, arguably, the majority of the time this type of fragmentation manifests itself as interleaved extents that are allocated to different structures, while the pages within each of those extents are mostly contiguous.

REASON: it occurs when data is being inserted into multiple different tables, and extents are allocated for these insertions within a given data file in a round-robin-ish fashion (i.e. extent #1 goes to index #1, extent #2 goes to index #2, extent #3 goes to index #3, then extent #4 goes to index #1, then extent #5 goes to index #3, etc.). This results in properly ordered pages for the given indexes (i.e. no logical fragmentation), but not in contiguous pages, since the pages are "interleaved" with each other by nature of the extents being interleaved (you end up with 8 contiguous pages, then an extent for another object, then 8 more contiguous pages, and so on).

Hope this helps..

Thanks!!


The_SQL_DBA
MCTS

"Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."
Post #599248
Posted Friday, November 7, 2008 1:33 PM


SSChasing Mays


Group: General Forum Members
Last Login: Friday, January 3, 2014 10:59 AM
Points: 626, Visits: 836
I do know from extensive testing that reorganizing an index of less than 1000 pages is pointless, but rebuilding will show reduced fragmentation.

I'm curious: from the DMV, what's the value of avg_fragment_size_in_pages?


I think what we might have here is that the act of rebuilding the index has caused the mid-tier levels to fragment... the mid-tier is (technically) the index of the leaf-level pages, so it makes sense that as the leaf level is "re-inserted" randomly, the mid-tier would get badly fragmented.

If the leaf-level pages were written sequentially, it would be a different story.

Just as Elisabeth Rédei noticed, you might also try rebuilding with PAD_INDEX in addition to FILLFACTOR, since PAD_INDEX is the "fill factor" for the intermediate levels... padding your index will reduce the number of page splits and thus extent fragmentation.
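A rebuild along those lines might look like the sketch below; the index name and the 90% fill factor are illustrative, not recommendations:

```sql
-- FILLFACTOR = 90 leaves 10% free space on leaf pages; PAD_INDEX = ON
-- applies the same free-space target to the intermediate (mid-tier) levels.
ALTER INDEX IX_SomeIndex ON dbo.SomeTable
REBUILD WITH (FILLFACTOR = 90, PAD_INDEX = ON);
```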

~BOT


Craig Outcalt



Tips for new DBAs: http://www.sqlservercentral.com/articles/Career/64632
My other articles: http://www.sqlservercentral.com/Authors/Articles/Craig_Outcalt/560258
Post #599266