August 30, 2008 at 4:41 pm
I have various large databases which are performing poorly. Looking at avg_fragmentation_in_percent, I see some very large values. Some of these levels remain unaltered after a defrag or rebuild, or even a drop and create, so I presume I am missing something.
Two questions:
1. Why do these defrag levels remain high after a rebuild?
2. Can someone explain to me at what levels we should defragment indexes, i.e. at what fragmentation percentage and page count? I have found nothing that explains it well.
August 30, 2008 at 5:22 pm
You're talking about "defrag" and then you say "rebuild". Which is it that you are doing?
--Jeff Moden
Change is inevitable... Change for the better is not.
August 31, 2008 at 6:31 pm
I actually used alter index, so rebuild. I also dropped and recreated the index.
August 31, 2008 at 7:29 pm
How many pages do these tables have? Small tables will always show a high level of fragmentation.
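To check that, a sketch along these lines (the 500-page cutoff is a common rule of thumb, not a hard limit) filters out the small indexes whose fragmentation numbers are not meaningful before you worry about the rest:

```sql
-- Hypothetical example: list indexes in the current database, skipping
-- small ones, since a rebuild rarely changes their reported fragmentation.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.index_type_desc,
       ips.page_count,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.page_count >= 500              -- skip small tables/indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;
```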
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
September 1, 2008 at 11:03 am
Dougo (8/31/2008)
I actually used alter index, so rebuild. I also dropped and recreated the index.
Then, you must be looking at "extent fragmentation"... just hide that from your eyes... the big key is what the logical scan fragmentation is...
--Jeff Moden
Change is inevitable... Change for the better is not.
September 1, 2008 at 11:10 pm
Thanks, yes they have less than 500 pages.
September 1, 2008 at 11:11 pm
Where would I find the logical scan fragmentation percentage?
September 1, 2008 at 11:56 pm
Dougo (9/1/2008)
Where would I find the logical scan fragmentation percentage?
It's called that in DBCC SHOWCONTIG. If you're using sys.dm_db_index_physical_stats (which I assume you are), then avg_fragmentation_in_percent is the logical fragmentation if the row refers to an index, and the extent fragmentation if it refers to a heap.
500 pages should be enough that a rebuild reduces the fragmentation, though maybe not to 0.
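For the "at what levels" question, the commonly cited rules of thumb (not something stated in this thread, so treat them as a starting point to tune) are to reorganize at moderate fragmentation and rebuild at high fragmentation. The index and table names below are hypothetical:

```sql
-- Common rule-of-thumb thresholds, applied per index:
--   roughly 5-30% fragmentation -> REORGANIZE (online, lighter-weight)
--   above ~30%                  -> REBUILD (recreates the index)
ALTER INDEX [IX_Orders_Date] ON dbo.Orders REORGANIZE;

ALTER INDEX [IX_Orders_Date] ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90);  -- leave some free space per page to slow re-fragmentation
```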
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 2, 2008 at 2:02 pm
I've found that even if you defrag an index, the index will remain fragmented if the physical disk is fragmented.
September 2, 2008 at 7:14 pm
gcg_wilkinson (9/2/2008)
I've found that even if you defrag an index, the index will remain fragmented if the physical disk is fragmented.
That would also be true. A disk defrag at the OS level is important especially if the growth of the database was done using the defaults or if someone has done multiple "shrinks" only to have the DB grow again.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 2, 2008 at 7:56 pm
Make sure the tables you are looking at have a clustered index. If there is no clustered index, the table is a heap, and the data itself cannot be defragmented. This can result in severe performance problems, especially in tables with lots of inserts and deletes.
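A quick way to spot those tables is to look for index_id = 0 in sys.indexes, which marks a heap. A minimal sketch for the current database:

```sql
-- Hypothetical sketch: list user tables with no clustered index (heaps).
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id)        AS table_name
FROM sys.indexes
WHERE index_id = 0                                        -- 0 = heap
  AND OBJECTPROPERTY(object_id, 'IsUserTable') = 1        -- skip system objects
ORDER BY schema_name, table_name;
```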
September 3, 2008 at 12:16 am
gcg_wilkinson (9/2/2008)
I've found that even if you defrag an index, the index will remain fragmented if the physical disk is fragmented.
It shouldn't contribute to the logical fragmentation. That's defined as the percentage of pages that are out of order (a page further along the index has a pageID lower than a page earlier in the index). PageIDs are purely logical constructs that indicate the page's position within the SQL data file and won't be affected by fragmentation of that file.
Sure, fragmentation of the file means that the index is fragmented, but it shouldn't reflect so to SQL.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 3, 2008 at 4:58 pm
GilaMonster (9/3/2008)
gcg_wilkinson (9/2/2008)
I've found that even if you defrag an index, the index will remain fragmented if the physical disk is fragmented.

It shouldn't contribute to the logical fragmentation. That's defined as the percentage of pages that are out of order (a page further along the index has a pageID lower than a page earlier in the index). PageIDs are purely logical constructs that indicate the page's position within the SQL data file and won't be affected by fragmentation of that file.
Sure, fragmentation of the file means that the index is fragmented, but it shouldn't reflect so to SQL.
If the MDF or LDF is physically fragmented, it'll affect SQL just as it would any program, in the form of non-contiguous files and excessive read-head movement.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 4, 2008 at 1:24 am
Jeff Moden (9/3/2008)
If the MDF or LDF is physically fragmented, it'll affect SQL just as it would any program in the form of non contiguous files and excessive read head movement.
Of course it will affect SQL.
It just won't reflect in the % logical fragmentation returned for an index, because that's not concerned with where the file pieces are on disk, just where the pages are within the file
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 5, 2008 at 5:54 pm
Heh... sorry Gail... couldn't believe my eyes at first. Now I get what you were trying to say.
--Jeff Moden
Change is inevitable... Change for the better is not.