Luke L (10/14/2010)
Would you say it's safe to exclude any index with fewer than 25 pages from my maintenance tasks, no matter how fragmented it is?
Sure, and I'd set the threshold a fair bit higher than that.
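As a rough sketch (the 1000-page and 30% figures here are purely illustrative, pick your own), a maintenance script can filter the small indexes out of sys.dm_db_index_physical_stats before deciding what to rebuild or reorganise:

-- List indexes worth looking at, skipping anything under 1000 pages.
-- Thresholds are example values only; adjust to suit your environment.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.page_count >= 1000                    -- ignore small indexes
  AND ips.avg_fragmentation_in_percent > 30     -- fragmentation threshold
ORDER BY ips.avg_fragmentation_in_percent DESC;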
The 1000-page recommendation for when a performance improvement might be noticed... is there somewhere I can read more about it and/or find some sample code so that I can test it against my particular hardware?
Maybe on Paul Randal's blog. He's the one who came up with that number. It's an educated guesstimate based on his knowledge of the storage engine and how it fetches pages.
Bear in mind that fragmentation's only an issue when
1) You're doing a range scan or full scan of an index
2) The pages are not in memory
I suppose you could set up some tests by creating fragmented tables of various sizes, timing how long a SELECT * takes with a cold cache, rebuilding them, and testing again with a cold cache. The numbers are likely to be hardware-specific.
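Something like this would do as a minimal sketch of that test (dbo.TestTable is just a placeholder name, and DBCC DROPCLEANBUFFERS should only ever be run on a test server):

CHECKPOINT;                    -- flush dirty pages so the buffer pool can be fully cleared
DBCC DROPCLEANBUFFERS;         -- start with a cold cache

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT * FROM dbo.TestTable;   -- full scan of the fragmented copy

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

-- Rebuild, then repeat the cold-cache scan above and compare elapsed time and reads.
ALTER INDEX ALL ON dbo.TestTable REBUILD;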
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
We walk in the dark places no others will enter
We stand on the bridge and no one may pass