Ola's stuff is great for defragmenting but, like Sergiy says, Fill Factor doesn't actually control how pages are filled; it's only applied when an index is built or rebuilt. For example, let's use a NOT NULL IDENTITY column (narrow, ever-increasing, unique, mostly immutable) for a PK Clustered Index/Constraint. You build the table with a Fill Factor of 70 and start filling it without doing any updates. Now, after adding a couple hundred or even a million rows (it doesn't matter), ask yourself how much free space will be available on each page. It won't be 30%, despite the Fill Factor of 70. The pages will have been filled to maximum capacity regardless of the Fill Factor, and there will be virtually no logical fragmentation (although, unless it's a single-table database, there will always be non-contiguous physical fragmentation).
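Here's a quick sketch you can run to see that for yourself (the table and constraint names are made up for the demo; they're not from anyone's actual system):

```sql
-- Hypothetical demo: build a clustered PK with a 70 Fill Factor, then do
-- nothing but ever-increasing inserts and see how full the pages really are.
CREATE TABLE dbo.FillFactorDemo
(
    Id  INT IDENTITY(1,1) NOT NULL,
    Pad CHAR(100) NOT NULL DEFAULT 'x',
    CONSTRAINT PK_FillFactorDemo
        PRIMARY KEY CLUSTERED (Id) WITH (FILLFACTOR = 70)
);

-- Insert-only workload with an ever-increasing key; no updates.
INSERT INTO dbo.FillFactorDemo (Pad)
SELECT TOP (100000) 'x'
FROM sys.all_columns ac1 CROSS JOIN sys.all_columns ac2;

-- avg_page_space_used_in_percent will be near 100, not 70, and
-- avg_fragmentation_in_percent will be near zero.
SELECT avg_page_space_used_in_percent,
       avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats
         (DB_ID(), OBJECT_ID('dbo.FillFactorDemo'), NULL, NULL, 'SAMPLED');
```

Note that `sys.dm_db_index_physical_stats` only reports `avg_page_space_used_in_percent` in `SAMPLED` or `DETAILED` mode; the default `LIMITED` mode returns NULL for it.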
To be honest, defragging indexes is almost a stupid thing to do unless you do it right. Ola's great code works a treat, but only if you do it right, and I've not yet met anyone who actually does it right. Here's why...
Why do you defrag indexes? Most will say to improve performance but they actually end up killing performance simply because they don't make a plan.
For example, it's actually stupid to defrag an index at a 100% Fill Factor on a table that isn't keyed on an ever-increasing, immutable, static value. What that guarantees is that nearly every INSERT will require a page split until all the pages have been split. The "Natural Fill Factor" that occurs in the near term will be ~50%, and fragmentation will be 99.99% after just a few page-splitting inserts. During those page splits, you can get some pretty nasty blocking in the system because those kinds of (bad) page splits take a relatively long time, not only due to the splitting action and related data movement but because all of that is fully logged, which more than doubles the time of the actual split. Then, because of the supposedly bad fragmentation you see, you rebuild the index and start the agony all over. It's as dumb as auto-shrinking database files every night.
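You can reproduce that agony with a worst-case random key. This is a hedged sketch with made-up object names; a GUID clustering key stands in for "not ever-increasing":

```sql
-- Hypothetical worst case: a random (GUID) clustering key "defragged"
-- to a 100% Fill Factor.
CREATE TABLE dbo.GuidAgony
(
    Id  UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
    Pad CHAR(200) NOT NULL DEFAULT 'x',
    CONSTRAINT PK_GuidAgony PRIMARY KEY CLUSTERED (Id)
);

INSERT INTO dbo.GuidAgony (Pad)
SELECT TOP (50000) 'x'
FROM sys.all_columns a CROSS JOIN sys.all_columns b;

-- "Defrag" to completely full pages...
ALTER INDEX PK_GuidAgony ON dbo.GuidAgony REBUILD WITH (FILLFACTOR = 100);

-- ...then watch the next batch of random-key inserts split nearly every page.
INSERT INTO dbo.GuidAgony (Pad)
SELECT TOP (50000) 'x'
FROM sys.all_columns a CROSS JOIN sys.all_columns b;

-- Fragmentation shoots back up and page fullness drops toward ~50%.
SELECT avg_fragmentation_in_percent,
       avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats
         (DB_ID(), OBJECT_ID('dbo.GuidAgony'), NULL, NULL, 'SAMPLED');
```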
The cool part about it is, if you leave the index alone (don't do any index maintenance), then the thing that Sergiy is talking about takes over and the partially empty pages start to fill as if you were actively defragging them, without the daily pain of all the pages splitting the day after you do the defrag. Even GUIDs will settle out at a very nice average "Natural Fill Factor" of ~70% if you just stop messing with defragmentation. The page splits that do occur will be minimal, especially compared to defragging at a 100% Fill Factor.
The other thing is that people follow the recommendations of Reorg at 10% fragmentation and Rebuild at 30% fragmentation. That's really bad because fragmentation does NOT occur unless there are page splits. It's like waiting for a fire to really start blazing before putting it out instead of putting it out while it's still just a flicker. Waiting for 10% fragmentation means you've already endured a severe number of page splits (remember, they block things until done and are tough on the log file). If you defrag at 1% or even 0.5% on a table with a PROPER Fill Factor, then performance will not be a problem either just before or just after you defrag.
Still, having not rebuilt any indexes on my production box since January of 2016 (more than 2 years ago), I can tell you that it's generally stupid to rebuild indexes. If you leave them alone, they will develop a decent "Natural Fill Factor" all on their own, and that thing they call average percent of fragmentation matters very little if you have more than one user on the box and more than one table in the database. You'll find, as I did, that performance actually improves by NOT rebuilding most indexes. Just make sure you keep after stats updates. It's crucial.
As with all else, there are exceptions to the rule, and those are when the % of page fullness drops below 70% over the long haul. Those are usually caused by "expansive" updates (entries in variable-width columns being made wider), and those actually can become a bit of a performance problem (especially on memory-limited systems) because you have to read two pages (which will first be moved to memory) just to read one page's worth of data. Those do eventually need to be brought back up to a decent Fill Factor even if you're not making the mistake of general index maintenance.
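A sketch for spotting those exceptions (thresholds here are illustrative; tune them for your own system):

```sql
-- Hedged sketch: find leaf-level indexes whose average page fullness
-- has drifted below ~70%, i.e., the real candidates for a rebuild.
SELECT OBJECT_NAME(ips.object_id)             AS table_name,
       i.name                                 AS index_name,
       ips.avg_page_space_used_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats
         (DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
  ON  i.object_id = ips.object_id
  AND i.index_id  = ips.index_id
WHERE ips.index_level = 0                       -- leaf level only
  AND ips.avg_page_space_used_in_percent < 70.0
  AND ips.page_count > 1000                     -- ignore tiny indexes
ORDER BY ips.avg_page_space_used_in_percent;
```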
The real key to performance is to make sure that your statistics are up to date for both index stats and column stats. I have, indeed, witnessed a 2-second query suddenly taking hours to execute, all because bad stats caused an improper execution plan to be formed for the query.
I'll also tell you (I've done the testing) that the typical "reorg at 10%" really doesn't do a damned thing for performance (I've been testing on a known set of data for the last 3 months) and the bad page splits that occur are nearly identical to the number of bad page splits that occur with no index maintenance at all.
So... bottom line is...
1. It's stupid to do index maintenance on most indexes that have a 100% Fill Factor because it will cause performance issues right after the maintenance is complete due to page splits. If you insist on trusting only the % of fragmentation for general index maintenance, you MUST first determine what the correct FILL FACTOR for every index actually is (I'm also working on a formula for that).
2. Reorg sucks. Avoid it if you can. It's no more effective than not defragging an index at all, and it takes a lot of time and log space (it's always fully logged) to do very little. It will also NOT change pages that are (for example) 85% full back down to a (for example) 70% FILL FACTOR.
3. If you're going to do index maintenance, then don't use the accepted best practice of Reorg at 10, Rebuild at 30. By then, you've waited too long and the damage is already done. Do a rebuild at 1% or less AND make sure that you actually have a properly planned FILL FACTOR for every index, or just don't bother.
4. DO rebuild indexes when the % of page fullness drops below 70% or so. Just make sure that you're not rebuilding using a 100% FILL FACTOR or a FILL FACTOR that won't actually work for very wide rows. (For example, you can't actually achieve a 70% Fill Factor on a table where you can only fit 2 rows per page. You'll need to study the trade-off between having full pages and only half-full pages.)
5. If you do decide to follow what I did and just stop index maintenance, be patient! Your old ideas of how to defrag stuff have caused a lot of damage, and it's going to take some time for the system to heal itself numerically. I saw a gradual improvement of performance (avg CPU dropped from 22% to 8% over the first 3 months and then stayed at 8%). By the second day, there was virtually no blocking on the system, whereas the first day after doing defrags had caused huge amounts of blocking.
6. No matter what you do, keep after stats updates. They're crucial and the automatic updates can burn you if they kick in on a busy time of day on a big table.
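One way to "keep after stats" without doing any defragmentation at all is to use Ola's IndexOptimize for statistics only. The parameter values below are illustrative, and `dbo.YourBigTable` is a made-up name for the plain T-SQL alternative:

```sql
-- Sketch: use Ola Hallengren's IndexOptimize to update statistics
-- while skipping reorg/rebuild entirely (values are illustrative).
EXECUTE dbo.IndexOptimize
        @Databases = 'USER_DATABASES',
        @FragmentationLow = NULL,        -- no action at any fragmentation level
        @FragmentationMedium = NULL,
        @FragmentationHigh = NULL,
        @UpdateStatistics = 'ALL',       -- both index and column statistics
        @OnlyModifiedStatistics = 'Y';   -- skip stats with no row modifications

-- Or, for a single table, plain T-SQL (table name is hypothetical):
UPDATE STATISTICS dbo.YourBigTable WITH FULLSCAN;
```

Scheduling this during a quiet window also keeps the automatic stats updates from kicking in at a busy time of day on a big table.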