It's quite possible that leaving the FILLFACTOR at 100 would make the most sense. With an SSD in play, page splits and the ensuing fragmentation may be less of a concern than absorbing a 25% space hit to the buffer pool for every index where the FILLFACTOR is lowered.
At first, and even though it goes against my inner data-troll, I thought that would be fine, especially considering the blazing speed of SSDs. It might still be fine if you have a reorg running on a regular basis. My concern is that when you have a lot of pages of GUIDs, inserting a lot more GUIDs can cause page splits on almost every page because of the extremely random key distribution. In theory, if you have 10,000 pages of GUIDs where the CI is on the GUID and you insert just 10,000 new GUIDs, it could cause a page split on each and every page. Now you suddenly have the space equivalent of a 50% FILLFACTOR on some relatively very expensive hardware.
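The arithmetic above can be sketched with a quick simulation. This is a hypothetical illustration, not T-SQL: the 10,000-page count comes from the paragraph above, and the assumptions that every page starts 100% full and that a random GUID key sends each insert to a uniformly random page are mine.

```python
import random

# Rough sketch of the page-split scenario described above. The uniform-random
# page choice is an illustrative assumption, not a measurement of SQL Server.
PAGES = 10_000      # completely full pages in a clustered index keyed on a GUID
NEW_ROWS = 10_000   # new GUIDs inserted, i.e. one per existing page on average

random.seed(42)  # fixed seed so the sketch is repeatable

# A random GUID key has no locality, so each insert lands on a uniformly
# random page; inserting into a 100%-full page forces a page split.
split_pages = {random.randrange(PAGES) for _ in range(NEW_ROWS)}

print(f"pages split: {len(split_pages)} of {PAGES} "
      f"({len(split_pages) / PAGES:.0%})")
```

In the worst case every insert lands on a distinct page and all 10,000 pages split; with uniformly random keys the simulation typically shows roughly 63% of pages splitting (about 1 - 1/e), which is still a huge amount of fragmentation and wasted space from a single batch of inserts.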
Considering the speed at which a reorg or rebuild might run on the SSD, that might still be the way to go, though.
RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing Set Based code: Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
Although they tell us that they want it real bad, our primary goal is to ensure that we don't actually give it to them that way.
Although change is inevitable, change for the better is usually not.
Just because you can do something in PowerShell, doesn't mean you should.