• There is one additional complication on the database side. While uniqueidentifiers are guaranteed unique, they are not guaranteed to be sequential. That means as you generate new uniqueidentifiers as primary keys and insert them into a table, there is a much higher chance you'll cause a page split. SQL Server stores data on 8 KB pages; a page split occurs when there is not enough room on the page for the new row, so the page is split into two pages so that the insert can succeed. Page splits in general are not bad; it's the frequency of page splits that can have an impact on performance, as each split requires additional locking and disk IO. In my view the worry about page splits is overblown, but it's something you'll have to assess in your environment to be sure. Fast drives do much to alleviate this potential issue.
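The split mechanism the excerpt describes can be sketched with a toy model: a "page" that holds a fixed number of rows in key order and splits down the middle when an insert doesn't fit. The capacity and the 50/50 split policy here are simplifications for illustration, not how SQL Server actually manages its 8 KB pages.

```python
# Toy model of a page split: a "page" is a sorted list of keys with a
# fixed capacity; inserting into a full page splits it into two pages.
# (Illustrative only -- capacity and split policy are simplified.)

PAGE_CAPACITY = 4  # stand-in for however many rows fit on an 8 KB page

def insert(pages, key):
    """Insert key into the page whose key range covers it; split if full.
    Returns True if the insert caused a page split."""
    # Pick the last page whose lowest key is <= the new key.
    idx = 0
    for i, page in enumerate(pages):
        if page and page[0] <= key:
            idx = i
    page = pages[idx]
    if len(page) < PAGE_CAPACITY:
        page.append(key)
        page.sort()
        return False  # room on the page, no split needed
    # Page is full: split it in half, then place the new key.
    mid = PAGE_CAPACITY // 2
    left, right = page[:mid], page[mid:]
    pages[idx:idx + 1] = [left, right]
    target = left if key < right[0] else right
    target.append(key)
    target.sort()
    return True  # a split occurred

pages = [[10, 20, 30, 40]]   # one completely full page
split = insert(pages, 25)    # 25 lands mid-page, forcing a split
print(split, pages)          # True [[10, 20, 25], [30, 40]]
```

A random key like a non-sequential GUID can land anywhere in the key order, so any page (not just the last one) may be the one that is full when the insert arrives.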


    Although I think your article was good and I'd love to see more people taking advantage of GUIDs in their database designs, I have one issue with it: the excerpt above. I'd love to know your source for this information, and I'd like you to explain further how page splitting would increase due to the nature of GUIDs. I've never found any Microsoft documentation warning of such a scenario, and what you claim goes against my understanding of how b-trees work.

    My concern is that your statement is the opposite of reality. Assuming we can agree that SQL Server's indexes are all b-trees, based on my understanding of b-trees, if you insert a preordered set of records into a b-tree, more page splits will occur than if you had inserted those same records in a random order. Additionally, page splits aren't what kills performance; rather, rebalancing of the b-tree is what is so detrimental to performance. Inserting rows into a b-tree in order causes the tree to become unbalanced far more frequently than if rows were inserted into the tree in a random order.

    In any case, I'd love to know where you got that information, or to have you explain it more thoroughly.