Hugo Kornelis (1/20/2012)
You get an arithmetic overflow error - COUNT() returns int, and once the count passes the int maximum, the conversion fails.
So if you ever need to get a row count on a table that might exceed 2,147,483,647 rows, use COUNT_BIG() instead - this function returns bigint, with an upper limit of 9,223,372,036,854,775,807 rows. If you exceed that, I don't know what happens - but in that case, I would definitely like 0.01% of your hardware budget! 😉
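Just to put those two ceilings in perspective: they are simply the maximums of a signed 32-bit integer (int) and a signed 64-bit integer (bigint). A quick sanity check in Python:

```python
# COUNT() returns int (signed 32-bit); COUNT_BIG() returns bigint (signed 64-bit).
int_max = 2**31 - 1
bigint_max = 2**63 - 1
print(int_max)     # 2147483647
print(bigint_max)  # 9223372036854775807
```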
Just for kicks, I decided to do the math. (See "Estimating the size of a clustered index" in Books Online)
A relational table should always have a primary key to be useful. To store more than 9,223,372,036,854,775,807 rows without primary key violations, the primary key must be at least 8 bytes wide, so the smallest possible table to exceed the max of COUNT_BIG() has a single 8-byte (e.g. bigint) column.
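A quick way to convince yourself that 8 bytes really is the minimum: count how many distinct values fit in 7 bytes versus 8 bytes. A short Python sketch:

```python
# A 7-byte key can hold at most 256**7 distinct values, which is fewer than
# 9,223,372,036,854,775,807 rows; an 8-byte key (e.g. bigint) is the smallest that fits.
max_rows = 9_223_372_036_854_775_807
print(256**7 < max_rows)   # True: 7 bytes is not enough for that many unique keys
print(256**8 > max_rows)   # True: 8 bytes suffices
```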
The row size is the 8 bytes for the data, plus three for the NULL bitmap, 4 for row header overhead, and 2 for the row's entry in the slot array; that makes 17 total. This means that SQL Server can store at most 8,096 / 17 = 476 (rounded down) rows on a single data page. For 9,223,372,036,854,775,807 + 1 rows, that makes 19,376,832,010,199,109 (rounded up) pages for the leaf level of the clustered index.
For the non-leaf pages, no NULL bitmap is used if the column is not nullable, but the overhead changes a bit - here, the size for each row is 8 (data) + 1 (overhead) + 6 (child page pointer) + 2 (slot array) = 17 bytes; this means SQL Server can again fit 8,096 / 17 = 476 (rounded down) index rows on a page.
For the first level above the leaf pages, we would need 19,376,832,010,199,109 / 476 = 40,707,630,273,528 (rounded up) pages.
For the second level above the leaf pages, we would need 19,376,832,010,199,109 / (476 ^ 2) = 85,520,231,668 (rounded up) pages.
For the third level above the leaf pages, we would need 19,376,832,010,199,109 / (476 ^ 3) = 179,664,353 (rounded up) pages.
For the fourth level above the leaf pages, we would need 19,376,832,010,199,109 / (476 ^ 4) = 377,447 (rounded up) pages.
For the fifth level above the leaf pages, we would need 19,376,832,010,199,109 / (476 ^ 5) = 793 (rounded up) pages.
For the sixth level above the leaf pages, we would need 19,376,832,010,199,109 / (476 ^ 6) = 2 (rounded up) pages.
And for the root level, the seventh level above the leaf pages, we would need one more page.
Adding all these numbers together gives me a total requirement of 19,417,625,340,746,901 8K pages to store this table. That is approximately 145,000,000 TB. If you want to build that using HP 6400/8400 Enterprise Virtual Array machines (see http://h10010.www1.hp.com/wwpc/us/en/sm/WF25a/12169-304616-304648-304648-304648-3900918.html?dnr=1 for specifications) at their maximum capacity of 628 TB apiece, you would need about 230,000 of those machines.
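If you don't feel like trusting anyone's long division (I wouldn't!), the whole cascade is easy to re-run mechanically. Here is a Python sketch of the same calculation, using the row sizes from the Books Online formula (4-byte header + 8 bytes data + 3 bytes column count/NULL bitmap + 2 bytes slot array entry = 17 bytes per leaf row):

```python
def ceil_div(a, b):
    # Integer ceiling division; avoids float precision loss on huge numbers
    return -(-a // b)

PAGE_BYTES = 8096            # usable space on an 8K data page (8,192 minus 96-byte header)
LEAF_ROW = 8 + 3 + 4 + 2     # data + NULL bitmap + row header + slot array entry = 17
NONLEAF_ROW = 8 + 1 + 6 + 2  # data + overhead + child page pointer + slot array = 17

leaf_fanout = PAGE_BYTES // LEAF_ROW      # 476 rows per leaf page
index_fanout = PAGE_BYTES // NONLEAF_ROW  # 476 rows per non-leaf page

rows = 9_223_372_036_854_775_807 + 1      # one row more than COUNT_BIG() can report

levels = [ceil_div(rows, leaf_fanout)]    # leaf level of the clustered index
while levels[-1] > 1:                     # add index levels until a single root page remains
    levels.append(ceil_div(levels[-1], index_fanout))

total_pages = sum(levels)
total_tb = total_pages * 8 / 2**30        # 8 KB pages -> terabytes

print(levels)                  # page count per level, leaf level first
print(total_pages)
print(round(total_tb))         # about 145 million TB
print(round(total_tb / 628))   # about 230,000 EVA machines at 628 TB each
```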
Unfortunately, you would have to wait until Microsoft releases an improved version of SQL Server - with its current supported maximum database size of "only" 524,272 terabytes, it seriously falls short for this kind of challenge.