What is the Maximum Page Size in SQL Server 2000?

  • Good article, but it's a bit misleading. This doesn't have anything to do with a maximum "page" size; it's a maximum row size. Still, very informative. Keep in mind also that your table WILL be built successfully even if you get the error. Where you'll then have a problem is when you try to insert or update data in a row that exceeds this maximum size. Even then you can get around it by updating the largest columns last:


    INSERT INTO MyTable VALUES (NULL, 'Small Column')

    UPDATE MyTable SET LargeColumn = @LargeValue WHERE ...

  • But how does this affect the NSA storage of phone calls?


    If MS made the object structures public we would all be better off.

  • If it's really true that the NSA is just storing things like connect information (as opposed to the contents of the call), then not at all - I would guess that their rows will not be that wide.  If they do store call contents, well, those will go into large datatypes, so ... it won't affect them at all. 

    Of course, if they don't use MS SQL to store the calls database (as Gartner would say: probability .98), then ... it won't affect them at all. 

    So at least we can be sure (probability .99) that Steve has not abetted the NSA's phone records retention program. 

  • It is OK for them to have an extra 14 bytes of overhead. It is still much better than what our Oracle admin told me when I asked him to create a database of about 400 MB for me: he said the overhead to create an empty database would be about that same size (400 MB).

    Regards,
    Yelena Varsha

  • Nice going, I love learning new tidbits like this. Articles such as these are what make this site great and a wonderful early-morning read.

    **If you never hear from me again, you know I went too far with this missive.**

    But suppose you had to store all the phone calls in America.

    With content: I can't see it as BLOBs. Although, it's only mono, 8-bit, 11-22 kHz; with the right compression one could get it down to about 3 MB per hour of conversation.

    If it's brute force, I'll include fax machines, pagers and such, placing the number of phone numbers in the US at around 1 billion.

    Say ten percent usage at any time: 100 million lines, 24/7.

    3 MB x 24 hours = 72 MB per line per day; x 100 million lines = 7.2 billion MB per day.

    Which comes in at around 7.2 petabytes of data. Daily.

    If I took one of these:

    XKSS Series III

    Ultra High Capacity 6U SATA II RAID Server

    Up To 18 Terabytes Storage - Dual 64 Bit Xeon's - 800Mhz FSB - DDR2 Memory

    - Transfer Rates over 800 MB/s for RAID 5 Reads and 380 MB/s for RAID 5 Writes

    That's the biggest I found with a cursory look. Even if I up the drive sizes to a terabyte each, there is still one problem: I'm only at 32 terabytes apiece.


    Next problem: 7.2 petabytes a day is about 300 terabytes an hour, 5 terabytes a minute, roughly 83 gigabytes per second.

    Well over the sustained write bandwidth of servers like that.

    One might say they would have multiple recording sites, duplicating out of state. Still a possibility, but it would no longer be 'national'.

    I'm thinking it's not that the NSA wouldn't record all the calls; I think they just can't do it carte blanche.

    Now it's a different story were some speech-to-text involved.

    Any other thoughts?
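    The back-of-envelope arithmetic above can be sanity-checked in a few lines of Python, using the same assumptions (3 MB per conversation-hour of compressed audio, 100 million concurrent lines around the clock):

    ```python
    # Back-of-envelope check of the storage estimate above.
    # Assumptions (from the post): 3 MB per conversation-hour, 100 million
    # concurrent calls, 24 hours a day, decimal (SI) units throughout.
    MB_PER_CALL_HOUR = 3
    CONCURRENT_CALLS = 100_000_000
    HOURS_PER_DAY = 24

    mb_per_day = MB_PER_CALL_HOUR * HOURS_PER_DAY * CONCURRENT_CALLS
    tb_per_day = mb_per_day / 1_000_000
    pb_per_day = tb_per_day / 1_000
    tb_per_hour = tb_per_day / HOURS_PER_DAY
    gb_per_second = tb_per_hour * 1_000 / 3600

    print(f"{pb_per_day:.1f} PB/day, {tb_per_hour:.0f} TB/hour, {gb_per_second:.1f} GB/s")
    # 7.2 PB/day, 300 TB/hour, 83.3 GB/s
    ```

    At 32 TB per server, that is on the order of 225 of those boxes filled every single day, which is the scale problem in a nutshell.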

  • I, for the record, do not work for the above company. I just wondered if we're going to push page-size issues. This task, phone recordings, might be one that could bring up that issue. Plus it's timely.

  • Thank you for the very interesting info...

  • This article was a great piece of information!  Great work!  I think that we all appreciate the time and effort that you put into this research.  I will keep this all in mind for future reference.





  • Steve... it never ceases to amaze me the lengths that you go to provide us every bit of information needed to keep our 'sanity' and the information needed for us to succeed.  And it never ceases to amaze me just how you find these issues and spend dedicated time in researching them for us.  Your efforts have always been, in the past and will continue to be in the future, much appreciated!  Thanks for your dedication to this site and to us (your loyal fan base).  Press onward!  Scottye

  • Steve,

    Great job! I've run into this a few times. And it has always puzzled me why the 8060 wasn't correct. Now I know. Thanks!

    Gary Johnson
    Sr Database Engineer

  • The extra 14 bytes is only needed for records in heap pages. Here's why - heap records do not have keys, by definition, so the only way to reference them from non-clustered index rows is using the physical RID - a combination of the physical page and slot number. If a heap record changes size such that it no longer fits in the same slot, it has to move to another page, but we don't want to have to go and update all the non-clustered indexes that point to it. So we leave a forwarding-record that points to the new location in the heap, and the new record has to point back to the forwarding record. We always have to ensure we have enough space to add the 14 bytes to any heap record, hence the reduction in maximum row size for heaps.

    This doesn't apply to clustered indexes, where non-clustered indexes store the clustering keys of clustered index records and are impervious to the clustered index records moving around as they change size.
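    The forwarding mechanism can be pictured with a toy model (a Python sketch; the dict-based "pages" and record layout here are illustrative only, not SQL Server's actual on-disk structures):

    ```python
    # Toy model of heap forwarding records. A non-clustered index stores a
    # physical RID (page, slot). When a heap record grows and has to move,
    # the old slot keeps a small forwarding stub, so the stale RID in the
    # index still resolves without the index being updated.

    heap = {}  # (page, slot) -> record dict

    def insert(rid, value):
        heap[rid] = {"kind": "data", "value": value}

    def grow_and_move(old_rid, new_rid, new_value):
        """Record no longer fits in its slot: move it, leave a stub behind."""
        heap[new_rid] = {"kind": "data", "value": new_value,
                         "back_pointer": old_rid}  # points back at the stub
        heap[old_rid] = {"kind": "forwarding", "target": new_rid}

    def index_lookup(rid):
        """A non-clustered index seek: follow the RID, then any stub."""
        rec = heap[rid]
        if rec["kind"] == "forwarding":
            rec = heap[rec["target"]]
        return rec["value"]

    # The index captured RID (1, 0) at insert time.
    insert((1, 0), "short row")
    grow_and_move((1, 0), (2, 3), "a much longer row that no longer fits")

    # The index entry is untouched, yet the lookup still succeeds (one extra hop).
    print(index_lookup((1, 0)))
    ```

    The back-pointer on the moved record is what lets the stub be cleaned up if the record later shrinks and moves home again.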

    Paul Randal
    CEO, SQLskills.com: Check out SQLskills online training!
    Blog: www.SQLskills.com/blogs/paul  Twitter: @PaulRandal
    SQL MVP, Microsoft RD, Contributing Editor of TechNet Magazine
    Author of DBCC CHECKDB/repair (and other Storage Engine) code of SQL Server 2005

  • The article was very useful.

    I tried the query below:

    CREATE TABLE MyTest
    (
        mychar1    char(8000),
        mychar2    char(39),
        myvarchar3 varchar(8000),
        myvarchar4 varchar(8000),
        myvarchar5 varchar(8000)
    )

    Then I inserted data.

    INSERT INTO MyTest VALUES ('r1', 'r1', '12345678901', NULL, NULL)  -- not working (row size = 8061)

    INSERT INTO MyTest VALUES ('r1', 'r1', '1234567890', NULL, NULL)   -- working

    INSERT INTO MyTest VALUES ('r1', 'r1', '123456789', '1', NULL)     -- not working (row size = 8062)

    INSERT INTO MyTest VALUES ('r1', 'r1', '1234567', '1', NULL)       -- working

    INSERT INTO MyTest VALUES ('r1', 'r1', '123456', '1', '1')         -- not working (row size = 8062)

    INSERT INTO MyTest VALUES ('r1', 'r1', '1234', '1', '1')           -- working

    In the above case, are the bytes allocated (10 + 2) for the back-pointer used for data at runtime?

    Can you please explain this situation?
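    The row sizes in those error messages can be reproduced by totting up the per-record overhead. Here is a Python sketch, assuming the usual FixedVar record layout (2 status bytes, a 2-byte fixed-data offset, the fixed data, a 2-byte column count, a NULL bitmap, then a 2-byte variable-column count plus a 2-byte offset per stored variable column, with trailing NULL variable columns not stored at all); treat the constants as my reading of the format, not gospel:

    ```python
    import math

    # Estimate the record size for a row of the MyTest table above.
    FIXED_BYTES = 8000 + 39  # mychar1 + mychar2
    TOTAL_COLUMNS = 5

    def record_size(*var_values):
        """var_values: the three varchar columns, None for NULL."""
        # Trailing NULL variable-length columns are not stored at all.
        stored = list(var_values)
        while stored and stored[-1] is None:
            stored.pop()
        var_data = sum(len(v) for v in stored if v is not None)
        size = 2                                # status bytes A and B
        size += 2                               # offset of fixed-length data
        size += FIXED_BYTES                     # fixed-length column data
        size += 2                               # column count
        size += math.ceil(TOTAL_COLUMNS / 8)    # NULL bitmap
        if stored:
            size += 2                           # variable-length column count
            size += 2 * len(stored)             # one offset per stored var column
        size += var_data
        return size

    # Reproduces the row sizes reported by the failing inserts above:
    print(record_size('12345678901', None, None))  # 8061 -> fails (> 8060)
    print(record_size('1234567890', None, None))   # 8060 -> fits
    print(record_size('123456789', '1', None))     # 8062 -> fails
    print(record_size('1234567', '1', None))       # 8060 -> fits
    print(record_size('123456', '1', '1'))         # 8062 -> fails
    print(record_size('1234', '1', '1'))           # 8060 -> fits
    ```

    On this reading, the extra bytes beyond the raw column data are record overhead (status bytes, offsets, column count, NULL bitmap, and the variable-length offset array), and the 8060-byte in-row limit applies to the whole record, overhead included.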


  • Here it is 6 years later, and still there is no better explanation of the problem of page size and the 8060 vs. 8039 discrepancy. Your article is a very good resource, and I thank you for it. I also created a simple Excel file to compute the size; it can be found at:


    It assumes you will sum the sizes of the columns, although you can do it inline as I did.

  • You'd figure that after 7 years on this site, I'd have run into this fun and informative article by now. As an update, the SQL Server documentation now does have a rough explanation of the extra bytes in heaps that Paul Randal described.

    Well done to you both. Steve, thanks for writing it.

    --Jeff Moden

    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
