Cannot create a row of size 8208 which is greater than the allowable maximum row size of 8060.

  • Hello. We have a very large table in one of our MSCRM deployments that has 566 columns with a variety of data types, including uniqueidentifier, nvarchar, nvarchar(max), int, money, bit, and datetime. I know this is definitely not optimal; unfortunately I was not part of the design.

    This is SQL Server 2008 R2 (SP2), build 10.50.4339.

    The question/problem is this: there are updates taking place on one of the uniqueidentifier columns that change the value from one uniqueidentifier to another, and for some rows the update throws the error
    Error: 511, Severity: 16, State: 1
    Cannot create a row of size 8208 which is greater than the allowable maximum row size of 8060.
    Just trying to copy/insert the offending rows into another empty copy of the table produces the same error.
    How is it possible that the rows exist at all, being already over the 8060 limit in the first place?
    In my research I also found that nvarchar data can be pushed off-row, in which case it does not count fully towards the 8060 total.
    (https://technet.microsoft.com/en-us/library/ms186981(v=sql.105).aspx)
    I am over the limit with the combination of the other data types. How did the row get inserted/created in the first place?
    Does anyone have any insight they can share on this topic?
    Thank you for your help.

    Mark G.


  • Can you post the table definition (all of it)?
    Do you have either of the ALLOW_SNAPSHOT_ISOLATION or READ_COMMITTED_SNAPSHOT database settings on?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thank you, Gail, for the response!
    We use snapshot databases for reporting. I proceeded to drop them, then tried an ALTER TABLE ... REBUILD on the table and received the
    same error:

    Msg 511, Level 16, State 1, Line 2

    The other settings are as follows:
    snapshot_isolation_state = 1
    is_read_committed_snapshot_on = 0

    Here are some other things I attempted on the table
    (these succeeded but did not clear the 511):
    UPDATE STATISTICS, DBCC UPDATEUSAGE, DBCC CLEANTABLE

    The Table DDL has some identifying information that I would need to clean up.
    Do you still feel it would help if I posted it?

    Thank you
    Mark G.

    Just wanted to add: the table has approximately 860 rows.
    I can create a duplicate empty table, and 220 of the rows will insert with no issues into the new table.
    The other 640 inserts will return error 511.

    thx

  • I want to see all the data types, indexes and constraints. Obfuscate the names as much as you like (as long as the changes are consistent between CREATE TABLE and CREATE INDEX)

    I didn't ask about database snapshots. Can you briefly test turning ALLOW_SNAPSHOT_ISOLATION off? (Do you use SNAPSHOT isolation anywhere? If not, that setting is overhead for no gain.)
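    If it helps, something along these lines will show the current state and switch it off for a test (the database name below is just a placeholder):

    -- Check the snapshot-related settings for the database in question
    SELECT name, snapshot_isolation_state_desc, is_read_committed_snapshot_on
    FROM sys.databases
    WHERE name = N'YourCrmDatabase';  -- placeholder name

    -- Switch it off briefly for the test (switch it back on afterwards if anything relies on it)
    ALTER DATABASE [YourCrmDatabase] SET ALLOW_SNAPSHOT_ISOLATION OFF;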

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • The reason Gail is asking for the table definition (as in the CREATE TABLE & CREATE INDEX statements) is that we can add up the data types and lengths to get an approximate size of each row. Plus, some data types don't actually get saved in the same place as the rest of the data (the IN_ROW_DATA allocation unit): large data types are stored in a LOB_DATA allocation unit, and then there are ROW_OVERFLOW_DATA allocation units.

    Knowing how your table is constructed will help us figure out what's going on with it, which items are mapped to which type of allocation unit, and how to fix your issue.
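    If you want a quick look yourself before posting the DDL, a rough sketch against the partition stats DMV shows how many pages of the table live in each of those categories (substitute your real table name):

    -- Page counts per allocation unit category for the table (per heap/index partition)
    SELECT index_id,
           in_row_data_page_count,
           row_overflow_used_page_count,
           lob_used_page_count
    FROM sys.dm_db_partition_stats
    WHERE object_id = OBJECT_ID(N'dbo.YourWideTable');  -- placeholder name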

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Brandie, Gail,  Thank you for your responses.

    My initial calculations for the non-nvarchar fields totaled approximately 2,321 bytes
    (not including any bytes for pointers; I'm not sure how to get that number).

    I do not know if any sessions ever use snapshot isolation.
    As far as I can tell they are always READ UNCOMMITTED.
    I once inquired with the vendor/consultant about enabling READ_COMMITTED_SNAPSHOT to try to solve some contention issues,
    but they were not able to give definitive answers on whether they supported it.
    Because it is MS CRM, any outside changes to the setup are frowned upon and could void support.
    The entire database configuration was set up by "consultants".

    I restored a copy of the database and confirmed the problem on the new copy.
    I then turned ALLOW_SNAPSHOT_ISOLATION off to test, and confirmed the problem still occurs with it off.

    ALTER TABLE dbo.hnny_sbcconfigurationBase REBUILD;
    Cannot create a row of size 8208 which is greater than the allowable maximum row size of 8060.
    Error: 511, Severity: 16, State: 1

    I am attaching the obfuscated DDL:
    "basconfigt.txt"

    Thank you for your time and assistance!

    Mark G.

  • This table is huge. It also could use some normalization.
    From your last post it looked like you may have tried rebuilding the clustered index, which is what we had to do with a very wide table when an upgrade was failing with the same error. In our case, however, that did solve the problem.

  • Just a wild guess (well, it is somewhat calculated), but this table should be refactored (normalized). Looking at the CREATE statement there are 66 nvarchar(max) columns, 223 nvarchar(100) columns, 38 nvarchar(300) columns, 10 nvarchar(400) columns, 2 nvarchar(200) columns, 1 nvarchar(350) column, 28 bit columns (together these take only 4 bytes, since bit columns are packed 8 to a byte), 7 datetime columns, 82 money columns, and 96 uniqueidentifier columns.

    If you had to move all of the first four identified column types off-row, you would need 337 24-byte pointers to point to the data, which, if my calculations are correct, would take 8,088 bytes themselves. Notice that this doesn't take into account any of the other columns.

    Edit:  I didn't even catch the INT (6) and DECIMAL (1) columns.
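    For what it's worth, here is a back-of-the-envelope sketch of that calculation straight from the catalog views. This is not the storage engine's exact accounting; it simply assumes every variable-length column gets pushed off-row and replaced by a 24-byte pointer, and that the bit columns pack 8 to a byte:

    -- Worst-case in-row bytes if all variable-length columns are pushed off-row (rough estimate only)
    SELECT
          SUM(CASE WHEN c.max_length = -1
                     OR t.name IN ('nvarchar', 'varchar', 'varbinary')
                   THEN 24                        -- assume a 24-byte row-overflow/LOB pointer
                   WHEN t.name = 'bit'
                   THEN 0                         -- bits counted separately below
                   ELSE c.max_length END)
        + CEILING(SUM(CASE WHEN t.name = 'bit' THEN 1.0 ELSE 0 END) / 8)  -- packed bit columns
          AS worst_case_in_row_bytes
    FROM sys.columns c
    JOIN sys.types  t ON t.user_type_id = c.user_type_id
    WHERE c.object_id = OBJECT_ID(N'dbo.YourWideTable');  -- substitute the real table name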

  • Thanks, Lynn.
    I suspect the number of pointers must be substantial, since the non-nvarchar data was only giving me a possible total of ~2,300 bytes.
    I had already given the development team my opinion of this table, but without any backing from the SQL Server community, which I can now add.

    The big question for me is: how did the rows that are pushing over 8060 get to that state without failing in the first place?
    How do they exist? Wouldn't the operation that caused the overage have failed at the time it executed, preventing the row from ever becoming greater than 8060?

    Thank you,
    Mark G.

  • It all depends on how the data is entered into the system. With enough small data or NULL values to begin with, the data is successfully loaded, whether in-row or out-of-row. The more data that is then added to a specific row, the greater the chance that you run into this problem.

  • One straightforward way this can happen is adding some columns to a table.

    Under some conditions, this is a metadata-only change and existing rows will not reflect the new space requirements until the new column is affected by an UPDATE. 

    Attempting an INSERT of the existing row into another table with the same structure (or into the same table, if duplicates are allowed) will fail if the new space requirements run afoul of the 8060 limit, despite the fact that the current row exists as-is without issues (well, without causing immediate errors; I'd argue such a row certainly has some issues).

    I've included a demo script to show one example of this. It successfully demonstrates the behaviour on the 2008 R2 and 2012 instances I've run it on, but I'm playing with very small margins in the demo (the eventual failure is for a row of 8061, I think), so I wouldn't wager my life on it working in every situation.

    At any rate, the concept is fairly straightforward, and hopefully this sheds light on one way this can happen. Now, whether that's exactly what's happening in your case is another question entirely, but with a table like that, I'd say it's rather likely.

    --Use some dynamic TSQL to create a table with 549 nvarchar(100) columns

    DECLARE @createtable nvarchar(max);

    SET @createtable='';

    SELECT TOP 549 @createtable=@createtable+',nvarchar'+CAST(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS nvarchar(max))+' nvarchar(100)'
    FROM sys.all_columns;

    SET @createtable='CREATE TABLE too_wide_table ('+STUFF(@createtable,1,1,'')+');'

    EXEC (@createtable);

    GO

    --Use dynamic TSQL again to INSERT a row with 307 columns populated

    DECLARE @insertinto nvarchar(max);
    DECLARE @select nvarchar(max);

    SET @insertinto='';
    SET @select='';

    SELECT TOP 307 @insertinto=@insertinto+',nvarchar'+CAST(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS nvarchar(max))
    FROM sys.all_columns;

    SET @insertinto='INSERT INTO too_wide_table ('+STUFF(@insertinto,1,1,'')+')';

    SELECT TOP 307 @select=@select+',''somelongishstringthatshouldsuitourpurposesnicelyblahblahblahbla'''
    FROM sys.all_columns;

    SET @insertinto=@insertinto+'SELECT '+STUFF(@select,1,1,'');

    EXEC (@insertinto);

    GO

    --Add a 550th NULLable column

    ALTER TABLE too_wide_table ADD nvarchar550 nvarchar(100);

    GO

    --This all succeeds, and our current row is perfectly happy as it is.
    --Now let's just try to insert the exact same row into the table again

    INSERT INTO too_wide_table
    SELECT *
    FROM too_wide_table;

    --This fails. INSERTing it as a new row has to account for the new column

    --Let's also clean up after ourselves

    DROP TABLE too_wide_table;

    Cheers!

  • To add on to what the others are saying, all variable-length columns (NVARCHAR and VARCHAR) have additional overhead added to the length you've given them. I believe (Lynn, correct me if I'm wrong) that if the column is NULL in all records it won't count up to the full MAX or 100 (whatever length you have it set to). So it could be you had a bunch of records in the table not using 3 or more columns at all, hence only counting 1 bit for the NULL value and 2 bytes of general overhead per unused column. Then your latest update tried to enter data into these columns and suddenly exploded past the 8060 limit. Caveat to this: I may be thinking of SPARSE columns (which I believe were actually introduced in 2008), but I'm pretty sure an empty (N)VARCHAR column doesn't automatically consume all its declared space, regardless of version and SPARSE column availability.

    This question actually sparked a vague memory of an article I read a while back which said 8060 isn't actually the true length available per row. That it was technically more like 8054, with the remainder being pointers, overhead, or some extent-related maintenance... I'm not sure if this is actually true; I just remember reading it somewhere. Lynn, have you ever heard of this?
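    One way to see what the widest record on the table's pages actually is (rather than reasoning from the declared limit) is the physical stats DMF. A sketch, noting that DETAILED mode reads every page:

    -- Actual record sizes stored on the table's pages, per index and allocation unit
    SELECT index_id, alloc_unit_type_desc,
           max_record_size_in_bytes, avg_record_size_in_bytes, record_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID(N'dbo.hnny_sbcconfigurationBase'), NULL, NULL, 'DETAILED');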

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Jacob, thank you for your example.
    In our case the application does add columns to this table.
    We receive the "Cannot create a row of size" error when attempting to update an already populated field, not a new field: a uniqueidentifier field.
    The example as-is allows me to update an existing column with same-size data with no problem.
    But I altered the example slightly and was able to recreate our scenario. Instead of adding just one additional nvarchar(100), I added 12 nvarchar(max) columns. (I started with 12 additional columns; I didn't try any fewer.) The change is sketched below.
    With that change, not only can I not insert the same row, but I also cannot update an existing column with its own data.
    I am pretty sure that covers our scenario, as part of the functionality of the software will add columns to this table.
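    Roughly, the change was along these lines (paraphrasing my test; the column names here are just placeholders, and 12 may not be the minimum):

    --Replace the single nvarchar(100) ALTER in the demo with 12 nvarchar(max) columns
    ALTER TABLE too_wide_table ADD
        nvmax01 nvarchar(max), nvmax02 nvarchar(max), nvmax03 nvarchar(max),
        nvmax04 nvarchar(max), nvmax05 nvarchar(max), nvmax06 nvarchar(max),
        nvmax07 nvarchar(max), nvmax08 nvarchar(max), nvmax09 nvarchar(max),
        nvmax10 nvarchar(max), nvmax11 nvarchar(max), nvmax12 nvarchar(max);
    GO

    --After that, even rewriting an existing column with its own value returns error 511
    UPDATE too_wide_table SET nvarchar1 = nvarchar1;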

    Thank you for the example.

    Thank you to every one for your input and time helping me with this!

    Mark G.

  • ganci.mark - Thursday, May 11, 2017 6:48 AM

    This: "The example as-is allows me to update an existing column with same-size data with no problem."
    Updating a variable-length column with the same size or smaller data will work, because SQL Server can write the data back to the same location.

    This: "But I altered the example slightly and was able to recreate our scenario ... With that change, not only can I not insert the same row, but I also cannot update an existing column with its own data."
    You have made additional changes to the row schema, which will result in changes to where the data may be written. SQL Server (others more knowledgeable in this area may correct me) does not write data in the order the columns are defined. Fixed-length data is written first: your uniqueidentifier, int, datetime, timestamp, and money columns, plus the bit columns, which are consolidated. Then the variable-length data is written. I am not sure whether there is a hierarchy in how those columns are written.
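    For anyone who wants to see that ordering for themselves, there is an undocumented internals view that exposes the physical offsets (treat this purely as a sketch; the view is unsupported and its shape could change). Fixed-length columns show positive offsets, while variable-length columns get negative offsets and are written after the fixed-length portion:

    -- Physical (leaf-level) column offsets for the table's heap or clustered index
    SELECT c.name AS column_name, pc.leaf_offset, pc.max_inrow_length
    FROM sys.system_internals_partition_columns pc
    JOIN sys.partitions p ON p.partition_id = pc.partition_id
    JOIN sys.columns    c ON c.object_id = p.object_id
                         AND c.column_id = pc.partition_column_id
    WHERE p.object_id = OBJECT_ID(N'dbo.hnny_sbcconfigurationBase')
      AND p.index_id IN (0, 1)
    ORDER BY pc.leaf_offset;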

    Looking at the table, it really appears that it needs to be normalized, possibly into several tables, as there are several sets of repeating data that should be in separate tables and actually pivoted from their current format, such that adding an additional OtherServicesDescription column, for example, simply becomes adding an additional row of data (a sketch follows).
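    As a sketch of that shape (all names here are hypothetical), the repeating description columns would become rows in a child table keyed back to the base row:

    -- Hypothetical child table replacing repeating OtherServicesDescriptionN columns
    CREATE TABLE dbo.hnny_sbcconfigurationOtherService
    (
        ConfigurationId    uniqueidentifier NOT NULL,  -- references the base table's key
        ServiceOrdinal     int              NOT NULL,
        ServiceDescription nvarchar(300)    NULL,
        CONSTRAINT PK_hnny_sbcconfigurationOtherService
            PRIMARY KEY (ConfigurationId, ServiceOrdinal)
    );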

    If I had to guess I would say this table was based off an Excel spreadsheet.
