maximum row size (16845) exceeds the maximum number of bytes per row (8060)

  • I get the following message in SQL Server 2000:

    "Warning: The table 'DS_PRM_ADDRESSES' has been created but its maximum row size (16845) exceeds the maximum number of bytes per row (8060). INSERT or UPDATE of a row in this table will fail if the resulting row length exceeds 8060 bytes."

    I did some research on the internet and found out that this message appears when the row size, in bytes, exceeds the maximum SQL Server allows. However, I can't figure out how SQL Server comes up with such a large number as 16845. I am aware that tinyint requires 2 bytes, and that there are a few bytes of offset for every row, and so on.

    The DS_PRM_ADDRESSES table has the following design (from Query Analyzer). When you add up the lengths of the data types, the result comes to 8345.

    ***** JUST FOR YOUR INFO *******

    ***** TABLE DESIGN *************

    [Address_Code] [int]

    [Address_Title] [nvarchar] (255)

    [Address_Last_Name] [nvarchar] (255) ,

    [Address_First_Name] [nvarchar] (255),

    [Address_Street1] [nvarchar] (255) ,

    [Address_Street2] [nvarchar] (255) ,

    [Address_Street3] [nvarchar] (255) ,

    [Address_Street4] [nvarchar] (255) ,

    [Address_Street5] [nvarchar] (255) ,

    [Address_City] [nvarchar] (255) ,

    [Address_Country] [nvarchar] (255) ,

    [Address_Zip] [nvarchar] (255) ,

    [Address_Tel_1] [nvarchar] (255) ,

    [Address_Tel_2] [nvarchar] (255) ,

    [Address_Fax] [nvarchar] (255) ,

    [Address_Mobile] [nvarchar] (255) ,

    [Address_Email] [nvarchar] (255) ,

    [Address_Location] [nvarchar] (255) ,

    [Address_Concat] [nvarchar] (4000) ,

    [FLG_UPD] [tinyint] NULL CONSTRAINT [DF_DS_PRM_ADDRESSES_FLG_UPD] DEFAULT (0),

    [CR_Date] [nvarchar] (50) CONSTRAINT [DF_DS_PRM_ADDRESSES_CR_Date] DEFAULT (convert(varchar(100),getdate(),103))

    **********END OF***************

    ****TABLE DESIGN***************

    *******************************

    The total number of bytes is:

    (Address_Code int) 4 bytes +

    (address information fields) 17 x 255 +

    (Address_Concat field) 4000 +

    (tinyint field) 2 +

    any offsets.

    The total of the above doesn't even come to half the 16845 bytes SQL Server is referring to (in fact it is 8345, excluding any row and page offsets).

    How can I therefore decide which fields to restrict, and by how much, when the number SQL Server presents seems to me "so out of space"? 🙂

    Any suggestions?

    Dionisis

    PS: I have similar problems with other tables. The sums of the field lengths do not add up to the size SQL Server reports. Why is that?


    "If you want to get to the top, prepare to kiss alot of bottom"

  • nvarchar = 2 bytes per character, so the Address_Concat field alone is worth 8000 bytes, which is already very close to the limit.
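
    If you want to see the doubling for yourself, here is a quick check (just a sketch; the values are only examples):

    declare @v varchar(255), @n nvarchar(255)
    select @v = 'Sofia', @n = N'Sofia'
    select datalength(@v) as varchar_bytes,   -- 5
           datalength(@n) as nvarchar_bytes   -- 10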

  • Please explain a bit more.

    If nvarchar is 2 bytes per character, then the 17 nvarchar(255) fields come to 20,000 alone! Plus the 4000 length of the address_concat field (which you say is 8000 bytes), that should exceed 25,000 bytes. But SQL Server reports 16845. How is this possible? Should I only take into consideration that the address_concat field is "doubled in bytes" and not the other varchar fields as well?

    I would be grateful if you could be a little more specific...

    Kind Regards,

    Dionisis


    "If you want to get to the top, prepare to kiss alot of bottom"

  • 17*255*2 = 8670 + 8000 = 16670, so 16845 doesn't seem far off the total size.

    Run this for the exact total:

    Select SUM(C.Length) as Length from dbo.SysColumns C where C.id = object_id('TableName')
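
    And if you want to see where the bytes go, here is a per-column breakdown along the same lines (DS_PRM_ADDRESSES being the table from your post):

    Select C.name as Column_Name, C.Length as Byte_Length
    from dbo.SysColumns C
    where C.id = object_id('DS_PRM_ADDRESSES')
    order by C.Length desc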

  • Thank you, thank you, thank you!

    πŸ™‚

    Your Help is much appreciated! πŸ™‚ One more question please Remi.

    a) Do you have any other idea apart from cutting the lengths? (We are talking about addresses and names, so it is a bit risky to cut down the lengths, don't you agree?)

    b) Would the creation of a child table be a better idea? How should I do this? Just take the address concat out of the table and insert it into another, along with the primary table's primary key? This would also mean changing all the procedures written so far, wouldn't it? OOOPS! 🙂


    "If you want to get to the top, prepare to kiss alot of bottom"

  • Are you expecting to store Japanese or other foreign languages in that table?

    If not, you can use varchar for all fields.

    The concat field can be dropped, as it is the application's job to do presentation.

    Also, my name is Rémi Grégoire > 510 characters to keep that info IS overkill. Frankly, 30 for each should be plenty, maybe 50 for the name if someone has 4 middle names and decides to write 'em all.

  • Yes, the table is used for keeping names and addresses in different languages (e.g. Bulgarian). The table is used in a software application for a bank and currently has over 500,000 records.

    The concat field is used in order to compare the new records with the existing records (between two tables) and find any differences. It was found to be the quickest way to update client information. I cannot change that.

    Why do you mention that "if not you can use varchar on all fields"? What do you mean? I am using varchar in all fields except the address code [int]. Does this make any difference? Can you be more specific, please?

    Gratefully,

    Dionisis


    "If you want to get to the top, prepare to kiss alot of bottom"

  • Can you send a sample name from a Bulgarian costumer?

    The concat method just can't work faster than the normal SQL methods... but I'll trust your tests on this one.

    I meant convert the nvarchars to varchars (make sure you do this on a test server first).
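
    Something along these lines, as a sketch only (the column is picked just for illustration -- test server first):

    alter table DS_PRM_ADDRESSES
        alter column Address_City varchar(255) null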

  • I have never used SQLS with languages that use Cyrillic letters, like Russian or Bulgarian, but I understand that these languages don't require the NVARCHAR datatype. It is necessary only for those languages that don't have a limited number of letters but a huge number of signs (like Chinese and Japanese). So, if you could change the datatype from nvarchar to varchar (maybe shortening the (255) to (100)?), it should work fine - of course not to be done without testing, especially if it is a bank :-)). Regarding the concat field, I doubt that it would be the best way of checking in SQLS, but since you stated that you can't change it, it makes no sense to look for a better way.
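
    If the Bulgarian (Cyrillic) data has to survive in a varchar column, you can give the column an explicit Cyrillic collation. A sketch only, assuming Cyrillic_General_CI_AS is acceptable and 100 characters is safe for your data:

    alter table DS_PRM_ADDRESSES
        alter column Address_Last_Name varchar(100) collate Cyrillic_General_CI_AS null
    -- will error out rather than silently truncate if existing values are longer than 100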

     

    Remi,

    do you want to buy some clothes that you asked for "name from a Bulgarian costumer"? (Sorry, but I liked this typo so much that I simply couldn't resist )

  • I like some of my typos too.

    Thanx for the backup.

  • I suggest that you also look at your data and set your max column lengths to a realistic number.  255 characters seems too long for any name or address, not to mention an email address or telephone number!

    Greg


  • In addition to what's been suggested, some structural changes might help:

    1. You could move phone numbers into another, related table. This has the advantage of allowing many or few phone numbers per person (normalization)

    2. You could store the people (names) and the addresses (street, city, post code, etc.) in separate, related tables. If it's useful, this would similarly allow multiple addresses -- though that's not as common as multiple phone numbers. (A rough sketch of both ideas follows below.)
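
    A rough sketch of what both ideas could look like (table and column names are made up purely for illustration, and the lengths are just guesses):

    -- one person, many phone numbers
    create table DS_PRM_PHONES (
        Address_Code int not null,          -- points back to DS_PRM_ADDRESSES
        Phone_Type   varchar(20) not null,  -- e.g. 'Tel1', 'Tel2', 'Fax', 'Mobile'
        Phone_Number varchar(50) not null
    )

    -- street/city/country data kept separately from the name data
    create table DS_PRM_STREET_ADDRESSES (
        Address_Code int not null,          -- points back to DS_PRM_ADDRESSES
        Street       varchar(100) null,
        City         varchar(50)  null,
        Country      varchar(50)  null,
        Zip          varchar(20)  null
    )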

  • Guys, thank you very much for your help so far,

    I really appreciate it...

    OK, let me clarify two things...

    First of all, from all this I have learned that I will try to avoid using nvarchar unless it is really necessary.

    Secondly, the reason I suspect we cannot change nvarchar to varchar is this. The BI system we are building is loaded with data from files that the bank exports (daily) to a specific location. The bank's application already uses nvarchar(255) to store the names, addresses, etc., so it is not wise to reduce the length of the nvarchars or to change the datatype in the application we are building and the BI system, is it? Please advise on this.

    I also have another question to ask regarding the warnings in the original post... Not all the tables that the warnings are issued for contain concatenation fields. Some don't, but because of the large number of columns they use (most of which are of nvarchar(255) type), their row length still exceeds the 8060 restriction. My question is this.

    What happens in the case of a record (that comes directly from the bank) which actually contains more than 8060 bytes? Taking into account the current situation of my database (that is, with the issued warnings), do I lose the whole record or do I lose the data that exceeds the 8060 bytes? (The first case is bad; the second seems better to me 🙂.) Please be certain when you answer this. What options do I have to ensure that the record will not be lost?

    And lastly, a simple query for anyone (with less knowledge than me) who wants to see the byte lengths of the tables in a database.

    select case when grouping(sob.name) = 1 then 'All_Tables'
                else isnull(sob.name, 'unknown')
           end as Table_Name,
           sum(sys.length) as Byte_Length
    from sysobjects sob, syscolumns sys
    where sob.xtype = 'u'
    --and sob.name like 'TABLENAME%'
    and sys.id = sob.id
    group by sob.name
    with cube

    PS: Moving the phones into another table is, I think, a good idea. The reason is that we have less than an hour to feed our application AND update the cubes, execute the separation of all the bank's collection cases (over 1 million) into different queues, update customer info, their credit card info, etc... Because of this, denormalization is necessary, and that is why we have come up with this design, as we have rigorously tested the time required for all these actions. The same stands for the concatenation fields. In brief, the concat field of the data staging table is compared with the concat field of our application's table, and if there is a change in the customer's info, then the whole record is updated. (The program responsible for this is built in Delphi and is fast enough to fit in our time "window".)
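
    For what it's worth, the comparison itself would look roughly like this in T-SQL (a sketch only -- DS_STG_ADDRESSES is a made-up name for the staging table, only a few columns are shown, and in reality the Delphi program does this job):

    update app
    set    app.Address_Title  = stg.Address_Title,
           app.Address_City   = stg.Address_City,
           app.Address_Concat = stg.Address_Concat
    from   DS_PRM_ADDRESSES app
           join DS_STG_ADDRESSES stg on stg.Address_Code = app.Address_Code
    where  app.Address_Concat <> stg.Address_Concat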


    "If you want to get to the top, prepare to kiss alot of bottom"

  • What happens in the case of a record (that comes directly from the bank) which actually contains more than 8060 bytes? Taking into account the current situation of my database (that is, with the issued warnings), do I lose the whole record or do I lose the data that exceeds the 8060 bytes? (The first case is bad; the second seems better to me 🙂.) Please be certain when you answer this. What options do I have to ensure that the record will not be lost?

    The INSERT statement will fail with the following message and the row won't be inserted into the table.

    Server: Msg 511, Level 16, State 1, Line 5

    Cannot create a row of size <rowlength> which is greater than the allowable maximum of 8060.

    The statement has been terminated.
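
    Purely as an illustration, one way to spot oversized incoming rows before the INSERT fails would be to sum DATALENGTH over the variable-length columns in the staging data (DS_STG_ADDRESSES is a hypothetical name; extend the sum to all columns):

    select Address_Code,
           isnull(datalength(Address_Concat), 0) +
           isnull(datalength(Address_Title), 0)  +
           isnull(datalength(Address_Last_Name), 0) as sampled_bytes
    from   DS_STG_ADDRESSES
    where  isnull(datalength(Address_Concat), 0) +
           isnull(datalength(Address_Title), 0)  +
           isnull(datalength(Address_Last_Name), 0) > 8000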

     
    Greg


