Insert failure due to exceeding the maximum row size - but it ain't.

  • Hello all - here's a curly one.

    I'm trying to work out a replication conflict (though I don't think it's actually a repl issue). The error is:

    "Cannot create a row of size 8092 which is greater than the allowable maximum of 8060."

    It inserted OK at the subscriber but not at the publisher.  I've checked the tables at each site and they are identical in column attributes and number, as are any tables that its triggers update.

    I've summed up the total row size using http://msdn2.microsoft.com/en-us/library/ms175991.aspx, doing just the row-size part of the calculation for the heap.  There is no clustered index, but there are a bunch of nonclustered ones, which I didn't bother with as they are stored on separate pages (aren't they?).  The total only comes to 5968, and the record in question is a couple of thousand bytes short of using all its varchars (about 90% of the space would be varchar).  There are 76 columns in the table.

    Using EXEC sp_spaceused I did notice that the indexes at the subscriber (where the insert succeeded) take up substantially less space than at the publisher, so if they are stored on the same page this could account for the failure (but it would mean the error in question is misleading and my knowledge of how table data is stored is sadly lacking - which I guess it is).
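    (For anyone wanting to double-check, something along these lines shows the actual bytes stored in the suspect row and the space the table and its indexes are using - only a few of the wider varchars are listed and the CustomerID value is made up:)

    -- Actual bytes stored in some of the variable-length columns of the suspect row
    -- (illustrative subset of columns; the key value is hypothetical)
    SELECT ISNULL(DATALENGTH(Notes), 0)
         + ISNULL(DATALENGTH(PickupInstruction), 0)
         + ISNULL(DATALENGTH(BillingAddress1), 0)
         + ISNULL(DATALENGTH(BillingAddress2), 0) AS VarcharBytes
    FROM dbo.tblCustomer
    WHERE CustomerID = 12345

    -- Space used by the table and its indexes (run at both publisher and subscriber)
    EXEC sp_spaceused 'tblCustomer', @updateusage = 'TRUE'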

    I found a reference saying that under merge replication the maximum row size is reduced to 6000 bytes because of the need to store conflict information, which is suspiciously close to the column total.  So I tried to insert the changes directly at the publisher (failed), tried to insert the same data into a non-replicated restored backup (failed), and searched for any replication triggers that might have remained on the table and still be doing part of the replication process (none).

    Any ideas, anyone?  I've done enough clutching at straws.

    Thanks,

    Mike

  • You posted in the SQL 2000/7 forum but quote an article for SQL 2005 - which are you using?

    How did you calculate the total size of 5968?

    Here is a great article and there are great scripts in the library:

    http://www.sqlservercentral.com/columnists/sjones/pagesize.asp

    http://www.sqlservercentral.com/scripts/viewscript.asp?scriptid=1062

    Andy

     

  • Are all your servers the same engine - 6.5/7/2000/2005?


    Live to Throw
    Throw to Live
    Will Summers

  • Post the schema of the table (script the create table statement).

     


    My blog: SQL Soldier
    SQL Server Best Practices
    Twitter: @SQLSoldier
    My book: Pro SQL Server 2008 Mirroring
    Microsoft Certified Master: SQL Server, Data Platform MVP
    Database Engineer at BlueMountain Capital Management

  • I've been tied up for a while but have managed to do some more investigation on this.

    Thanks for your replies.  David, I had seen that link but took a closer look - I learnt a bit, but it didn't solve the problem.  Will, they're all 2000.  Robert, the CREATE TABLE isn't going to help, as per the comments below...

    First off, it's SQL 2000, which is why I posted here.  It appears I used the wrong method for measuring the row size (though I would have thought it would be similar between versions), but it seems that's not the problem.  Also, as per my original investigation, I can't see how it's replication related.  I am now totally flummoxed by this.

    I created a new table with constraints and indexes identical to the old one and successfully inserted the same record.  I slowly increased the size of the two main varchars (probably about 97% of the variable fields) until they successfully reached the maximum size of their fields.  So I went back to the original table.

    On the original table (in the backup I created - note again that I haven't replicated the backup and the behaviour is still exhibited) I've removed and recreated the indexes to see if they were somehow causing the problem (though as I understand it they're not stored on the data page), to no avail.  I then increased the size of two of the major varchar fields 50 characters at a time until the inserts failed, each time with the said error (row size too big) on a number of different records.  And here's the really weird thing: summing the lengths of all the variable fields on these records shows that they hit the error at vastly different total lengths.

    Does anyone know what other information is stored on a page besides the rows themselves?  Is it possible for data from old, dropped columns to somehow remain attached to a record - perhaps it _is_ repl related after all, a result of sp_repldropcolumn leaving information behind?

  • Oh - and does anyone know how to find out which page a particular record in a particular table is on, so that I can use "dbcc page(etc)" to dump the page contents?
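    (From what I've read, the undocumented DBCC IND / DBCC PAGE combination is the usual approach - a sketch of what I mean, in case my syntax is off:)

    -- Both commands are undocumented, so treat this as a sketch only
    DBCC TRACEON (3604)                          -- send DBCC output to the client
    DBCC IND ('Phoenix_Prod', 'tblCustomer', 0)  -- list the heap's pages (index id 0)
    -- take a file id / page id pair from the output and dump that page:
    DBCC PAGE ('Phoenix_Prod', 1, 12345, 3)      -- db, file id, page id (made up), print option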

    cheers,

    Mike

  • Please post the schema. I'm not going to research other potential answers until I've satisfied myself that the most obvious answer is wrong.


    My blog: SQL Soldier
    SQL Server Best Practices
    Twitter: @SQLSoldier
    My book: Pro SQL Server 2008 Mirroring
    Microsoft Certified Master: SQL Server, Data Platform MVP
    Database Engineer at BlueMountain Capital Management

  • Absolutely you can have old info still on the pages when you drop a column, it's one of the tricks SQL does to speed up table changes. For example, if you change an int to smallint, it will still have 4 bytes allocated instead of the 2 required. Same for dropping a column. To actually fix things up you need to rebuild the table, easiest way in your case is to add/drop a clustered index.

  • OK - here's the table script...

    CREATE TABLE [dbo].[tblCustomer](
        [CustomerID] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
        [CustomerCode] [varchar](10) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
        [StartDate] [datetime] NULL,
        [ParentCustomerID] [int] NULL,
        [ConsolidateToMasterCustomer] [bit] NOT NULL CONSTRAINT [DF__Temporary__Conso__145C0A3F] DEFAULT (0),
        [CreditStop] [bit] NOT NULL CONSTRAINT [DF_tblCustomer_CreditStop] DEFAULT (0),
        [BillingName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [BillingAddress1] [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [BillingAddress2] [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [BillingCity] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [BillingState] [varchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [BillingPostcode] [varchar](8) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [BillingContactName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [BillingContactPhone] [varchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [BillingContactFax] [varchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [ServiceName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [ServiceAddress1] [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [ServiceAddress2] [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [ServiceCity] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [ServiceState] [varchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [ServicePostcode] [varchar](8) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [ServiceContactName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [ServiceContactPhone] [varchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [TerritoryCode] [varchar](2) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [SalespersonCode] [int] NULL,
        [CategoryID] [int] NULL,
        [LastVisitDate] [smalldatetime] NULL,
        [NextVisitScheduledDate] [smalldatetime] NULL,
        [WeeksBetweenScheduledVisit] [smallint] NULL,
        [CreditLimit] [float] NULL,
        [InUse] [bit] NOT NULL CONSTRAINT [DF__Temporary__InUse__164452B1] DEFAULT (0),
        [GstExempt] [bit] NOT NULL CONSTRAINT [DF__Temporary__GstEx__173876EA] DEFAULT (0),
        [DisplayKgOnInvoice] [bit] NOT NULL CONSTRAINT [DF__Temporary__Displ__182C9B23] DEFAULT (0),
        [PickupInstruction] [varchar](1500) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [EpaLicenceNumber] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [ConsignmentAuthorisationExpiryDate] [smalldatetime] NULL,
        [WasteCodeID] [varchar](4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [ChangeDate] [smalldatetime] NULL,
        [ChangeByID] [smallint] NULL,
        [WasteTrackingAdminFee] [money] NULL,
        [CurrentPeriod] [smalldatetime] NULL,
        [HideKGOnStatement] [bit] NOT NULL CONSTRAINT [DF__tblCustom__HideK__11BF94B6] DEFAULT (0),
        [AllowRemoteWeighIn] [bit] NOT NULL CONSTRAINT [DF__tblCustom__Allow__1590259A] DEFAULT (0),
        [PrintDepartmentalAnalysis] [bit] NOT NULL CONSTRAINT [DF__tblCustom__Print__17786E0C] DEFAULT (0),
        [DoNotPrintDocketBarcodes] [bit] NOT NULL CONSTRAINT [DF__tblCustom__DoNot__033C6B35] DEFAULT (0),
        [BillingEmailAddress] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [StatementViaPost] [bit] NOT NULL CONSTRAINT [DF__tblCustom__State__5DD5DC5C] DEFAULT (1),
        [StatementViaEMail] [bit] NOT NULL CONSTRAINT [DF__tblCustom__State__5ECA0095] DEFAULT (0),
        [StatementViaFax] [bit] NOT NULL CONSTRAINT [DF__tblCustom__State__5FBE24CE] DEFAULT (0),
        [TreatmentPlantID] [int] NULL,
        [ImmediateBilling] [bit] NOT NULL CONSTRAINT [DF__tblCustom__Immed__6482D9EB] DEFAULT (0),
        [CommercialEmailAddress] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [EntityOfOrigin] [int] NOT NULL CONSTRAINT [DF__tblCustom__Entit__066DDD9B] DEFAULT (1),
        [WasteOriginCodeID] [smallint] NOT NULL,
        [AdgCodeID] [smallint] NOT NULL,
        [PhysicalNatureOfWasteID] [smallint] NOT NULL,
        [UnNumberID] [smallint] NOT NULL,
        [WasteTypeID] [smallint] NULL,
        [BillingEntityID] [int] NOT NULL,
        [OriginEntityID] [int] NULL,
        [RegionID] [int] NOT NULL,
        [Reference1] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [Reference2] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [PaymentTerm] [int] NULL,
        [DisplayEpaDocketNumberOnStatement] [bit] NULL,
        [ShowPriorInvoicesOnStatement] [bit] NULL DEFAULT (1),
        [BillingMobilePhone] [varchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [ServiceMobilePhone] [varchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [UseDisposalReport] [bit] NOT NULL DEFAULT (0),
        [rowguid] [uniqueidentifier] ROWGUIDCOL NOT NULL CONSTRAINT [DF__tblCustom__rowgu] DEFAULT (newid()),
        [BypassWeighOut] [bit] NULL,
        [ABN] [varchar](15) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [SFDCParent] [bit] NULL,
        [SFDCCode] [varchar](18) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [SecurityNumberRequired] [bit] NULL,
        [Notes] [varchar](3100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        CONSTRAINT [aaaaatblCustomer7_PK] PRIMARY KEY NONCLUSTERED ([CustomerID] ASC) ON [PRIMARY],
        CONSTRAINT [IX_tblCustomerCode] UNIQUE NONCLUSTERED ([CustomerCode] ASC) ON [PRIMARY]
    ) ON [PRIMARY]
    GO

    SET ANSI_PADDING OFF
    GO

    USE [Phoenix_Prod]
    GO

    ALTER TABLE [dbo].[tblCustomer] WITH NOCHECK ADD CONSTRAINT [FK_tblCustomer_tblCustomer] FOREIGN KEY([ParentCustomerID])
        REFERENCES [dbo].[tblCustomer] ([CustomerID]) NOT FOR REPLICATION
    GO
    ALTER TABLE [dbo].[tblCustomer] CHECK CONSTRAINT [FK_tblCustomer_tblCustomer]
    GO
    ALTER TABLE [dbo].[tblCustomer] WITH NOCHECK ADD CONSTRAINT [FK_tblCustomer_tblEntity] FOREIGN KEY([BillingEntityID])
        REFERENCES [dbo].[tblEntity] ([EntityID]) NOT FOR REPLICATION
    GO
    ALTER TABLE [dbo].[tblCustomer] CHECK CONSTRAINT [FK_tblCustomer_tblEntity]
    GO
    ALTER TABLE [dbo].[tblCustomer] WITH NOCHECK ADD CONSTRAINT [FK_tblCustomer_tblEntity1] FOREIGN KEY([OriginEntityID])
        REFERENCES [dbo].[tblEntity] ([EntityID]) NOT FOR REPLICATION
    GO
    ALTER TABLE [dbo].[tblCustomer] CHECK CONSTRAINT [FK_tblCustomer_tblEntity1]
    GO
    ALTER TABLE [dbo].[tblCustomer] WITH NOCHECK ADD CONSTRAINT [FK_tblCustomer_tblResource] FOREIGN KEY([SalespersonCode])
        REFERENCES [dbo].[tblResource] ([ResourceID]) NOT FOR REPLICATION
    GO
    ALTER TABLE [dbo].[tblCustomer] CHECK CONSTRAINT [FK_tblCustomer_tblResource]
    GO
    ALTER TABLE [dbo].[tblCustomer] WITH NOCHECK ADD CONSTRAINT [FK_tblCustomer_tblTreatmentPlant] FOREIGN KEY([TreatmentPlantID])
        REFERENCES [dbo].[tblEntity] ([EntityID]) NOT FOR REPLICATION
    GO
    ALTER TABLE [dbo].[tblCustomer] CHECK CONSTRAINT [FK_tblCustomer_tblTreatmentPlant]
    GO
    ALTER TABLE [dbo].[tblCustomer] WITH NOCHECK ADD CONSTRAINT [repl_identity_range_pub_013DC69B_C418_4B92_8DBC_FC8943CA196A] CHECK NOT FOR REPLICATION (([CustomerID] > 669000 and [CustomerID] < 670000))
    GO
    ALTER TABLE [dbo].[tblCustomer] CHECK CONSTRAINT [repl_identity_range_pub_013DC69B_C418_4B92_8DBC_FC8943CA196A]
    GO

    -- And an extra couple of indexes
    CREATE NONCLUSTERED INDEX [CustomerID] ON [dbo].[tblCustomer] ([CustomerID] ASC) ON [PRIMARY]
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [index_1160391203] ON [dbo].[tblCustomer] ([rowguid] ASC) ON [PRIMARY]
    GO
    CREATE NONCLUSTERED INDEX [ParentCustomerID] ON [dbo].[tblCustomer] ([ParentCustomerID] ASC) ON [PRIMARY]

  • Quoting Andy: "Absolutely you can have old info still on the pages when you drop a column, it's one of the tricks SQL does to speed up table changes. For example, if you change an int to smallint, it will still have 4 bytes allocated instead of the 2 required. Same for dropping a column. To actually fix things up you need to rebuild the table, easiest way in your case is to add/drop a clustered index."

    That may explain it.  Due to the limitations of SQL 2000 replication, we recently increased the size of the largest column by creating a new temporary column, copying the data to it, dropping the existing column, recreating it larger, copying the data back and then dropping the temporary column.  Which means there could be a lot of legacy information swanning about in there.  It would also explain why the error arises at such different actual field lengths.  (I could do an approximate calculation to see roughly where the error should occur.)
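    (Roughly what the change looked like - 'BigColumn' and the sizes below are stand-ins, and on the published table the drops would have gone through sp_repldropcolumn, with the adds through its counterpart sp_repladdcolumn, rather than plain ALTER TABLE:)

    -- Widen a varchar by round-tripping it through a temporary column (placeholder names/sizes)
    ALTER TABLE dbo.tblCustomer ADD TempCopy varchar(3100) NULL
    GO
    UPDATE dbo.tblCustomer SET TempCopy = BigColumn
    GO
    ALTER TABLE dbo.tblCustomer DROP COLUMN BigColumn
    GO
    ALTER TABLE dbo.tblCustomer ADD BigColumn varchar(3100) NULL   -- recreated at the larger size
    GO
    UPDATE dbo.tblCustomer SET BigColumn = TempCopy
    GO
    ALTER TABLE dbo.tblCustomer DROP COLUMN TempCopy
    -- each dropped column can leave its old space behind on the pages until the table is rebuilt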

    We originally wanted to just make the column larger, but that failed.  Recent reading (though only in a forum) suggested the failure to increase the column size was due to it being up against the 6000-byte limit imposed by replication to allow storage of conflict information - which it's almost up against.  Is it possible two things are happening here: SQL logically calculates how big the columns can be, which determines how large we can declare them, but how much we can practically fit in them depends somewhat on the detritus left on the page?

    I expect there's no way of rebuilding a table while replication is maintained?  The client would be significantly inconvenienced if we had to drop repl - though it might give us a chance to move them to 2005 if they're up for it.

    Does 2005 behave in the same way?  At least doing an alter column under replication should just alter that column.

  • I've never dug deep enough to know for sure. It might be interesting to dig, but given that a copy of the table works fine, I think it makes sense to try to rule it out. I don't see a clustered index in your table script, so all you have to do is add one to the table; you can drop it afterwards if you want, but in general you want a clustered index on each table (preferably on a key that is unique and increasing in value).
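    Something like this, the index name being just an example:

    -- Building the clustered index rewrites every row, reclaiming the space left behind by
    -- dropped/altered columns; dropping it afterwards turns the table back into a heap.
    CREATE CLUSTERED INDEX IX_tblCustomer_Rebuild ON dbo.tblCustomer (CustomerID)
    GO
    DROP INDEX tblCustomer.IX_tblCustomer_Rebuild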

  • Thanks Andy.  I finally got around to trying it out and it worked a treat.  That sort of information should be mentioned in the general help on CREATE TABLE.  I did a lot of testing first to make sure it wasn't going to cause repl to fall over, and even so it wasn't long before the phone was running off the hook with all sites running so slowly as to be unusable.  I had one of those initial feelings of dread that I'd screwed things up and was about to wave goodbye to my weekend rolling out repl again.  Fortunately a quick Update Stats solved the problem.  Phew.
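    (For anyone curious, a "quick Update Stats" here just means the standard commands - something like:)

    -- Refresh statistics on the rebuilt table; sp_updatestats covers the whole database
    UPDATE STATISTICS dbo.tblCustomer WITH FULLSCAN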

    We don't tend to have clustered indexes on tables as they slow down inserting records.  Having said that, the table in question doesn't get many inserts, so we'll do some tests on the performance impact in the future.

  • Glad to hear you got it resolved, and without the extra weekend!

    How many inserts/sec are you doing that you can't afford a clustered index on the tables that have a lot of inserts? And what was the datatype of the column you tried it on? Just curious. In general having a clustered index is a good thing, but of course there are exceptions.

  • I've never actually had a close look, but I've just done that on the most-inserted-into table and it seems to be about 3/sec.  This will depend to some extent on the user's PC, as it's happening client side in ADO.  However, as the client grows and expands their usage of the app this is becoming slower and slower, so we are now moving some of the processes server side and adding the records in batches - sometimes over 2000 in one go.  I'm not sure how indexing affects this.

    The decisions regarding indexes were made quite some time ago and I wasn't part of them (and even if I had been, my poor memory would preclude me from saying anything about it).  I suspect they would have put the index on the ID field.

    The table measured above gets searched in any number of ways, so it is probably best left as it is.  I suspect the tables that could benefit overall from a clustered index are the ones searched by ID, as that is the most frequently referenced field - both individually and within joins.

     

    Cheers,

    Michael

  • The clustered index affects the performance of all indexes. Having a clustered index gives SQL Server a key it can use to perform quick lookups, by associating the entries in the other indexes with the corresponding values in the clustered index.

    For example, if I had a varchar(2) field called state and I selected records where state = 'WA', SQL Server would check the index on state to see which clustered index values correspond to records whose state = 'WA', and then look those records up via the clustered index. Without an index on the column, SQL Server has to read the entire table to find the records that match the value.

    So a clustered index can speed up all queries that use an index, not just the ones searching on the clustered column.
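    A small illustration (a made-up table, not the one posted above):

    -- Hypothetical table, purely to show the lookup pattern
    CREATE TABLE dbo.Customers (
        CustomerID int IDENTITY(1,1) NOT NULL,
        State      varchar(2) NULL,
        Name       varchar(50) NULL
    )
    CREATE CLUSTERED INDEX CIX_Customers ON dbo.Customers (CustomerID)
    CREATE NONCLUSTERED INDEX IX_Customers_State ON dbo.Customers (State)
    GO
    -- The nonclustered index stores the clustering key (CustomerID) alongside each State value,
    -- so this query can seek on IX_Customers_State and then fetch the rows via the clustered index.
    SELECT CustomerID, Name
    FROM dbo.Customers
    WHERE State = 'WA'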


    My blog: SQL Soldier
    SQL Server Best Practices
    Twitter: @SQLSoldier
    My book: Pro SQL Server 2008 Mirroring
    Microsoft Certified Master: SQL Server, Data Platform MVP
    Database Engineer at BlueMountain Capital Management
