Alter table failed because the minimum row size would be 8061, including 8035 bytes of internal overhead

  • I have a strange problem that I have never seen before. I am trying to alter a table to add a field, and I am greeted with an error saying I would exceed the maximum row size, when in fact I am not even remotely close:

    Here is the statement:

    IF NOT EXISTS (SELECT * FROM [addb15].[FSParallel].INFORMATION_SCHEMA.COLUMNS
                   WHERE TABLE_NAME = 'AccountingPeriod'
                     AND COLUMN_NAME = 'ImportIdentity')
    BEGIN
        EXEC [addb15].[FSParallel].sys.sp_executesql N'ALTER TABLE [dbo].[AccountingPeriod] ADD [ImportIdentity] int'
    END

    Here is the table that I want to alter:

    CREATE TABLE [dbo].[AccountingPeriod](
        [AccountingPeriodGUID] [uniqueidentifier] NOT NULL CONSTRAINT [DF_AccountingPeriod_AccountingPeriodGUID] DEFAULT (newsequentialid()),
        [AccountingPeriodDate] [smalldatetime] NOT NULL CONSTRAINT [DF_AccountingPeriod_AccountingPeriodDate] DEFAULT (getdate()),
        [WeekOfMonth] [tinyint] NOT NULL,
        [BiWeeklyCycle] [tinyint] NOT NULL,
        CONSTRAINT [PK_AccountingPeriod] PRIMARY KEY CLUSTERED
        (
            [AccountingPeriodGUID] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]

    Here is my error message:

    Msg 1701, Level 16, State 1, Line 1

    Creating or altering table 'AccountingPeriod' failed because the minimum row size would be 8061, including 8035 bytes of internal overhead. This exceeds the maximum allowable table row size of 8060 bytes.

    Does anyone have a clue? I am totally lost how this is happening.

    Regards, Jeffery Williams
    http://www.linkedin.com/in/jwilliamsoh

  • I know this does not explain your problem, but have you tried creating a new table with the new definition, moving the data (pick your method; I like SSIS), and then dropping the old one?

    Regarding the error itself, I am not sure, but I think the ALTER statement makes the change in place and on the fly, keeping the old data and adding the new during the process, then dropping the old once done. If that is correct, you may be exceeding the actual max size during the operation.

  • I prefer to code it, and yes, I tried that. Good thought.

    I am a contractor and have been at this site for a few weeks. The table has several FK relationships, and while I can recreate those, I would rather not be that disruptive. That said, I would also like to know what is going on.


  • Jeffery Williams (5/22/2015)


    I have a strange problem that I have never seen before. I am trying to alter a table and add a field to it and I am greeted with an error that I would exceed maximum row size...

    Had to do some mods to the code you posted. Ran the following in a Sandbox database with no issues.

    CREATE TABLE [dbo].[AccountingPeriod](
        [AccountingPeriodGUID] [uniqueidentifier] NOT NULL CONSTRAINT [DF_AccountingPeriod_AccountingPeriodGUID] DEFAULT (newsequentialid()),
        [AccountingPeriodDate] [smalldatetime] NOT NULL CONSTRAINT [DF_AccountingPeriod_AccountingPeriodDate] DEFAULT (getdate()),
        [WeekOfMonth] [tinyint] NOT NULL,
        [BiWeeklyCycle] [tinyint] NOT NULL,
        CONSTRAINT [PK_AccountingPeriod] PRIMARY KEY CLUSTERED
        (
            [AccountingPeriodGUID] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO

    IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
                   WHERE TABLE_NAME = 'AccountingPeriod'
                     AND COLUMN_NAME = 'ImportIdentity')
    BEGIN
        EXEC sys.sp_executesql N'ALTER TABLE [dbo].[AccountingPeriod] ADD [ImportIdentity] int'
    END
    GO

    Not sure, but it feels like we are missing something.

  • The byte math in the error message makes sense.

    AccountingPeriodGUID - 16
    AccountingPeriodDate -  4
    WeekOfMonth          -  1
    BiWeeklyCycle        -  1
    ImportIdentity       -  4
    Total                - 26

    26 bytes + 8035 bytes of internal overhead = 8061 bytes

    What doesn't make sense is where the huge amount of internal overhead is coming from.
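    One way to see that overhead on disk is to ask the index physical-stats DMV for the actual record sizes (a sketch; run it in the affected database with VIEW DATABASE STATE permission):

    ```sql
    -- Sketch: inspect actual record sizes for the problem table.
    -- DETAILED mode reads every page, so run it off-hours on large tables.
    SELECT index_level,
           min_record_size_in_bytes,
           max_record_size_in_bytes,
           avg_record_size_in_bytes
    FROM sys.dm_db_index_physical_stats(
             DB_ID(),                              -- current database
             OBJECT_ID(N'dbo.AccountingPeriod'),   -- the table from the error
             NULL, NULL, 'DETAILED');
    ```

    If the minimum record size reported here is far larger than the 26 bytes the column math predicts, something beyond the visible columns is being carried in each row.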

  • Well, I created the table under a new name and added the column fine. Tried the original table; same problem.

    Sooooooooo. I dropped the FK constraint, copied the data to a new holding table, dropped the table, recreated it, reinserted the data, and added back the constraint.

    The column then added just fine.

    So my issue is resolved, but I need to find out what happened.


  • And yes, the math works, but where in the heck is that overhead coming from?


  • As a total SWAG on my part, try running DBCC CLEANTABLE on the table that is giving you the problem and see if you still get the error.
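    For reference, the call looks like this (a sketch; 'YourDatabase' is a placeholder for the actual database name):

    ```sql
    -- Sketch: reclaim space held by dropped variable-length columns.
    -- Batch size 0 processes the whole table in one transaction;
    -- pass a row count instead to work in smaller batches on big tables.
    DBCC CLEANTABLE ('YourDatabase', 'dbo.AccountingPeriod', 0);
    ```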

    --Jeff Moden



  • Well, the CLEANTABLE command is for varchar columns and SUPPOSEDLY does nothing for ints.

    Tried it... He shoots, he scores... It worked.

    I considered this earlier, but knowing it does nothing for ints, I passed. After seeing your post I figured, screw it, I have tried everything else!! And sure enough it was the magic bullet.


  • Very interesting (peculiar) problem, Jeffery; I'm glad you got it fixed! I would have loved to see some DBCC results on that table.

    -----
    Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it. --Samuel Johnson

  • I'd like to know why DBCC CLEANTABLE worked and what it actually cleaned up. It seems like there was some metadata kicking around for that table that caused the issue, and DBCC CLEANTABLE cleaned it up.

  • Jeffery Williams (5/28/2015)


    Well the CLEANTABLE function is for varchar columns and SUPPOSEDLY does not do anything for int's. Tried it... It worked.

    I'd bet $ that at some point in that table's history, it had a large varchar column (e.g. varchar(8000)), and perhaps some other combination of varchar columns that were dropped, and this is the "leftovers" from those operations.
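    If that theory is right, the leftover dropped columns should still be visible in the partition metadata (a sketch; note that sys.system_internals_partition_columns is an undocumented internal view, so treat the results as informational only):

    ```sql
    -- Sketch: look for dropped-column remnants still carried in the row layout.
    SELECT p.partition_id,
           pc.partition_column_id,
           pc.max_inrow_length,
           pc.is_dropped
    FROM sys.partitions AS p
    JOIN sys.system_internals_partition_columns AS pc
         ON pc.partition_id = p.partition_id
    WHERE p.object_id = OBJECT_ID(N'dbo.AccountingPeriod')
      AND pc.is_dropped = 1;
    ```

    Rows with is_dropped = 1 would be exactly the "leftovers" described above.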

    Steve (aka sgmunson)

  • sgmunson (7/14/2015)


    I'd bet $ that at some point in that table's history, it had a large varchar column (e.g. varchar(8000)), and perhaps some other combination of varchar columns that were dropped, and this is the "leftovers" from those operations.

    That was my thought as well. Methinks I shall try to reproduce this behavior in my home lab tonight.
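    A minimal repro along those lines might look like this (a sketch under the dropped-varchar assumption; the table name and 'YourDatabase' are made up for illustration):

    ```sql
    -- Sketch: build up dropped-varchar baggage, then try to hit Msg 1701.
    CREATE TABLE dbo.DroppedColRepro (
        Id   int NOT NULL PRIMARY KEY,
        Pad1 varchar(8000) NOT NULL,
        Pad2 varchar(8000) NOT NULL
    );
    GO
    -- Dropping the columns removes them logically, but their space can
    -- linger in the row layout until the table is rebuilt or cleaned.
    ALTER TABLE dbo.DroppedColRepro DROP COLUMN Pad1, Pad2;
    GO
    -- If the dropped-column space still counts against the 8060-byte
    -- row-size limit, this ALTER may fail with Msg 1701...
    ALTER TABLE dbo.DroppedColRepro ADD NewCol int;
    GO
    -- ...and this should clear the leftovers so the ALTER succeeds.
    DBCC CLEANTABLE ('YourDatabase', 'dbo.DroppedColRepro', 0);
    ```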

  • Jeffery Williams (5/28/2015)


    Well the CLEANTABLE function is for varchar columns and SUPPOSEDLY does not do anything for int's. Tried it... It worked.

    It works on DROPPED variable-length columns, so the fact that there are no variable-length columns in the table now is neither here nor there. If there used to be one or more variable-length columns, now dropped but still occupying space, DBCC CLEANTABLE makes that space unoccupied. Rebuilding the clustered index (if the table has one) will also do that, but it does more than just that and costs more.
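    The rebuild alternative mentioned here is straightforward (a sketch; the index name is taken from the DDL earlier in the thread):

    ```sql
    -- Sketch: rebuilding the clustered index rewrites every row, which
    -- also discards dropped-column remnants, at the cost of a full rebuild.
    ALTER INDEX PK_AccountingPeriod ON dbo.AccountingPeriod REBUILD;
    ```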

    Tom
