Running out of Identity values

  • Ok

    Long story short, I am running out of Identity values on two tables (in different databases). They are very big tables, around a billion records.

    I can't reset the values via DBCC, so I designed this plan (a rough sketch of the commands is below) ...

    1. Export the data via BCP

    2. Import via BCP into the new table, which now has BIGINT on that column (while the system is online)

    -- System offline for an hour or maybe less --

    3. Insert or copy the new rows (added after the bcp) from the old table into the new table

    4. Drop the FKs on the old table and create them on the new table

    5. Rename the tables (keep the old one just in case)

    6. Compare row counts, just in case

    -- Bring system online --
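
    Roughly what I have in mind; the server, database, table and file names below are just placeholders.

    Steps 1 and 2 (native-format export, then import into the BIGINT copy):

    bcp MyDB.dbo.BigTable out D:\staging\BigTable.dat -n -S MyServer -T
    bcp MyDB.dbo.BigTable_new in D:\staging\BigTable.dat -n -S MyServer -T -b 100000 -h "TABLOCK"

    Steps 5 and 6, during the outage:

    EXEC sp_rename 'dbo.BigTable', 'BigTable_old';
    EXEC sp_rename 'dbo.BigTable_new', 'BigTable';

    SELECT COUNT_BIG(*) FROM dbo.BigTable_old;  -- row count check on both
    SELECT COUNT_BIG(*) FROM dbo.BigTable;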

    Am I missing something? I am planning to test the logic with those tables in a different environment first, of course.

    Any comment or advice is appreciated ...

    Thanks,

  • Are the records in the tables in question ever updated or just inserted?

    └> bt



    Forum Etiquette: How to post data/code on a forum to get the best help

  • bteraberry (10/2/2012)


    Are the records in the tables in question ever updated or just inserted?

    Good point...

    It only ever grows ... to my understanding, there are no updates and no inserts in the middle ...

  • Maybe I'm missing something.

    Why not just ALTER the column to be a bigint instead of an int?
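
    i.e. something along these lines (table and column names made up, of course):

    -- in-place widen of the identity column from int to bigint
    ALTER TABLE dbo.BigTable
        ALTER COLUMN Id bigint NOT NULL;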

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • ScottPletcher (10/2/2012)


    Maybe I'm missing something.

    Why not just ALTER the column to be a bigint instead of an int?

    He said these are very big tables. Altering the column means that every single record needs more storage space to accommodate the larger data type. Tons of downtime is likely to result because in most environments the extra space won't be available without shuffling everything around.

    └> bt



    Forum Etiquette: How to post data/code on a forum to get the best help

  • bteraberry (10/2/2012)


    ScottPletcher (10/2/2012)


    Maybe I'm missing something.

    Why not just ALTER the column to be a bigint instead of an int?

    He said these are very big tables. Altering the column means that every single record needs more storage space to accommodate the larger data type. Tons of downtime is likely to result because in most environments the extra space won't be available without shuffling everything around.

    He packed a huge table 100% full? Yeah, that's a mistake for a number of reasons, particularly in SQL Server. Even turning on certain options/features lengthens rows.

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • Some other thoughts/possibilities:

    Rather than export and then import, it should be faster to just directly INSERT INTO the new table, naturally doing a table lock on the new table.

    Pre-allocate enough log space to handle the activity (you'd want to do this whatever method you use).
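
    Something like this, roughly (names and sizes are placeholders to adjust for your system):

    -- pre-grow the log so the copy doesn't stall on autogrow
    ALTER DATABASE MyDB
        MODIFY FILE (NAME = N'MyDB_log', SIZE = 200GB);

    -- straight copy with a table lock on the target; can be minimally logged
    -- if the recovery model and the target table meet the usual conditions
    SET IDENTITY_INSERT dbo.BigTable_new ON;

    INSERT INTO dbo.BigTable_new WITH (TABLOCK) (Id, Col1, Col2)
    SELECT Id, Col1, Col2
    FROM dbo.BigTable;

    SET IDENTITY_INSERT dbo.BigTable_new OFF;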

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • ScottPletcher (10/2/2012)


    Maybe I'm missing something.

    Why not just ALTER the column to be a bigint instead of an int?

    No way! Because of the downtime! Why do you think I am taking all those extra measures and steps? The table has over 1 billion records 😎

    I did not design this table/database; I am the new DBA. It is my responsibility to take care of the system though, and that's what I am doing.

    I pretty much know what I need to do. I was just asking to validate the overall steps.

    The database is a mission-critical (MC) one. I can't touch it! Plus it is very old hardware with a serious I/O bottleneck. Any heavy activity on that table will make it unusable and trigger downtime.

  • sql-lover (10/2/2012)


    ScottPletcher (10/2/2012)


    Maybe I'm missing something.

    Why not just ALTER the column to be a bigint instead of an int?

    No way! Because of the downtime! Why do you think I am taking all those extra measures and steps? The table has over 1 billion records 😎

    I did not design this table/database; I am the new DBA. It is my responsibility to take care of the system though, and that's what I am doing.

    I pretty much know what I need to do. I was just asking to validate the overall steps.

    The database is a mission-critical (MC) one. I can't touch it! Plus it is very old hardware with a serious I/O bottleneck. Any heavy activity on that table will make it unusable and trigger downtime.

    Yes, I read the 1B rows.

    Do you really think that, with adequate free space in each block to begin with, it will be faster to insert the rows twice than to change the length of one column? I don't see how that's even remotely possible.

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • bteraberry (10/2/2012)


    ScottPletcher (10/2/2012)


    Maybe I'm missing something.

    Why not just ALTER the column to be a bigint instead of an int?

    He said these are very big tables. Altering the column means that every single record needs more storage space to accommodate the larger data type. Tons of downtime is likely to result because in most environments the extra space won't be available without shuffling everything around.

    You nailed it! ... that is correct ...

  • sql-lover (10/2/2012)


    bteraberry (10/2/2012)


    ScottPletcher (10/2/2012)


    Maybe I'm missing something.

    Why not just ALTER the column to be a bigint instead of an int?

    He said these are very big tables. Altering the column means that every single record needs more storage space to accommodate the larger data type. Tons of downtime is likely to result because in most environments the extra space won't be available without shuffling everything around.

    You nailed it! ... that is correct ...

    Can you show the results that demonstrate that claim? You only need an additional 4 bytes per row. Did you pack the table to 99-100% rather than 98%?

    Now, you might have done something dopey and put the identity in your clustered key, in which case you cannot just ALTER it. And dropping and recreating the clus index would indeed be much more overhead than a simple ALTER column.

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • Depending on your timeline, you could do this with virtually zero downtime. Rather than doing a bulk copy, you could easily chunk this up since you're not looking for updates but only inserts (if I understand you correctly.) Just select top (1) id from the new table and then get the next N records from the old table after that ID to do the insert. Wrap each iteration in its own transaction, add a maxdop of whatever makes you comfortable (for something like this I would typically use 25% of my processors on a busy machine) and include a short WAITFOR DELAY after each iteration. With such a strategy you can easily plow through the copy without adversely affecting your server. You will still need to have a very short period of downtime to rename the old and then make sure you didn't miss any new records coming in before you rename the new, but it will be a matter of seconds instead of an hour.
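
    A rough sketch of the loop I'm describing; the names, batch size, MAXDOP and delay are just placeholders to tune for your environment:

    DECLARE @LastId    bigint,
            @Rows      int = 1,
            @BatchSize int = 50000;

    WHILE @Rows > 0
    BEGIN
        -- highest id already copied into the new table
        SELECT @LastId = ISNULL(MAX(Id), 0) FROM dbo.BigTable_new;

        BEGIN TRANSACTION;

        SET IDENTITY_INSERT dbo.BigTable_new ON;

        -- next chunk from the old table, in id order
        INSERT INTO dbo.BigTable_new (Id, Col1, Col2)
        SELECT TOP (@BatchSize) Id, Col1, Col2
        FROM dbo.BigTable
        WHERE Id > @LastId
        ORDER BY Id
        OPTION (MAXDOP 2);             -- roughly 25% of an 8-core box

        SET @Rows = @@ROWCOUNT;        -- 0 once we've caught up

        SET IDENTITY_INSERT dbo.BigTable_new OFF;

        COMMIT TRANSACTION;

        WAITFOR DELAY '00:00:02';      -- short breather between chunks
    END;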

    If you're not worried about the downtime so much, I believe the plan you have established will work fine.

    └> bt



    Forum Etiquette: How to post data/code on a forum to get the best help

  • ScottPletcher (10/2/2012)


    sql-lover (10/2/2012)


    bteraberry (10/2/2012)


    ScottPletcher (10/2/2012)


    Maybe I'm missing something.

    Why not just ALTER the column to be a bigint instead of an int?

    He said these are very big tables. Altering the column means that every single record needs more storage space to accommodate the larger data type. Tons of downtime is likely to result because in most environments the extra space won't be available without shuffling everything around.

    You nailed it! ... that is correct ...

    Can you show the results that demonstrate that claim? You only need an additional 4 bytes per row. Did you pack the table to 99-100% rather than 98%?

    Now, you might have done something dopey and put the identity in your clustered key, in which case you cannot just ALTER it. And dropping and recreating the clus index would indeed be much more overhead than a simple ALTER column.

    The OP did mention that this column participates in a relationship, so I think it is at least a PK. Could you please clarify why putting the identity into the clustered key is "dopey"? Do you somehow know what this table holds? I want the same crystal ball ;-).

    Actually, I can easily believe that, even without an index on this column, it may be much faster to re-insert into a new table than to ALTER the existing one. It may depend on the position of this column (let me guess: it's the first one) and the width of the table. Also, the OP cannot allow the long downtime that an ALTER would require.

    I guess the best way would be the one suggested by the OP. Maybe it just needs to be batched.

    _____________________________________________
    "The only true wisdom is in knowing you know nothing"
    "O skol'ko nam otkrytiy chudnyh prevnosit microsofta duh!":-D
    (So many miracle inventions provided by MS to us...)

    How to post your question to get the best and quick help

  • bteraberry (10/2/2012)


    Depending on your timeline, you could do this with virtually zero downtime. Rather than doing a bulk copy, you could easily chunk this up since you're not looking for updates but only inserts (if I understand you correctly.) Just select top (1) id from the new table and then get the next N records from the old table after that ID to do the insert. Wrap each iteration in its own transaction, add a maxdop of whatever makes you comfortable (for something like this I would typically use 25% of my processors on a busy machine) and include a short WAITFOR DELAY after each iteration. With such a strategy you can easily plow through the copy without adversely affecting your server. You will still need to have a very short period of downtime to rename the old and then make sure you didn't miss any new records coming in before you rename the new, but it will be a matter of seconds instead of an hour.

    If you're not worried about the downtime so much, I believe the plan you have established will work fine.

    bt,

    Can you elaborate a bit more? Are you saying to insert the remaining records (the ones not copied via bcp) from the old table into the new one using SELECT ... INTO or something like that? Do you mind fleshing out that T-SQL a little further? Yes, that would be one of my steps, the final one before renaming the table and during the short offline window, if I understand you correctly.

    @Eugene,

    Thanks for the suggestion. Quick question.

    I'm not at work right now, but we do have several FKs and indexes on the source table, so I do need to drop and recreate the indexes on the new one after moving all the info (something along the lines of the sketch below).
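
    Once the load is done, something like this, I suppose (index and column names are just placeholders):

    -- nonclustered indexes are usually cheaper to build once, after the bulk load
    CREATE NONCLUSTERED INDEX IX_BigTable_new_CustomerId
        ON dbo.BigTable_new (CustomerId)
        WITH (SORT_IN_TEMPDB = ON);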

    What about adding a new nullable BIGINT column (no default) so there's no locking, and populating it with values via script in batches? A rough sketch is below.

    Then have a short outage while the new column is marked as IDENTITY(1billion,1), then rename the old and new Id columns. OldId will remain for reference, or you could try dropping the column during the outage, but that will take some time.
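
    The backfill piece could look something like this (table/column names, batch size and delay are just examples):

    -- adding a nullable column with no default is a metadata-only change
    ALTER TABLE dbo.BigTable ADD IdBig bigint NULL;

    DECLARE @Rows int = 1;

    WHILE @Rows > 0
    BEGIN
        -- in production you'd probably drive this off the clustered key
        -- instead of rescanning for NULLs, but this shows the idea
        UPDATE TOP (50000) dbo.BigTable
        SET IdBig = Id
        WHERE IdBig IS NULL;

        SET @Rows = @@ROWCOUNT;

        WAITFOR DELAY '00:00:01';
    END;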
