change identity from int to bigint

  • Anyone have any clue how long this will take? We have a table whose identity reached the max int, and I ran an ALTER TABLE command to change the identity column from int to bigint. It's been running for 5 hours at this point. Has anyone ever run into this? Any ideas on how long it should take? Is there a way I can calculate how many reads or writes it will have to do?

    It's at 1,859,837,918 logical reads. It chewed up the log drive, so I shrank some other databases' transaction logs. Since then the log file hasn't grown, but it does appear to be doing something, because the CPU usage and logical reads are climbing. It doesn't appear to be affecting the CPU or memory on the box.
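
    For context, a minimal sketch of the kind of statement involved and a rough way to size the work it implies, using placeholder names (dbo.BigTable and Id are assumptions, not the poster's actual objects). Widening the column rewrites every row, so the data page count is a rough lower bound on the reads and writes it has to do:

        -- The kind of ALTER described above (dbo.BigTable / Id are placeholders).
        ALTER TABLE dbo.BigTable
            ALTER COLUMN Id bigint NOT NULL;

        -- Rough lower bound on the work: pages in the heap/clustered index,
        -- each of which has to be read and rewritten at least once.
        SELECT SUM(ps.used_page_count) AS data_pages,
               SUM(ps.row_count)       AS total_rows
        FROM   sys.dm_db_partition_stats AS ps
        WHERE  ps.object_id = OBJECT_ID('dbo.BigTable')
          AND  ps.index_id IN (0, 1);   -- heap or clustered index only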

  • How long is it going to take? Difficult for us to say, not knowing anything about your environment. However, if you've done it by just using ALTER TABLE, this is an operation that is going to effectively read and write every record in that table. In one transaction. The transaction log is going to blow out massively - to something larger than the current size of the table.

    How long did it take in your test environment?

    How long has it got to go? Unfortunately, sys.dm_exec_requests doesn't give the percentage-complete value for all SQL statements, which is a pity, as it would be really quite useful for you right now (a monitoring query of that sort is sketched at the end of this post).

    Have a look at Aaron Bertrand's series "Minimizing the Impact of Widening an Identity Column". Probably a bit late for this one, but might help next time.

    Thomas Rushton
    blog: https://thelonedba.wordpress.com
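
    A minimal monitoring sketch along the lines mentioned above, assuming the ALTER TABLE is running on session_id 52 (a placeholder). For most ALTER TABLE work percent_complete stays at zero, but the read/write counters at least show whether it is still moving:

        SELECT r.session_id,
               r.status,
               r.command,
               r.wait_type,
               r.percent_complete,                 -- usually 0 for ALTER TABLE
               r.total_elapsed_time / 60000 AS elapsed_minutes,
               r.logical_reads,
               r.reads,                            -- physical reads
               r.writes
        FROM   sys.dm_exec_requests AS r
        WHERE  r.session_id = 52;                  -- placeholder SPID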

  • So the log space used in the log file was 357,655 megs on 8/11.

    Now it's at 365,908 megs. This process has been running for 6 hours at this point. Shouldn't the size be much larger by now? At one point while the ALTER TABLE script was running I had to free up space on the log drive. Not sure if that had any effect.

    The table, as of midnight, was 676,644,856 megs.

    So are you saying the log file will be at least 676,644,856 megs? I know the answer to this, but is there any way to kill this process without an extensive rollback?

    Also, you said it will read each record and write each record. Does that mean I can track progress based on the number of physical reads and writes it does?

    What are my options at this point? Even if I reboot, it will still roll back, correct?
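
    One way to watch both of the things asked about here is to look at overall log usage and at the log generated by the open transaction itself. A sketch, again assuming the ALTER TABLE session is SPID 52 (a placeholder):

        -- Percentage of each database's log file currently in use.
        DBCC SQLPERF (LOGSPACE);

        -- Log bytes written so far by the open transaction on the ALTER session.
        SELECT st.session_id,
               dt.database_transaction_begin_time,
               dt.database_transaction_log_bytes_used     / 1048576 AS log_mb_used,
               dt.database_transaction_log_bytes_reserved / 1048576 AS log_mb_reserved
        FROM   sys.dm_tran_session_transactions  AS st
        JOIN   sys.dm_tran_database_transactions AS dt
               ON dt.transaction_id = st.transaction_id
        WHERE  st.session_id = 52;                 -- placeholder SPID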

  • I killed it. It was running for 8 hours. I don't know what else to do, to be honest. How long do you suspect the rollback will take?

    The plan:

    In the meantime, to get away from the transaction log writes, I'm going to develop an SSIS package to bulk copy the data into a new table with the bigint identity. When it's done I'll rename the table, rebuild the indexes, then shrink the MDF, as it will have an extra 600 gigs of space. Sound about right? Suggestions?
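
    A rough sketch of the cut-over step in that plan, with dbo.BigTable and dbo.BigTable_New as placeholder names. sp_rename takes schema modification locks, so this wants a short window with no activity against the table:

        BEGIN TRANSACTION;

        EXEC sp_rename 'dbo.BigTable',     'BigTable_Old';
        EXEC sp_rename 'dbo.BigTable_New', 'BigTable';

        COMMIT TRANSACTION;

        -- Make sure the identity seed matches what was just loaded.
        DBCC CHECKIDENT ('dbo.BigTable', RESEED);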

  • A couple of thoughts:

    Shrinking your DB is going to fragment all of your indexes; can you avoid it? (A fragmentation check is sketched at the end of this post.)

    As for the rest of your process, have you tested it?

    I would also be worried about how large a maintenance window I would have, and whether I would have to take an outage. Renaming and cloning a table always makes me worry about whether there are active users trying to access it for reads or writes, not to mention complications like foreign keys.

    Jim
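
    On the fragmentation point, a sketch of how the damage could be checked after a shrink, run in the database in question (the page_count threshold is just a placeholder to skip tiny indexes):

        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name                     AS index_name,
               ips.avg_fragmentation_in_percent,
               ips.page_count
        FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN   sys.indexes AS i
               ON  i.object_id = ips.object_id
               AND i.index_id  = ips.index_id
        WHERE  ips.page_count > 1000
        ORDER BY ips.avg_fragmentation_in_percent DESC;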

  • To close the loop here: in my case the table in question was a prod table anyhow, so I had to fix it in prod. All inserts were failing.

    The ALTER COLUMN was still running at 9 hours. I killed it, and it took 2 hours to roll back. My numbers for loading the table into another table with a bigint identity and then rebuilding the indexes put it at over 4 days. Also not an option. Fortunately, in my case I had changed the clustered index from the identity to the unique key when I started a few years back, and as such no one was using the identity; it no longer had any indexes on it. So I changed all the load processes to set the identity to 1 with SET IDENTITY_INSERT ON functionality. Two lines of code fixed my issue as a result.

    I verified that change tracking works off the primary key and not the identity, thankfully.

    So as you can imagine, my day was a bit stressful, and I thought for sure I'd be working on this over the next 4 days. My blood pressure can't take it.

    Now I'm all good and drinking a glass of wine on the porch with the dog.

    Takeaways: set up alerting so I get alerted before an identity runs out (a headroom check along those lines is sketched below), and partition the big tables.

    I'm an idiot for letting it get this large. In my defense, the table recently grew faster than normal for business reasons. I have a task that I haven't implemented yet to keep 2 years of data in SQL and keep the rest in Hadoop. I just moved that task's priority up.

    Thanks for all your help.
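
    A sketch of the kind of headroom check that alerting could run, assuming ordinary ascending int identities (last_value is NULL until a table's first insert):

        SELECT OBJECT_SCHEMA_NAME(ic.object_id)            AS schema_name,
               OBJECT_NAME(ic.object_id)                   AS table_name,
               ic.name                                     AS column_name,
               CONVERT(bigint, ic.last_value)              AS last_value,
               2147483647 - CONVERT(bigint, ic.last_value) AS values_remaining
        FROM   sys.identity_columns AS ic
        JOIN   sys.types AS t
               ON t.user_type_id = ic.user_type_id
        WHERE  t.name = 'int'
        ORDER BY values_remaining;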

  • We just hit a similar (but way smaller) situation. Altering the size of a column in a large table is a giant pain.

    If possible, the best bet is to create a new empty table with the proper identity column type, then copy the data into that table with IDENTITY_INSERT on.

    As noted, doing the ALTER in place will read/modify every row. Also, all the indexes on the table must be rebuilt. Our experience was that a table with 9 GB of data (83 million rows) and 10 GB of indexes used over 56 GB of space before it failed - 3 hours later. I did a script to back up the data to a new table (with no identity column, indexes, or constraints), truncate the original, remove its indexes/constraints, alter the table, reload (with IDENTITY_INSERT on), replace the constraints/indexes, and the entire process ran in a few minutes.

    If you have to do it in place, then first drop all indexes and disable all triggers, constraints, and foreign keys. Also drop the primary key constraint on the identity column. Then - at least it will only thrash the actual table, and not all the indexes and constraints. When done, restore the constraints and indexes, etc.

    Also - when you finish, I recommend running "DBCC CHECKIDENT('<tablename>', RESEED)" to make sure the identity value is reset based on the current highest value.
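
    A condensed sketch of that copy-out-and-back process, with dbo.BigTable, Id, and Payload as placeholder names and the index/constraint drops and re-creates left out:

        -- 1. Copy the data out to a plain staging heap; casting Id drops the
        --    IDENTITY property, so the staging table is an ordinary bigint column.
        SELECT CAST(Id AS bigint) AS Id, Payload
        INTO   dbo.BigTable_Staging
        FROM   dbo.BigTable;

        -- 2. Empty the original (after dropping its indexes, constraints, and
        --    foreign keys) so the column change below has no rows to rewrite.
        TRUNCATE TABLE dbo.BigTable;

        -- 3. Widen the now-empty identity column.
        ALTER TABLE dbo.BigTable ALTER COLUMN Id bigint NOT NULL;

        -- 4. Reload, keeping the original identity values.
        SET IDENTITY_INSERT dbo.BigTable ON;

        INSERT INTO dbo.BigTable (Id, Payload)
        SELECT Id, Payload
        FROM   dbo.BigTable_Staging;

        SET IDENTITY_INSERT dbo.BigTable OFF;

        -- 5. Recreate indexes and constraints, then reset the seed to the current max.
        DBCC CHECKIDENT ('dbo.BigTable', RESEED);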
