
Running out of Identity values
Posted Tuesday, October 2, 2012 5:18 PM
Old Hand
bteraberry (10/2/2012)
ScottPletcher (10/2/2012)
Maybe I'm missing something.

Why not just ALTER the column to be a bigint instead of an int?


He said these are very big tables. Altering the column means that every single record needs more storage space to accommodate the larger data type. Tons of downtime is likely to result, because in most environments the extra space won't be available without shuffling everything around.


You nailed it! ... that is correct ...
Post #1367337
Posted Tuesday, October 2, 2012 5:24 PM
SSCommitted
sql-lover (10/2/2012)
bteraberry (10/2/2012)
ScottPletcher (10/2/2012)
Maybe I'm missing something.

Why not just ALTER the column to be a bigint instead of an int?


He said these are very big tables. Altering the column means that every single record needs more storage space to accommodate the larger data type. Tons of downtime is likely to result, because in most environments the extra space won't be available without shuffling everything around.


You nailed it! ... that is correct ...



Can you show the results that demonstrate that claim? You only need an additional 4 bytes per row. Did you pack the table to 99-100% rather than 98%?


Now, you might have done something dopey and put the identity in your clustered key, in which case you cannot just ALTER it. And dropping and recreating the clustered index would indeed be much more overhead than a simple ALTER of the column.
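For reference, the straight ALTER I mean would look something like this (placeholder table and column names); it only works directly if the column is not part of an index, primary key, or foreign key:

-- Placeholder names: dbo.Orders_old with an int identity column OrderId
ALTER TABLE dbo.Orders_old
    ALTER COLUMN OrderId bigint NOT NULL;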


SQL DBA,SQL Server MVP('07, '08, '09)
"In America, every man is innocent until proven broke!" Brant Parker
Post #1367341
Posted Tuesday, October 2, 2012 5:32 PM
Ten Centuries
Depending on your timeline, you could do this with virtually zero downtime. Rather than doing a bulk copy, you could easily chunk this up, since you're not looking for updates but only inserts (if I understand you correctly). Just select the top (1) id from the new table and then get the next N records from the old table after that ID to do the insert. Wrap each iteration in its own transaction, add a MAXDOP of whatever makes you comfortable (for something like this I would typically use 25% of my processors on a busy machine), and include a short WAITFOR DELAY after each iteration. With such a strategy you can easily plow through the copy without adversely affecting your server.

You will still need a very short period of downtime to rename the old table and then make sure you didn't miss any new records coming in before you rename the new one, but it will be a matter of seconds instead of an hour.

If you're not worried about the downtime so much, I believe the plan you have established will work fine.
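Something like this (table and column names -- dbo.Orders_old, dbo.Orders_new, OrderId -- are just placeholders, and the batch size, MAXDOP, and delay are only examples to tune):

SET IDENTITY_INSERT dbo.Orders_new ON;  -- keep the original key values

DECLARE @BatchSize int = 50000;
DECLARE @LastId bigint;

WHILE 1 = 1
BEGIN
    -- highest id already copied into the new table
    SELECT @LastId = ISNULL(MAX(OrderId), 0) FROM dbo.Orders_new;

    BEGIN TRANSACTION;

    INSERT INTO dbo.Orders_new (OrderId, OrderDate, CustomerId)
    SELECT TOP (@BatchSize) OrderId, OrderDate, CustomerId
    FROM dbo.Orders_old
    WHERE OrderId > @LastId
    ORDER BY OrderId
    OPTION (MAXDOP 2);

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION;
        BREAK;  -- caught up with the old table
    END;

    COMMIT TRANSACTION;

    WAITFOR DELAY '00:00:02';  -- give other work room between batches
END;

SET IDENTITY_INSERT dbo.Orders_new OFF;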


└> bt


Forum Etiquette: How to post data/code on a forum to get the best help
Post #1367345
Posted Tuesday, October 2, 2012 7:02 PM
SSCrazy
ScottPletcher (10/2/2012)
sql-lover (10/2/2012)
bteraberry (10/2/2012)
ScottPletcher (10/2/2012)
Maybe I'm missing something.

Why not just ALTER the column to be a bigint instead of an int?


He said these are very big tables. Altering the column means that every single record needs more storage space to accommodate the larger data type. Tons of downtime is likely to result, because in most environments the extra space won't be available without shuffling everything around.


You nailed it! ... that is correct ...



Can you show the results that demonstrate that claim? You only need an additional 4 bytes per row. Did you pack the table to 99-100% rather than 98%?


Now, you might have done something dopey and put the identity in your clustered key, in which case you cannot just ALTER it. And dropping and recreating the clustered index would indeed be much more overhead than a simple ALTER of the column.


The OP did mention that this column participates in a relationship, so I do think it is at least a PK. Could you please clarify why putting the identity into the clustered key is "dopey"? Do you somehow know what this table holds? I want the same crystal ball.

Actually, I can easily believe that, even without an index on this column, it may be much faster to re-insert into a new table than to ALTER the existing one. It may depend on the position of the column (let me guess: it's the first one) and on how wide the table is. Also, the OP cannot allow the long downtime that an in-place ALTER would require.

I guess the best approach is the one suggested by the OP. Maybe it just needs to be batched.



_____________________________________________
"The only true wisdom is in knowing you know nothing"
"O skol'ko nam otkrytiy chudnyh prevnosit microsofta duh!"
(So many miracle inventions provided by MS to us...)

How to post your question to get the best and quick help
Post #1367365
Posted Wednesday, October 3, 2012 5:18 AM
Old Hand
bteraberry (10/2/2012)
Depending on your timeline, you could do this with virtually zero downtime. Rather than doing a bulk copy, you could easily chunk this up, since you're not looking for updates but only inserts (if I understand you correctly). Just select the top (1) id from the new table and then get the next N records from the old table after that ID to do the insert. Wrap each iteration in its own transaction, add a MAXDOP of whatever makes you comfortable (for something like this I would typically use 25% of my processors on a busy machine), and include a short WAITFOR DELAY after each iteration. With such a strategy you can easily plow through the copy without adversely affecting your server.

You will still need a very short period of downtime to rename the old table and then make sure you didn't miss any new records coming in before you rename the new one, but it will be a matter of seconds instead of an hour.

If you're not worried about the downtime so much, I believe the plan you have established will work fine.


bt,

Can you elaborate a bit more? Are you saying to insert the remaining records (those not copied via bcp) from the old table into the new one using SELECT INTO or something like that? Would you mind posting the T-SQL? Yes, that would be one of my steps, the final one, before renaming the table and during the short offline window, if I understand you correctly.

@Eugene,

Thanks for the suggestion. Quick question.

I'm not at work right now, but we do have several FKs and indexes on the source table, so I will need to drop and recreate the indexes on the new one after moving all the data.

Post #1367569
Posted Wednesday, October 3, 2012 5:56 AM
Ten Centuries
What about adding a new BIGINT column with a NULL default so there's no locking, and populating it with values via a script, in batches?
Then have a short outage while the new column is marked as IDENTITY(1billion,1) and the old and new Id columns are renamed. The old Id will remain for reference, or you could try dropping it during the outage, but that will take some time.
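Roughly like this for the add-and-backfill step (placeholder names dbo.Orders_old / OrderId / OrderIdNew; the later IDENTITY switch and the column renames would still need their own outage):

-- Add the nullable BIGINT column (metadata-only, no data movement yet)
ALTER TABLE dbo.Orders_old ADD OrderIdNew bigint NULL;
GO

DECLARE @Rows int = 1;
WHILE @Rows > 0
BEGIN
    -- backfill in small batches to keep locking and log growth manageable
    UPDATE TOP (50000) dbo.Orders_old
    SET OrderIdNew = OrderId
    WHERE OrderIdNew IS NULL;

    SET @Rows = @@ROWCOUNT;

    WAITFOR DELAY '00:00:01';
END;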
Post #1367591
Posted Wednesday, October 3, 2012 5:57 AM
SSC-Dedicated
ScottPletcher (10/2/2012)
Even turning on certain options / features lengthens rows in SQL Server.


I've never heard of such a thing, Scott. Do you have an example of this?


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #1367594
Posted Wednesday, October 3, 2012 6:10 AM
SSC-Dedicated
foxxo (10/3/2012)
What about adding a new BIGINT column with a NULL default so there's no locking, and populating it with values via a script, in batches?
Then have a short outage while the new column is marked as IDENTITY(1billion,1) and the old and new Id columns are renamed. The old Id will remain for reference, or you could try dropping it during the outage, but that will take some time.


I believe that would cause massive page splitting to make room for the new column. Dropping the clustered index probably wouldn't help here either because the resulting heap would still need to expand the rows.

I'd have to do some testing to make sure it would work correctly but I would try making the new table as an empty table with the IDENTITY seed on the BIGINT column larger than the largest value in the old table. Then, combine the two tables using a partitioned view. This new view would be named the same as the old table and, of course, the old table would be renamed. Then, create an INSTEAD OF trigger to intercept new inserts to force the new inserts into the new table rather than the old. Correctly done, the partitioned view would work for UPDATEs, DELETEs, and SELECTs without further complication.

Except for a possibly new constraint on the old and new tables, the whole shootin' match could be done online in about 65 milliseconds.

Again, this all is just a thought and should be tested prior to actually trying to implement it. And, yeah.... it'll take a bit of planning to do it right the first time.
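Just as a rough, untested sketch of the idea (all table, column, and constraint names here are placeholders, and the boundary value is only an example above the old table's maximum):

-- Rename the old table; the view will take over its name
EXEC sp_rename 'dbo.Orders', 'Orders_old';

-- New table seeded above the largest value in the old table, with range CHECKs
CREATE TABLE dbo.Orders_new
(
    OrderId    bigint IDENTITY(2200000000, 1) NOT NULL PRIMARY KEY,
    OrderDate  datetime NOT NULL,
    CustomerId int      NOT NULL,
    CONSTRAINT CK_Orders_new_Range CHECK (OrderId >= 2200000000)
);

ALTER TABLE dbo.Orders_old
    ADD CONSTRAINT CK_Orders_old_Range CHECK (OrderId < 2200000000);
GO

-- Partitioned view named like the original table so existing code keeps working
CREATE VIEW dbo.Orders
AS
SELECT OrderId, OrderDate, CustomerId FROM dbo.Orders_old
UNION ALL
SELECT OrderId, OrderDate, CustomerId FROM dbo.Orders_new;
GO

-- Route all new inserts to the new table
CREATE TRIGGER dbo.trg_Orders_Insert ON dbo.Orders
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO dbo.Orders_new (OrderDate, CustomerId)
    SELECT OrderDate, CustomerId FROM inserted;
END;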


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #1367599
Posted Wednesday, October 3, 2012 8:13 AM
Old Hand
Also, given that you are running on an old server with I/O bottlenecks, are you running SQL 2008? An earlier version may affect the answer.

It's not an area I am familiar with, but would partitioning the table help him out here?

Post #1367708
Posted Wednesday, October 3, 2012 8:15 AM
Old Hand
Another thing: if you are going to create a new table and manually insert the records into it, don't forget to turn IDENTITY_INSERT on; otherwise you will potentially destroy the key sequence. Not good if it is used as an FK by other tables.
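For example (placeholder names again):

SET IDENTITY_INSERT dbo.Orders_new ON;

-- the identity column must be listed explicitly while IDENTITY_INSERT is on
INSERT INTO dbo.Orders_new (OrderId, OrderDate, CustomerId)
SELECT OrderId, OrderDate, CustomerId
FROM dbo.Orders_old;

SET IDENTITY_INSERT dbo.Orders_new OFF;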
Post #1367710