RE: Fastest way to apply primary key on a huge table so downtime is minimal?

  • I think you have a great plan, with a few caveats:

    1) Are rows in the original table only ever ADDED, or are there also UPDATES and DELETES? If it's just the former, you are fine. If either of the other two is happening, you are screwed, because you would have to track all of those changes in the new table too while you are adding rows to it.

    2) I wouldn't do just 100K rows per hour. I would migrate them in batches small enough that you get index seeks on the column controlling the batching. You could do 5-10K rows per batch and put in a WAITFOR DELAY '00:00:01' (or whatever) between batches to give the table and server some breathing room. Locking should not be a problem when the rows are acquired via index seeks. I have done similar operations on billion-row tables with no issues. (See the batching sketch after this list.)

    3) You can use DBCC SQLPERF(LOGSPACE) or the log-space DMVs to keep an eye on transaction log size while the loop runs and take log backups as necessary. (See the log-check sketch after this list.)

    4) Obviously, explicit transactions and error checking are mandatory in an operation like this; the batching sketch below wraps each batch in a transaction with TRY/CATCH.
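
    To illustrate points 2 and 4, here is a minimal sketch of that kind of batched copy. The table and column names (dbo.BigTable, dbo.BigTable_New, ID, Col1, Col2) are placeholders, and it assumes an ever-increasing, indexed ID column so the WHERE clause gets an index seek:

    DECLARE @BatchSize int    = 10000,
            @LastId    bigint = 0,
            @MaxId     bigint;

    SELECT @MaxId = MAX(ID) FROM dbo.BigTable;

    WHILE @LastId < @MaxId
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;

            INSERT INTO dbo.BigTable_New (ID, Col1, Col2)
            SELECT TOP (@BatchSize) ID, Col1, Col2
            FROM dbo.BigTable
            WHERE ID > @LastId            -- index seek keeps the locks short and narrow
            ORDER BY ID;

            SELECT @LastId = MAX(ID) FROM dbo.BigTable_New;

            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
            THROW;                        -- stop the loop and surface the error
        END CATCH;

        WAITFOR DELAY '00:00:01';         -- breathing room between batches
    END;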
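
    And for point 3, a sketch of a log-space check you could drop inside that loop. I'm using sys.dm_db_log_space_usage (SQL Server 2012) rather than parsing DBCC SQLPERF(LOGSPACE) output; the database name, 70% threshold, and backup path are placeholders:

    IF (SELECT used_log_space_in_percent
        FROM sys.dm_db_log_space_usage) > 70.0
    BEGIN
        BACKUP LOG MyDatabase
            TO DISK = N'X:\Backups\MyDatabase_log.trn';   -- placeholder path
    END;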

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service