• Whether you copy the data to a new filegroup with an INSERT/SELECT, or move it by creating and then dropping a clustered index, the entire table has to be read and written; there is no real difference in the amount of I/O involved. In addition, if your existing table has small rows and little fragmentation (i.e. many rows per page and little free space), the step of adding the identity column could cause page splits and double the size of the table in the primary filegroup before you even start to move it.

    What you need to consider is how the move will be logged. Trying to move the table with a single INSERT/SELECT will require logging every row moved. This will put a huge strain on your transaction log, possibly filling the log drive and stopping all activity in the database. The same problem occurs with the identity + index technique: all of the work is logged as a single transaction.
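    As a sketch of the identity + index technique (the table, column, and filegroup names here are made up), the move itself is just building the clustered index on the target filegroup:

```sql
-- Hypothetical names: dbo.BigTable currently on [PRIMARY], moving to [FG_Archive].
-- Adding an identity column gives the table a unique key to cluster on:
ALTER TABLE dbo.BigTable ADD RowID bigint IDENTITY(1,1) NOT NULL;

-- Creating the clustered index ON the new filegroup physically moves every row:
CREATE UNIQUE CLUSTERED INDEX IX_BigTable_RowID
    ON dbo.BigTable (RowID)
    ON [FG_Archive];

-- Drop the index afterwards if you want the table back as a heap;
-- the data stays on FG_Archive:
DROP INDEX IX_BigTable_RowID ON dbo.BigTable;
```

    Keep in mind that the CREATE INDEX runs as one transaction, which is exactly the log pressure problem.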

    You also need to give some thought to your disaster recovery options in case you screw this up. Take a full backup before and after moving the table. Has anyone ever tried to do a complete restore of this monstrosity? Can you schedule a maintenance period to do this with no interference? The best case would be if you can kick all the users off the system and put the database in single user mode for the duration.

    If you can afford to switch to the simple recovery model, you can move the table by using INSERT/SELECT in a loop to move a reasonable number of rows at a time. The definition of "reasonable" varies, but could be between 1000 and 50000 rows. Every row copied is still logged, but because each batch commits separately the log can be truncated between batches, so there is little chance of running out of log space. This will also run much faster on a large table than trying to do it all at once.
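    A minimal sketch of the batched loop, assuming (hypothetically) that dbo.BigTable has the identity column RowID and dbo.BigTable_New has already been created on the target filegroup:

```sql
DECLARE @BatchSize int = 10000,
        @LastID    bigint = 0,
        @Rows      int = 1;

WHILE @Rows > 0
BEGIN
    -- Copy one range of rows per iteration; each INSERT is its own transaction.
    INSERT INTO dbo.BigTable_New (RowID, Col1, Col2)
    SELECT RowID, Col1, Col2
    FROM dbo.BigTable
    WHERE RowID >  @LastID
      AND RowID <= @LastID + @BatchSize;

    SET @Rows   = @@ROWCOUNT;
    SET @LastID = @LastID + @BatchSize;

    CHECKPOINT;  -- in simple recovery, allows the log to truncate between batches
END
```

    Ranging on the identity column keeps each batch a cheap seek; gaps in the identity values just make some batches smaller than @BatchSize.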

    The least amount of logging (and therefore the fastest) method would be to dump the data to a flat file with BCP, and then use BCP to import it into the new table. If a number of conditions are met (the database is in the simple or bulk-logged recovery model, the target table is empty or has no indexes, a TABLOCK hint is used; see BOL for details), the data will be imported with minimal logging. An added benefit is that you have an external copy of the table in case something blows up.
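    A sketch of the BCP round trip from a command prompt (server, database, and file paths are made up; -n is native format, -T is a trusted connection, -b sets the batch size, and the TABLOCK hint is one of the conditions for minimal logging):

```
bcp MyDB.dbo.BigTable out C:\Dump\BigTable.dat -S MyServer -T -n

bcp MyDB.dbo.BigTable_New in C:\Dump\BigTable.dat -S MyServer -T -n -b 10000 -h "TABLOCK"
```

    Native format (-n) is the fastest choice when both sides are SQL Server, since nothing is converted to and from text.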