• Smendle (9/7/2016)


    Why not just bulk load into a staging table then index as needed then drop main-table and rename the staging table?

    I've written a PowerShell script which does the following (among other things):

    1) Clones the source table as:

    SELECT <column list> INTO TgtDatabase.schema.zzz_<TgtTable> FROM SrcDatabase.schema.table WHERE 0=1

    2) Bulk copies the source data into zzz_<TgtTable>.

    3) Sets the PK and indexes on zzz_<TgtTable> (in a script separate from the bulk copy)

    4) Drops the target table and renames the staging table as:

    EXEC TgtDatabase.sys.sp_rename 'schema.zzz_<TgtTable>', '<TgtTable>'
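
    To illustrate steps 1 and 4, here is a minimal sketch (table and schema names such as `dbo.MyTable` are placeholders, not the poster's actual objects). Note that `sp_rename` takes the old and new names as two comma-separated arguments, and the new name must not be schema-qualified:

    ```sql
    -- Run in the context of TgtDatabase.
    -- 1) Clone the source table's structure; WHERE 0=1 means no rows are copied.
    SELECT *
    INTO   dbo.zzz_MyTable
    FROM   SrcDatabase.dbo.MyTable
    WHERE  0 = 1;

    -- ... steps 2 and 3: bulk copy the data, then build PK and indexes ...

    -- 4) Swap: drop the live table and rename the staging table into its place.
    BEGIN TRANSACTION;
        DROP TABLE dbo.MyTable;
        EXEC sp_rename 'dbo.zzz_MyTable', 'MyTable';
    COMMIT TRANSACTION;
    ```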

    Problems/Questions:

    A) Is there a way I can programmatically capture the PK, FK, indexes, etc. from the target table, and then programmatically recreate those items on zzz_table after the bulk copy? Right now all that post-processing in step 3 is hardcoded. If so, what's the best approach: a stored procedure, pulling the metadata into PowerShell, or something else?
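
    One possible starting point for question A (a sketch, not the poster's script; `dbo.MyTable` is a placeholder) is to read the index metadata from the catalog views and build the `CREATE INDEX` statements from that:

    ```sql
    -- Sketch: list each index on the target table with its key column list,
    -- ready to be turned into CREATE INDEX / ALTER TABLE ... ADD CONSTRAINT text.
    SELECT  i.name                      AS index_name,
            i.is_primary_key,
            i.is_unique,
            STUFF((SELECT ', ' + c.name
                   FROM   sys.index_columns ic
                   JOIN   sys.columns c
                          ON  c.object_id = ic.object_id
                          AND c.column_id = ic.column_id
                   WHERE  ic.object_id = i.object_id
                   AND    ic.index_id  = i.index_id
                   AND    ic.is_included_column = 0
                   ORDER BY ic.key_ordinal
                   FOR XML PATH('')), 1, 2, '') AS key_columns
    FROM    sys.indexes i
    WHERE   i.object_id = OBJECT_ID('dbo.MyTable')
    AND     i.index_id > 0;   -- skip the heap entry
    ```

    Foreign keys would need a similar query against sys.foreign_keys and sys.foreign_key_columns.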

    B) I'm creating the indexes named as IXNN_TABLENAME_ColumnName. However, this causes a name collision with the existing target table. I prefer not to drop the target table until step 4 so that users can keep working during the bulk copy. I could create the indexes as IXNN_zzz_TABLENAME_ColumnName instead. But is there a way I can programmatically rename the PK, FK, indexes, etc. when I rename the table?
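
    For what it's worth, `sp_rename` can also rename an index when its third argument is 'INDEX', so one hedged sketch for question B (placeholder names throughout, assuming the interim indexes carry a `_zzz_` marker) is to generate one rename per index right after the table swap:

    ```sql
    -- Sketch: after renaming zzz_MyTable to MyTable, strip the _zzz_ marker
    -- from every index name that carries it.
    DECLARE @sql nvarchar(max) = N'';

    SELECT @sql += N'EXEC sp_rename '
                +  N'''dbo.MyTable.' + i.name + N''', '
                +  N'''' + REPLACE(i.name, N'_zzz_', N'_') + N''', '
                +  N'''INDEX'';' + CHAR(13)
    FROM   sys.indexes i
    WHERE  i.object_id = OBJECT_ID(N'dbo.MyTable')
    AND    i.name LIKE N'%[_]zzz[_]%';

    EXEC sys.sp_executesql @sql;
    ```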

    C) In general this processing will run overnight. However, if it is long running, it could still be running in the morning. I assume that if someone is querying the target table while step 4 executes, that step will fail. Is there a way to prevent that, say a long timeout value for the table-rename step?
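
    On question C: the drop/rename needs a schema-modification lock, so it queues behind any running queries; `SET LOCK_TIMEOUT` gives explicit control over how long it will wait before giving up. A minimal sketch (placeholder table names, 10 minutes chosen arbitrarily):

    ```sql
    -- Sketch: wait up to 10 minutes for the schema-modification lock; if readers
    -- still hold locks after that, error 1222 is raised and the swap can be retried.
    SET LOCK_TIMEOUT 600000;   -- milliseconds; -1 (the session default) waits indefinitely

    BEGIN TRANSACTION;
        DROP TABLE dbo.MyTable;
        EXEC sp_rename 'dbo.zzz_MyTable', 'MyTable';
    COMMIT TRANSACTION;
    ```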

    Thanks,

    Scott

    P.S.: I'm sure the PowerShell script could be improved. If anyone is interested in the script, either to critique it or use it themselves, let me know and I'll upload it to GitHub.