Copy a big table (250G) from one server to another

  • Hi,

    I need to copy a big table (about 250 GB, basically used to store images) from one MSSQL Server to another. I don't need to transform the data, but I don't know whether this command will work:

    INSERT INTO Temp SELECT * FROM [SourceServerName].DBName.dbo.Temp

    Thanks for your help

    If space isn't an issue, I'd restore a backup to the other server and drop the things you don't need, keeping only that table.

    ______________________________________________________________________________
    Never argue with an idiot; they'll drag you down to their level and beat you with experience.

    Hi, I need to insert it into a specific existing database. Thanks

    Your command will work; there's just a whole bunch of things to think about. The first thing that comes to mind is that the entire process will be logged and will generate one heck of a large transaction! Is your DB in simple, bulk-logged, or full recovery?

    ______________________________________________________________________________
    Never argue with an idiot; they'll drag you down to their level and beat you with experience.

  • Hi,

    The database is in Simple Mode.

    Thanks

  • Don't use the insert command as you currently have it. Contrary to popular belief, having the database in the SIMPLE recovery mode does NOT prevent the log file from being used. Since you're trying to transfer all 250GB in a single query, your log file will explode to at least that size.

    You could write a WHILE loop to transfer the data in smaller chunks but it'll still take a while and fill the pipe while you're doing it.
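
    For example, just as a rough sketch (the "ID" column is a stand-in for whatever ascending key the table really has, and the batch size is only a guess), something like this keeps each transaction small:

    DECLARE @BatchSize INT = 100000;   -- rows per chunk; size it to your row width
    DECLARE @LastID    INT = 0;

    WHILE 1 = 1
    BEGIN
        -- each INSERT commits on its own, so the log can be reused between chunks
        INSERT INTO Temp
        SELECT TOP (@BatchSize) *
        FROM [SourceServerName].DBName.dbo.Temp
        WHERE ID > @LastID
        ORDER BY ID;

        IF @@ROWCOUNT = 0 BREAK;       -- nothing left to copy

        SELECT @LastID = MAX(ID) FROM Temp;
    END;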

    It's time for a "carrier pigeon" to do the first load. Before we get into that, are both servers on a SAN, and are both SANs of the same make and model?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Hi,

    They are on the same SAN. Thanks

  • In that case, you might try what SQL Server itself uses for replication if you have the disk space. BCP in native format. Export the table to a file, then import it in half million (or less depending on the size of your rows) row chunks using the batch size setting. It will keep your log file from exploding.
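
    For example (only a sketch: the file path, destination database name, and the -T trusted-connection switch are my assumptions), the export/import pair from a command prompt would look something like this:

    rem export the whole table in SQL Server native format
    bcp DBName.dbo.Temp out D:\Transfer\Temp.dat -n -S SourceServerName -T

    rem import on the destination, committing a batch every 500,000 rows
    bcp DestDBName.dbo.Temp in D:\Transfer\Temp.dat -n -S DestinationServerName -T -b 500000

    The -b setting is what puts each batch into its own transaction, so the log can clear between batches in SIMPLE recovery.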

    As an alternative, you could have the infrastructure folks do what I believe is called a "SAN Snapshot" to copy the mdf and ldf files and then simply attach the database.
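
    Attaching the copied files afterwards is a one-liner (the database name and file paths below are just placeholders):

    CREATE DATABASE DBName_Copy
        ON (FILENAME = 'D:\SQLData\DBName.mdf'),
           (FILENAME = 'E:\SQLLogs\DBName_log.ldf')
        FOR ATTACH;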

    --Jeff Moden



  • Hi,

    In the past I had problems with BCP because when I tried to copy the data to a file, SQL sometimes introduced line breaks into the data, and when I tried to reconstruct the images I got errors.

    The second option is valid, but I need to copy the table into a specific database, so if I move the mdf and ldf I would still have to move the table between two local databases.

    Thanks

  • csalazar-607991 (3/4/2013)


    Hi,

    In the past I had problems with BCP because when I tried to copy the data to a file, SQL sometimes introduced line breaks into the data, and when I tried to reconstruct the images I got errors.

    Yes, but if you actually set it up to replicate the data over (meaning physically create a publication/subscriber), the snapshot will create multiple bcp chunks for you automatically and should keep everything in sync.

    If you can't use snapshot replication as a one-time deal, you really only have two options: 1) have the SAN guys take a snap of the LUN (as Jeff has mentioned) or 2) break it up into a batch-style job, transferring x number of rows at a time.

    I can't think of anything else that hasn't already been mentioned.

    ______________________________________________________________________________
    Never argue with an idiot; they'll drag you down to their level and beat you with experience.

  • Jeff Moden (3/4/2013)


    In that case, you might try what SQL Server itself uses for replication if you have the disk space. BCP in native format. Export the table to a file, then import it in half million (or less depending on the size of your rows) row chunks using the batch size setting. It will keep your log file from exploding.

    As an alternative, you could have the infrastructure folks do what I believe is called a "SAN Snapshot" to copy the mdf and ldf files and then simply attach the database.

    As Jeff suggested, taking a SAN snapshot and then attaching the mdf/ldf files could save time and would be a good approach.

    "More Green More Oxygen !! Plant a tree today"

    I have a few recommendations here:

    1. Choose a point in time when your server is not working hard.

    2. Split the table into smaller pieces (maybe multiple files) and use a package to move the data.

    Did you try the SQL Server Import/Export wizard when the server is not too busy?

  • Hi,

    We decided to insert the data in chunks of 1,000 CodigoCaso values at a time, roughly like this, in order to avoid a huge increase in the log file:

    -- starting values assume CodigoCaso begins at 0; adjust to the table's real key range
    DECLARE @ultimo INT = 0, @cont INT = 1000;
    DECLARE @maximo INT = (SELECT MAX(CodigoCaso) FROM DB1.dbo.[Table]);

    WHILE @ultimo <= @maximo
    BEGIN
        BEGIN TRANSACTION;

        INSERT INTO DB2.dbo.[Table]
        SELECT *
        FROM DB1.dbo.[Table]
        WHERE CodigoCaso >= @ultimo AND CodigoCaso < @cont; -- inclusive lower bound so no key values are skipped

        COMMIT TRANSACTION;

        SET @ultimo = @cont;
        SET @cont = @cont + 1000;
    END;

    I will post the result

    Thanks
