Copy Table Data

  • I have a large table that I need to copy (insert into [destination] select ... from [source]).

    It breaks with the following error:
         Msg 9002, Level 17
         The transaction log for database is full due to 'ACTIVE_TRANSACTION'

    The only solution that I found so far is DBCC SHRINKFILE.  Any other suggestions?
    Thanks!

  • clicky - Wednesday, June 21, 2017 7:54 AM

    I have a large table that I need to copy (insert into [destination] select ... from [source]).

    It breaks with the following error:
         Msg 9002, Level 17
         The transaction log for database is full due to 'ACTIVE_TRANSACTION'

    The only solution that I found so far is DBCC SHRINKFILE.  Any other suggestions?
    Thanks!

    What is the recovery model for the database?
    😎

  • The transaction log for database is full due to 'ACTIVE_TRANSACTION'

    The only solution that I found so far is DBCC SHRINKFILE.


    That's not a solution for a full transaction log.  Please read through this.
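    For anyone hitting this, a quick way to see how full the log actually is (a sketch; run it in the affected database, and the DMV needs SQL Server 2012 or later):

    ```sql
    -- Log size and percentage used for every database on the instance
    DBCC SQLPERF(LOGSPACE);

    -- Or, for the current database only
    SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
           used_log_space_in_bytes / 1048576.0 AS used_mb,
           used_log_space_in_percent           AS used_pct
    FROM   sys.dm_db_log_space_usage;
    ```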

    John

  • Eirikur Eiriksson - Wednesday, June 21, 2017 8:26 AM

    clicky - Wednesday, June 21, 2017 7:54 AM

    I have a large table that I need to copy (insert into [destination] select ... from [source]).

    It breaks with the following error:
         Msg 9002, Level 17
         The transaction log for database is full due to 'ACTIVE_TRANSACTION'

    The only solution that I found so far is DBCC SHRINKFILE.  Any other suggestions?
    Thanks!

    What is the recovery model for the database?
    😎

    SIMPLE

  • Hi,
    Have you tried DBCC OPENTRAN to identify which transaction is active?
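    A minimal sketch of what that looks like (DBCC OPENTRAN reports only the oldest active transaction; the DMV query lists them all):

    ```sql
    -- Oldest active transaction in the current database
    DBCC OPENTRAN;

    -- All active transactions and the sessions that own them
    SELECT stx.session_id,
           tx.name,
           tx.transaction_begin_time
    FROM   sys.dm_tran_active_transactions  AS tx
    JOIN   sys.dm_tran_session_transactions AS stx
           ON stx.transaction_id = tx.transaction_id;
    ```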
    Best Regards

    My rule of thumb is that for any table over, say, 100K rows or so, data transfers like this should be done using the Import/Export Wizard or via SSIS, so that the work is chunked out in batches, avoiding massive log growth like that.
    INSERT INTO... SELECT * FROM MillionBillionRowTable will kill you in these situations.
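    If you want to stay in pure T-SQL, here is a minimal sketch of one common batching pattern, assuming the source table has an ever-increasing integer key (all object and column names here are placeholders):

    ```sql
    -- Copy in ~100K-row slices so each batch commits on its own and,
    -- in SIMPLE recovery, log space is reused between batches
    DECLARE @BatchSize int = 100000,
            @LastID    int = 0,
            @MaxID     int;

    SELECT @MaxID = MAX(ID) FROM dbo.SourceTable;

    WHILE @LastID < @MaxID
    BEGIN
        INSERT INTO dbo.DestinationTable (ID, Col1, Col2)
        SELECT ID, Col1, Col2
        FROM   dbo.SourceTable
        WHERE  ID >  @LastID
          AND  ID <= @LastID + @BatchSize;

        SET @LastID = @LastID + @BatchSize;
    END;
    ```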

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and an INSERT INTO... statement for that table to give the volunteers here representative data. With your description of the problem, we can provide a tested, verifiable solution to your question! Asking the question the right way gets you a tested answer the fastest way possible!

  • If the database is already in the simple recovery model, you aren't worried about point-in-time restores, so that's good.

    Will you post the DDL for the source table?  Please be sure to include the clustered index definition.  In this case, I don't need any data.

    Granted, you're still going to have to address John's point about managing your transaction log, but I may be able to help you with the table.  I've done this type of thing before and found the results pretty impressive.

  • clicky - Wednesday, June 21, 2017 8:44 AM

    Eirikur Eiriksson - Wednesday, June 21, 2017 8:26 AM

    clicky - Wednesday, June 21, 2017 7:54 AM

    I have a large table that I need to copy (insert into [destination] select ... from [source]).

    It breaks with the following error:
         Msg 9002, Level 17
         The transaction log for database is full due to 'ACTIVE_TRANSACTION'

    The only solution that I found so far is DBCC SHRINKFILE.  Any other suggestions?
    Thanks!

    What is the recovery model for the database?
    😎

    SIMPLE

    And the file growth settings for the log file?
    😎

  • Eirikur Eiriksson - Thursday, June 22, 2017 1:11 AM

    clicky - Wednesday, June 21, 2017 8:44 AM

    Eirikur Eiriksson - Wednesday, June 21, 2017 8:26 AM

    clicky - Wednesday, June 21, 2017 7:54 AM

    I have a large table that I need to copy (insert into [destination] select ... from [source]).

    It breaks with the following error:
         Msg 9002, Level 17
         The transaction log for database is full due to 'ACTIVE_TRANSACTION'

    The only solution that I found so far is DBCC SHRINKFILE.  Any other suggestions?
    Thanks!

    What is the recovery model for the database?
    😎

    SIMPLE

    And the file growth settings for the log file?
    😎

    10%

  • Will you post the DDL for the source table and clustered index definition?

    No pun intended, but this is SIMPLE.  Create the new target table with just the clustered index.  The clustered index should be identical on both tables for the best performance without having to do any tricks.  Then use a minimally logged INSERT/SELECT and your log file will be safe, because it will only log page allocations on the new table.
    INSERT INTO dbo.TargetTable WITH (TABLOCK) --TABLOCK is necessary for minimal logging
            (/* your column list here */)
     SELECT /* the same column list here */
       FROM dbo.SourceTable
     OPTION (RECOMPILE) --Not always necessary, but a good habit to get into for this type of thing
    ;

    If you have any non-clustered indexes, build them after the target table is loaded.

    This method also keeps you from blowing out the MDF file during the transition from a heap to a clustered table, which would happen if you built the clustered index separately, and it will take a whole lot less time.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden - Wednesday, June 28, 2017 6:06 PM

    No pun intended, but this is SIMPLE.  Create the new target table with just the clustered index.  The clustered index should be identical on both tables for the best performance without having to do any tricks.  Then use a minimally logged INSERT/SELECT and your log file will be safe, because it will only log page allocations on the new table.
    INSERT INTO dbo.TargetTable WITH (TABLOCK) --TABLOCK is necessary for minimal logging
            (/* your column list here */)
     SELECT /* the same column list here */
       FROM dbo.SourceTable
     OPTION (RECOMPILE) --Not always necessary, but a good habit to get into for this type of thing
    ;

    If you have any non-clustered indexes, build them after the target table is loaded.

    This method also keeps you from blowing out the MDF file during the transition from a heap to a clustered table, which would happen if you built the clustered index separately, and it will take a whole lot less time.

    That's why I was asking for the DDL and CI for the table, but your post works.  I always like to put the ORDER BY in there to ensure it's sorted by the CI.

    Clicky, this works very well.  I've been able to take a data load that started out at 7 hours in the wrong tool; someone rewrote it in SQL and got it down to 30 minutes.  I then used this technique and got it down to 55 seconds with no log growth.  The creation of NCIs will be logged, but not the load itself.

  • Ed Wagner - Thursday, June 29, 2017 7:26 AM

    Jeff Moden - Wednesday, June 28, 2017 6:06 PM

    No pun intended, but this is SIMPLE.  Create the new target table with just the clustered index.  The clustered index should be identical on both tables for the best performance without having to do any tricks.  Then use a minimally logged INSERT/SELECT and your log file will be safe, because it will only log page allocations on the new table.
    INSERT INTO dbo.TargetTable WITH (TABLOCK) --TABLOCK is necessary for minimal logging
            (/* your column list here */)
     SELECT /* the same column list here */
       FROM dbo.SourceTable
     OPTION (RECOMPILE) --Not always necessary, but a good habit to get into for this type of thing
    ;

    If you have any non-clustered indexes, build them after the target table is loaded.

    This method also keeps you from blowing out the MDF file during the transition from a heap to a clustered table, which would happen if you built the clustered index separately, and it will take a whole lot less time.

    That's why I was asking for the DDL and CI for the table, but your post works.  I always like to put the ORDER BY in there to ensure it's sorted by the CI.

    Clicky, this works very well.  I've been able to take a data load that started out at 7 hours in the wrong tool; someone rewrote it in SQL and got it down to 30 minutes.  I then used this technique and got it down to 55 seconds with no log growth.  The creation of NCIs will be logged, but not the load itself.

    The creation of the NCIs will be minimally logged because the OP's database is in the SIMPLE recovery model.  The same goes for the CI and the initial data copy.  With the TABLOCK hint and identical CIs (although the sort you mention is almost free insurance), it'll all be minimally logged and nasty fast because it won't have to write the data twice.

    Also, since the OP posted this 7 days ago, I'm thinking they're no longer listening on this problem because, by hook or by crook, they likely did something to accomplish the copy a couple of days ago.  It might help someone else down the road, though.
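    For future readers, the whole recipe discussed in this thread, sketched end to end (every object and column name below is a placeholder, not from the OP's schema):

    ```sql
    -- 1) Create the target with ONLY the clustered index, matching the source
    CREATE TABLE dbo.TargetTable (ID int NOT NULL, Col1 varchar(50) NULL);
    CREATE UNIQUE CLUSTERED INDEX CIX_TargetTable ON dbo.TargetTable (ID);

    -- 2) Minimally logged load: SIMPLE (or BULK_LOGGED) recovery + TABLOCK,
    --    with the rows ordered by the clustered index key
    INSERT INTO dbo.TargetTable WITH (TABLOCK) (ID, Col1)
    SELECT ID, Col1
    FROM   dbo.SourceTable
    ORDER  BY ID
    OPTION (RECOMPILE);

    -- 3) Build nonclustered indexes only after the load completes
    CREATE NONCLUSTERED INDEX IX_TargetTable_Col1 ON dbo.TargetTable (Col1);
    ```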

    --Jeff Moden


