Inserting from 1 table to another with no logging

  • Hi,

I have a server which has run out of space. Basically, one of the tables has now been archived, so it is taking up 60% of the drive. What I need to do is copy the data out of the table into a new table, to allow me to drop the existing table.

However, the problem is that there is only enough room on the drive for the data in the new table, which leaves no space for the log to grow (I have already shrunk this to 10 MB and deleted all indexes except the one that is required).

My question is: can I do an insert from the existing table to the new one in a way that has minimal logging?

I don't know whether SELECT * INTO could be used, or whether it is possible to do a bulk insert from one table to another? Is there a table hint or something similar that I can apply to turn off the logging?

    Any pointers would be appreciated.

    Thanks in advance.

  • You could:

A) insert rows one by one (or x by x; you could play with the right batch size), otherwise it will be a single transaction for the whole set of data

    B) use BULK INSERT with the appropriate batch size

    C) buy a new disk 🙂
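For option A, a rough sketch might look like this (table, column names and batch size are all illustrative, untested):

```sql
-- Rough sketch of option A: copy in batches so each transaction stays small
-- and, in SIMPLE recovery, committed log space can be reused between batches.
-- Table, columns and batch size are illustrative.
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.NewTable (id, col1, col2)
    SELECT TOP (10000) s.id, s.col1, s.col2
    FROM dbo.OldTable s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.NewTable n WHERE n.id = s.id);

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to copy
END
```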

I can't think of other solutions off the top of my head, but for sure there are plenty.

    Hope this helps

    Gianluca

    -- Gianluca Sartori

  • Tom West (7/14/2009)


    Is there a table hint or something similar that I can apply to turn off the logging?

    There is no way to turn off logging completely. Best you can do is the minimal logging allowed in simple and bulk-logged recovery models. Select ... into is bulk-logged and so should work for you.
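For example (object names are illustrative):

```sql
-- Minimally logged in SIMPLE or BULK_LOGGED recovery:
-- only the extent allocations are logged, not the individual rows.
SELECT col1, col2, col3
INTO dbo.NewTable            -- must not already exist; SELECT ... INTO creates it
FROM dbo.OldArchiveTable;
```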

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Gail, in this case won't SELECT .. INTO copy the whole table in a single transaction? In that scenario BULK_LOGGED recovery won't help.

As long as it is running, the log will grow, and the entries will only become inactive once execution is over, exactly as in the SIMPLE recovery model.

    Am I wrong?

    -- Gianluca Sartori

  • Gianluca Sartori (7/14/2009)


    Gail, in this case won't SELECT .. INTO copy the whole table in a single transaction?

    Yup, it will

    In that scenario BULK_LOGGED recovery won't help.

    Why do you say that?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • If SELECT ... INTO runs in a single transaction, that transaction is logged while running, so it keeps growing the log file until it's over.

    When the query is over, the space used to log the bulk query can be reused (same as SIMPLE), but it still can't be used while the query itself is running.

Here the OP says he doesn't have enough disk space to store both the data and the log, so I guess in this case BULK_LOGGED won't help. The only thing that comes to my mind is reducing the batch size so that just a little log space is needed and can always be reused.

    -- Gianluca Sartori

  • True, however SELECT ... INTO is minimally logged when the recovery model is other than FULL. Hence the changes don't get logged, just the fact that certain extents have changed.

Take for example a 1 million row table that occupies 20 extents (160 pages). With an INSERT INTO to copy to a new table, all 1000000 rows will be fully logged. With SELECT ... INTO in full recovery, all 1000000 rows will be fully logged. With SELECT ... INTO in bulk-logged or simple recovery, all that will be logged is the fact that these 20 extents have changed. The actual inserts will not be logged.

    That is why I'm suggesting SELECT ... INTO in bulk-logged recovery, because the minimal logging will mean that the log shouldn't grow much, if at all, depending on the exact size of the table that needs to be copied. Very little space is needed to log that an extent has changed.

    http://msdn.microsoft.com/en-us/library/ms191244(SQL.90).aspx

BULK INSERT would work as well, but the recovery model needs to be simple or bulk-logged, or even BULK INSERT will be fully logged.
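Checking and switching the recovery model is straightforward (database name is illustrative):

```sql
-- Check the current recovery model (database name is illustrative)
SELECT DATABASEPROPERTYEX('MyDb', 'Recovery');   -- FULL / BULK_LOGGED / SIMPLE

-- Switch to bulk-logged for the load, then back again;
-- take a log backup afterwards to keep the log chain intact.
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;
-- ... run the SELECT ... INTO or BULK INSERT here ...
ALTER DATABASE MyDb SET RECOVERY FULL;
```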

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Tom West (7/14/2009)


What I need to do is to copy out the data from the table into a new table, to allow me to drop the existing table.

    Hang on...

    Do you just want to insert all the rows from one table into a new table? Or are you inserting just some rows?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks Gail,

    I knew that BULK_LOGGED had minimal logging, but I didn't know it was THAT minimal.

    Nice to know, will be useful in the future.

    -- Gianluca Sartori

  • GilaMonster (7/14/2009)


    Tom West (7/14/2009)


What I need to do is to copy out the data from the table into a new table, to allow me to drop the existing table.

    Hang on...

    Do you just want to insert all the rows from one table into a new table? Or are you inserting just some rows?

    Hi there,

Managed to get this sorted using SELECT [columns] INTO dbo.newtable, so thanks for your help.

I only needed a subset of the data, so it worked a treat, as it allowed me to drop the old table and free up the disk considerably.

It now has 70% free, and there's a nightly job to archive off the data we no longer need, so the issue won't arise again.

Thanks for your help

Just a side note: SELECT ... INTO only works for a new table, so it fits the scenario asked here.

For ongoing archiving, you can write a controlled transaction if you are not allowed to change the recovery model.

Write a small script that inserts/deletes a batch of rows (for example 5000) per transaction, using a WHILE loop and a row-count check.

Commit each transaction, then

WAITFOR DELAY (for example 15 minutes) before running again, until after the next transaction log backup.

Make sure the batch size doesn't trigger lock escalation, or use snapshot isolation to avoid blocking.

It does not reduce your total transaction log backup space requirement, but it does give you time to move the backups off, and it prevents the transaction log exploding before its backup.
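A sketch of that kind of loop, here combining the insert and delete with the OUTPUT clause (SQL 2005+); all names, the predicate, and the delay are illustrative and untested:

```sql
-- Illustrative archive loop: move rows in small transactions,
-- pausing between batches so log backups can reclaim log space.
DECLARE @batch int;
SET @batch = 5000;

WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION;

    -- Delete a batch from the live table and capture the deleted
    -- rows into the archive table in the same statement.
    DELETE TOP (@batch) FROM dbo.LiveTable
    OUTPUT deleted.* INTO dbo.ArchiveTable
    WHERE ArchivedDate < '20090101';   -- illustrative predicate

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION;
        BREAK;                         -- nothing left to archive
    END

    COMMIT TRANSACTION;

    WAITFOR DELAY '00:15:00';          -- let the next log backup run
END
```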

    This is one thing SQL Server is less than Oracle.

  • jswong05 (5/22/2012)


    This is one thing SQL Server is less than Oracle.

    I guess you mean "less messy". 🙂

    If you use NOLOG and take hot backups, you're just wasting tape.

With SQL Server and BULK_LOGGED you don't have point-in-time recovery but, at least, you can restore the whole log backup.

    -- Gianluca Sartori
