How to insert a large number of records without filling up the transaction log

  • We have a developer running an ad hoc query that pulls records from a staging database into the production database on the same server; the query is an INSERT INTO ... SELECT FROM ...

    It is quite a big batch of data.

    The query failed with a "transaction log full" error.

    I checked the transaction log size settings: the log file is full, autogrowth is on, and the maximum size is limited to 4 GB.

    I manually increased the transaction log file to a bigger size. Then it succeeded.

    I know I can also take a transaction log backup to free up space in the transaction log.

    But what is the best way for the developer to perform this insert without generating so much transaction log? We don't actually need a transaction log backup of this load.

    Thanks
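
    For reference, checking the current log usage and growing the log file manually can be done with something along these lines (the database name, logical file name, sizes, and backup path below are placeholders, not our actual setup):

        -- Check how full each database's transaction log currently is
        DBCC SQLPERF(LOGSPACE);

        -- Manually grow the log file (MyProdDB and MyProdDB_log are placeholder names)
        ALTER DATABASE MyProdDB
        MODIFY FILE (NAME = MyProdDB_log, SIZE = 16GB, MAXSIZE = 32GB);

        -- In the full recovery model, a log backup frees space inside the file for reuse
        BACKUP LOG MyProdDB TO DISK = N'D:\Backups\MyProdDB_log.trn';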

  • If your recovery model is full, then you have to take transaction log backups or the log will grow indefinitely until it fills up the drive (or hits the 4 GB limit you imposed). If the load is done in one transaction and requires 4 GB of log to commit, it is difficult to get around needing a log file that size to accommodate it. You could break the insert up into several smaller chunks so the transaction size is smaller at commit time; if you then run the transaction log backups more frequently, the log file will be reused without growing. If you can change to the simple recovery model (no transaction log backups and of course no point-in-time restore), then the smaller batches will reuse the log file and it will not grow. A batched version of the insert is sketched below.
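
    A minimal sketch of that chunked approach (the table names, key column, and batch size are made-up placeholders, not the actual schema):

        -- Copy rows in batches so each committed transaction stays small.
        -- Prod.dbo.Orders, Staging.dbo.Orders and the Id key are placeholder names.
        DECLARE @BatchSize int = 100000;
        DECLARE @Rows int = 1;

        WHILE @Rows > 0
        BEGIN
            INSERT INTO Prod.dbo.Orders (Id, OrderDate, Amount)
            SELECT TOP (@BatchSize) s.Id, s.OrderDate, s.Amount
            FROM Staging.dbo.Orders AS s
            WHERE NOT EXISTS (SELECT 1 FROM Prod.dbo.Orders AS p WHERE p.Id = s.Id)
            ORDER BY s.Id;

            SET @Rows = @@ROWCOUNT;
            -- In full recovery, frequent log backups between batches let the log space be reused.
        END

    In the simple recovery model the log space is reused automatically at each checkpoint, so the same loop works without the intermediate log backups.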

  • Check out this article

    Wayne
    Microsoft Certified Master: SQL Server 2008
    Author - SQL Server T-SQL Recipes


    If you can't explain to another person how the code that you're copying from the internet works, then DON'T USE IT on a production system! After all, you will be the one supporting it!
    Links:
    For better assistance in answering your questions
    Performance Problems
    Common date/time routines
    Understanding and Using APPLY Part 1 & Part 2

  • You could use SSIS to move the data and set a batch size to something reasonable, like 100,000 rows.

  • Does this insert that the developer is doing have to be done manually? You could set up an automated job to call an SSIS package as indicated above, or you could go old school and use a bcp out and bcp in, setting your batch size. Also, as asked above, what are your recovery requirements? If you are running in full recovery, do you need to be? Before the load you could change your recovery model to bulk-logged, perform your load, and then change back to either simple or full based on your requirements.

    MCT
    MCITP Database Admin 2008
    MCITP Database Dev 2008
    www.jnrit.com.au/Blog.aspx

  • Change the database recovery model to bulk logged when performing the operation.

    http://msdn.microsoft.com/en-us/library/aa173529%28v=sql.80%29.aspx
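
    Roughly, that switch could look like the following (the database name and backup path are placeholders; note that a log backup containing bulk-logged operations cannot be used for point-in-time restore to a moment inside that backup):

        -- Placeholder database name; bulk-logged lets qualifying bulk operations be minimally logged
        ALTER DATABASE MyProdDB SET RECOVERY BULK_LOGGED;

        -- ... run the large INSERT ... SELECT load here ...

        -- Switch back and take a log backup to re-establish normal point-in-time coverage
        ALTER DATABASE MyProdDB SET RECOVERY FULL;
        BACKUP LOG MyProdDB TO DISK = N'D:\Backups\MyProdDB_after_load.trn';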

    Jayanth Kurup

  • How often are you taking t-log backups on your production DB?

    Carlton.

  • What is a developer doing anywhere near your production database? If you're going to allow developers, or anyone else, to perform this sort of operation, then you need to have the transaction log space to support it. The only alternative is to insist that such inserts are done in batches, as Michael suggested. If you automate it then you should retain enough control over the process to avoid nasty surprises such as that which you have just experienced.

    John

  • Thanks all, will look into the articles.

    But just want to clarify this: a database in simple recovery mode will still use the transaction log, so we still need to divide the batch into smaller chunks and run it multiple times, correct?

    I suppose if it is run as one big batch it will still fill the transaction log even in simple recovery mode, is that right?

  • If the DB is in simple mode, you can't back up the transaction log.

    You need to break the load into smaller batches.

    Carlton.

  • sqlfriends (6/17/2011)


    Thanks all, will look into the articles.

    But just want to clarify this: a database in simple recovery mode will still use the transaction log, so we still need to divide the batch into smaller chunks and run it multiple times, correct?

    I suppose if it is run as one big batch it will still fill the transaction log even in simple recovery mode, is that right?

    Correct - regardless of the recovery model, all transactions are logged in the transaction log. In certain situations you can get minimally logged operations in the simple or bulk_logged recovery model (for example, a bulk load into a heap with a TABLOCK hint), but only when those conditions are met.

    Your transaction log needs to be as large as the largest transaction that is going to run against that database, regardless of the recovery model. You can mitigate this by 'batching' the inserts/updates/deletes to keep the transaction size from getting too large.
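
    For example, on SQL Server 2008 and later an INSERT ... SELECT into a heap with a TABLOCK hint can be minimally logged under the simple or bulk_logged recovery model; a rough sketch with placeholder names:

        -- Minimal logging generally requires simple or bulk_logged recovery and a TABLOCK
        -- hint on the target (a heap, or an empty table in some cases); names are placeholders.
        INSERT INTO Prod.dbo.Orders WITH (TABLOCK) (Id, OrderDate, Amount)
        SELECT s.Id, s.OrderDate, s.Amount
        FROM Staging.dbo.Orders AS s;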

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs
