Enormous Transaction Log!

  • Good day and thanks for looking at my post.

    I have a Stored Procedure which runs every hour during business hours. Its purpose is to delete all rows (about 28,000) in a table and then repopulate it with an INSERT INTO command from an external database. Here's an example:

    DELETE FROM Parcel_Data;

    INSERT INTO Parcel_Data
    SELECT [Column1]
         , [Column2]
         , [Column3]
         , [Column4]
         , [Column5]
    FROM [ServerB].[Database].[dbo].[Parcel_Info];

    The procedure works perfectly, and has worked perfectly every hour for at least three years. My problem is that the transaction log is enormous. I have tried backing up the database to reduce the log, shrinking the log file, and so on. Nothing seems to work. The log file is now about 100 GB, while the database is only 16 MB.

    I am in the process of migrating all of this to a new SQL server, and I don't want to inherit the same problems on the new server. I submit that I am but a SQL neophyte, and ask for the wisdom of the forum.

    How can I write this procedure differently to prevent so many transaction entries? Or, how can I consistently and successfully shrink my enormous transaction log?

    Thanks for your help!

  • The most common cause of this is that the database is using the full recovery model but log backups are not being taken.

    Check on that first.
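    A quick way to check is to query sys.databases (the database name below is a placeholder). A log_reuse_wait_desc of LOG_BACKUP would confirm that missing log backups are what's holding the log:

    -- Recovery model and the reason the log can't currently be reused
    SELECT name,
           recovery_model_desc,
           log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'YourDatabase';  -- substitute your actual database name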

    Another common cause is a long-running transaction that forced the log to grow to that size at one point. Given that the data is only 16 MB, I think this one is less likely.

    If it turns out to be the lack of log backups, then you'll need to choose between taking regular log backups, or switching the database to the simple recovery model and relying on full and differential backups.

    To make the appropriate choice of recovery model and backup strategy, you'll need to know how much data loss is acceptable should something unfortunate happen to the data.
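    As a sketch, the two options look like this (YourDatabase and the logical log file name are placeholders; check sys.database_files for the real log file name). Note the one-time shrink only makes sense after one of the two fixes is in place, otherwise the log will just grow back:

    -- Option A: take regular log backups so the log space can be reused
    BACKUP LOG YourDatabase TO DISK = N'D:\Backups\YourDatabase.trn';

    -- Option B: switch to simple recovery and rely on full/differential backups
    ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;

    -- After either option, a one-time shrink reclaims the 100 GB already allocated
    DBCC SHRINKFILE (YourDatabase_log, 1024);  -- target size in MB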

    Cheers!

    EDIT: Fixed a typo ("a long-running transactions") :blush:

  • tcalloway (5/17/2016)


    I have a Stored Procedure which runs every hour during business hours. Its purpose is to delete all rows, about 28,000, in a table and then repopulate the rows with an INSERT INTO command from an external database.

    I inherited something similar a few years ago. My solution was to move the repopulating tables to another DB in SIMPLE recovery and then create SYNONYMs in the main DB.
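    Roughly like this, using hypothetical database names StagingDB and MainDB for illustration:

    -- A separate staging database in simple recovery keeps the churn out of the main log
    ALTER DATABASE StagingDB SET RECOVERY SIMPLE;

    -- After moving the table there, point the main database at it via a synonym,
    -- so existing queries against dbo.Parcel_Data keep working unchanged
    USE MainDB;
    CREATE SYNONYM dbo.Parcel_Data FOR StagingDB.dbo.Parcel_Data;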

  • Ken McKelvey (5/18/2016)


    tcalloway (5/17/2016)


    I have a Stored Procedure which runs every hour during business hours. Its purpose is to delete all rows, about 28,000, in a table and then repopulate the rows with an INSERT INTO command from an external database.

    I inherited something similar a few years ago. My solution was to move the repopulating tables to another DB in SIMPLE recovery and then create SYNONYMs in the main DB.

    Why do all that work when you can just enable transaction log backups?

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Brandie Tarvin (5/19/2016)


    Ken McKelvey (5/18/2016)


    tcalloway (5/17/2016)


    I have a Stored Procedure which runs every hour during business hours. Its purpose is to delete all rows, about 28,000, in a table and then repopulate the rows with an INSERT INTO command from an external database.

    I inherited something similar a few years ago. My solution was to move the repopulating tables to another DB in SIMPLE recovery and then create SYNONYMs in the main DB.

    Why do all that work when you can just enable transaction log backups?

    I do log ship the main DB. The main DB is about 10 times smaller than the one with the repopulating tables. Given that the data in the repopulating tables has no long-term value, why bother moving it around, allocating more space for logs, etc.?

  • I just noticed this line...

    tcalloway (5/17/2016)


    How can I write this procedure differently to prevent so many transaction entries? Or, how can I consistently and successfully shrink my enormous transaction log?

    In addition to setting up transaction log backups (which will help manage the size by letting SQL Server truncate and reuse the log automatically), look up BULK INSERT (for the code) and the BULK_LOGGED recovery model. That said, I don't really recommend changing the recovery model, for a number of reasons.

    BULK INSERT can be minimally logged (https://technet.microsoft.com/en-us/library/ms190422(v=sql.105).aspx), which means SQL Server logs the affected data pages rather than the individual rows, assuming the prerequisites are met (see the link above).
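    For the OP's procedure, a minimally logged variant might look like the sketch below. This assumes the prerequisites from the link are met: BULK_LOGGED (or SIMPLE) recovery, an empty target table, and a table lock. TRUNCATE also requires that no foreign keys reference the table:

    -- TRUNCATE logs only page deallocations, unlike DELETE, which logs every row
    TRUNCATE TABLE Parcel_Data;

    -- With the TABLOCK hint and an empty heap, INSERT...SELECT can qualify
    -- for minimal logging under the bulk-logged or simple recovery model
    INSERT INTO Parcel_Data WITH (TABLOCK)
    SELECT [Column1], [Column2], [Column3], [Column4], [Column5]
    FROM [ServerB].[Database].[dbo].[Parcel_Info];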

    Brandie Tarvin, MCITP Database Administrator

  • Ken McKelvey (5/19/2016)


    Brandie Tarvin (5/19/2016)


    Ken McKelvey (5/18/2016)


    tcalloway (5/17/2016)


    I have a Stored Procedure which runs every hour during business hours. Its purpose is to delete all rows, about 28,000, in a table and then repopulate the rows with an INSERT INTO command from an external database.

    I inherited something similar a few years ago. My solution was to move the repopulating tables to another DB in SIMPLE recovery and then create SYNONYMs in the main DB.

    Why do all that work when you can just enable transaction log backups?

    I do log ship the main DB. The main DB is about 10 times smaller than the one with the repopulating tables. Given that the data in the repopulating tables has no long-term value, why bother moving it around, allocating more space for logs, etc.?

    I still don't see why anyone would need to allocate more space for logs if they were properly backing them up. Unless the data increases exponentially, the transaction logs should show relatively even growth, if any at all.

    Since you're log shipping, you are obviously backing them up. But your initial answer didn't seem relevant to the topic without the additional information and still seems like a lot of work for the OP when there are other options available.

    Brandie Tarvin, MCITP Database Administrator

  • Thank you for the ideas and comments. I will try some of the suggestions and let you know!

  • Brandie Tarvin (5/19/2016)


    Ken McKelvey (5/18/2016)


    tcalloway (5/17/2016)


    I have a Stored Procedure which runs every hour during business hours. Its purpose is to delete all rows, about 28,000, in a table and then repopulate the rows with an INSERT INTO command from an external database.

    I inherited something similar a few years ago. My solution was to move the repopulating tables to another DB in SIMPLE recovery and then create SYNONYMs in the main DB.

    Why do all that work when you can just enable transaction log backups?

    No pun intended, but because it's "simple". 😉 It allows a lot of minimally logged operations (which is also a great performance benefit) without the risk of destroying point-in-time recovery for a given transaction log backup, which is what happens if you perform a minimally logged action (such as changing to bulk insert) in your money-maker database. It's also a totally "expendable" table that doesn't actually need to be backed up, which saves on disk space, tape space, backup times and, most importantly, restore times.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
