Transaction log with a FIXED size: does it hang?

  • If I restrict the transaction log to a fixed size (no auto-growth) and then do a large delete, what happens when the log fills up? (Simple recovery model.) Will it move the transactions into the database so that the delete can continue, or will it hang?

  • The easiest way for you to find out is to test it yourself on a DEV instance.

  • I agree; test it with various sizes of deletes, starting with small ones that you know won't fill the transaction log. When you move to bigger deletes that can fill the transaction log, watch the behavior. If things work the way you'd like, SQL Server should run out of space but still have sufficient log space to successfully complete a rollback. That may not always happen, and you need to be prepared for that to occur.

    The transaction log will not "roll over" into the database file(s).

  • In the Simple recovery model, the transactions are supposed to disappear as soon as the delete is done, right? So there is no rollback unless power fails before the delete is finished. Trying it on a DEV instance is a good idea, but the database is large! I don't know if size matters. So I'm looking for knowledge of how it works, if possible. If it isn't known, then of course I'll experiment.

  • babahu - Friday, August 3, 2018 8:37 PM

    As with all things, "it depends" is the actual answer. This is why you need to test the process. You need to feel comfortable with the process you put in place, as you are the one who will need to support it if something goes wrong.

    Yes, if the delete completes successfully, the VLFs in the transaction log will be marked for reuse upon completion of the checkpoint. They don't actually disappear; the VLFs are simply reused once marked appropriately. The problem you may encounter, however, is if the delete is large enough that its transaction fills the transaction log. This is what you need to test. It is possible that SQL Server will detect the problem, terminate the delete, and roll back the transaction. What you need to test for is what happens if the fixed-size transaction log fills up and SQL Server is unable to abort the transaction and successfully complete the rollback because there isn't sufficient log space left. At that point the database becomes inaccessible until you provide SQL Server with additional log space. This is the same thing that can happen if a transaction log is allowed to grow until it consumes the entire drive where it resides.
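
    If you do test it, one way to watch the log while the delete runs is to poll the log-space DMV from a second session. A minimal sketch, assuming SQL Server 2012 or later and a test database named TestDb (both are placeholders, not from this thread):

    USE TestDb;
    GO
    -- Poll current log usage for this database while the big DELETE
    -- runs in another session; repeat as often as you like.
    SELECT  total_log_size_in_bytes / 1048576.0  AS log_size_mb,
            used_log_space_in_bytes / 1048576.0  AS used_log_mb,
            used_log_space_in_percent            AS used_pct
    FROM sys.dm_db_log_space_usage;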

  • babahu - Friday, August 3, 2018 3:28 PM

    There is no need to test this: SQL Server will throw Error 9002, and if the database is online it will effectively be read-only until log space is available, which means space will either have to be added manually or freed up. Since your database is in the simple recovery model, space will have to be added.
    😎

    One way of mitigating this is to have a small auto-growth log file on a different drive that can take the overflow if the main log file fills up. It's not ideal, but it's better than making the database unusable.
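
    For what it's worth, a minimal sketch of that mitigation; the database name, logical file name, path, and sizes below are all placeholders, not from this thread:

    -- Add a small overflow log file on a different drive. It stays small
    -- until the main (fixed-size) log file fills, then grows as needed.
    ALTER DATABASE MyDb
    ADD LOG FILE
    (
        NAME = N'MyDb_log_overflow',
        FILENAME = N'E:\SQLLogs\MyDb_log_overflow.ldf',
        SIZE = 64MB,
        FILEGROWTH = 64MB,
        MAXSIZE = 10GB
    );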

  • Eirikur Eiriksson - Saturday, August 4, 2018 12:02 AM

    Since the OP is unsure, I still believe that testing is appropriate so that he (or she) gains confidence in the process. Doing it and seeing what happens, or doesn't happen, will help in the future.

  • Lynn Pettis - Saturday, August 4, 2018 12:23 AM

    Testing is fine, but I would not consider SQL Server an experimental science 😉
    😎

    A little bit of research and learning can avoid embarrassing moments; in this case, a good place to start is the article Troubleshoot a Full Transaction Log (SQL Server Error 9002).
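
    That article also covers checking why log space is not being reused; a quick check along those lines (the database name here is a placeholder):

    -- Shows what, if anything, is preventing log truncation. Under the
    -- simple recovery model you would normally expect NOTHING or
    -- CHECKPOINT; ACTIVE_TRANSACTION means a long-running transaction
    -- (such as one huge DELETE) is pinning the log.
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'MyDb';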

  • Eirikur Eiriksson - Saturday, August 4, 2018 1:13 AM

    It's not an experiment if someone is not sure how things will work. Best to see for oneself rather than take the word of people on the internet without testing.

  • Thanks for all the help.

    I have observed that when I delete several months of data, about 500 MB worth, the transaction log grows from 200 MB to over 2 GB! The transaction log is very wordy! Once the delete is finished, the log is still fat but only 3% used, i.e. mostly "empty." I then shrank it back to 200 MB.

    Since my process is to back up the data I'm deleting before doing the delete, I am happy to have no transaction log except as it might be needed if power were to fail; I don't want corruption. My only choice is to delete the several months of data in small amounts. After each small delete, the log would be reused, but splitting up the deletes would be annoying and would invite typing errors.

    (I'm storing real-time physical data, so it always grows and grows and I have to regularly delete.)
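
    For the record, the shrink step described above looks roughly like this; the database and logical log file names are guesses, so check sys.database_files for the real ones:

    USE MyDb;
    GO
    -- Find the logical name and current size of the log file.
    SELECT name, type_desc, size * 8 / 1024 AS size_mb
    FROM sys.database_files;

    -- Shrink the log back to roughly 200 MB (target size is in MB).
    DBCC SHRINKFILE (N'MyDb_log', 200);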

  • babahu - Saturday, August 4, 2018 2:46 AM

    Assuming your delete criteria would allow for it (i.e. if it's some fixed date and not based on duplicate checks), you can chunk the delete easily enough.

    If you use a TOP (@N) style statement in your delete, you can drop it into a WHILE loop. You just have to paste the same delete statement twice :).

    Clearly, keep the BEGIN TRAN commented out when you run this for real, or it will defeat the purpose. Something like:


    DECLARE @maxrows int;
    SET @maxrows = 250000; -- pick your ideal batch size

    --BEGIN TRAN  -- testing only; leave commented out for the real run
    DELETE TOP (@maxrows)
    FROM OrderDetail
    WHERE 1 = 1;   -- replace with your real delete criteria

    WHILE @@ROWCOUNT > 0
    BEGIN
        DELETE TOP (@maxrows)
        FROM OrderDetail
        WHERE 1 = 1;   -- same criteria as above
    END;
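
    A variation worth testing: under the simple recovery model each batch commits on its own, so issuing a CHECKPOINT between batches encourages SQL Server to mark the freed VLFs as reusable before the next batch starts. A sketch of that variation, using the same placeholder table and criteria as above:

    DECLARE @maxrows int = 250000;  -- batch size placeholder

    WHILE 1 = 1
    BEGIN
        DELETE TOP (@maxrows)
        FROM OrderDetail      -- placeholder table
        WHERE 1 = 1;          -- replace with your real criteria

        IF @@ROWCOUNT = 0 BREAK;

        CHECKPOINT;  -- under SIMPLE recovery, lets the log space be reused
    END;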

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • Will the transaction log process each loop iteration? I'm not understanding how scripts work; I've been told to put GO statements in to make sure the previous commands are processed. If the transaction log runs on a separate thread, might its cleanup, i.e. committing the change to the database, be delayed? Do you know how it works?

  • If you're deleting more than about a third of the data, it would be much better to copy what you want to keep to another table created as a twin of the original, then drop the original and rename the copy to the original's name. As you're finding out, deletes are wicked expensive even in the SIMPLE recovery model.
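
    A rough sketch of that pattern; the table, column, and cutoff below are placeholders, and you would also need to re-create any indexes, constraints, and permissions on the new table:

    DECLARE @CutoffDate date = '2018-01-01';  -- placeholder cutoff

    -- Copy the rows you want to KEEP into a twin table. SELECT...INTO
    -- is minimally logged under the SIMPLE recovery model.
    SELECT *
    INTO dbo.OrderDetail_Keep
    FROM dbo.OrderDetail
    WHERE EventDate >= @CutoffDate;   -- placeholder criteria

    -- Swap the tables.
    BEGIN TRANSACTION;
        DROP TABLE dbo.OrderDetail;
        EXEC sp_rename 'dbo.OrderDetail_Keep', 'OrderDetail';
    COMMIT;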

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

