Simple recovery model log truncation

  • Hi

    1) At what time interval does the simple recovery model truncate the transaction log?

    2) I have a table with 30 million records and I need to delete the 25 million oldest with as little logging as possible. How can this be done while users are continuously accessing the table?

    Thanks

    Parthi

  • 1) Whenever a checkpoint occurs. Only log records up to the beginning of the oldest open transaction can be truncated.

    2) Do it in batches and run CHECKPOINT between those batches.
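
    Something along these lines, as a sketch only: the table name, the date column, and the cut-off value are placeholders you will have to swap for your own, and the batch size is something to tune.

        -- Delete the old rows in small batches so each transaction stays short.
        DECLARE @BatchSize int = 5000;

        WHILE 1 = 1
        BEGIN
            DELETE TOP (@BatchSize)
            FROM dbo.BigTable               -- placeholder table name
            WHERE OrderDate < '2009-01-01'; -- placeholder "old rows" filter

            IF @@ROWCOUNT = 0
                BREAK;                      -- nothing left to delete

            CHECKPOINT;                     -- in simple recovery, lets the log space be reused
        END;

    Smaller batches keep blocking of the other users to a minimum; tune the batch size against lock escalation and how quickly the rows need to go.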

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I have a simple solution for you.

    Copy the 5 million records you want to keep to another dummy or staging table.

    Truncate the original table.

    Then copy the 5 million rows from the dummy table back into the original table.

    If you do not have partitions created on the table, this is the fastest way.
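
    Roughly like this, as a sketch only: the table and column names are made up, it needs a window where nobody else is writing to the table, and SELECT INTO will not carry over identity properties, constraints, or indexes, so those have to be handled separately.

        -- Set the 5 million rows you want to keep to one side.
        SELECT *
        INTO   dbo.BigTable_Keep
        FROM   dbo.BigTable
        WHERE  OrderDate >= '2009-01-01';   -- placeholder "rows to keep" filter

        TRUNCATE TABLE dbo.BigTable;        -- deallocation only, minimally logged

        -- Copy them back; TABLOCK helps the insert qualify for minimal logging.
        INSERT INTO dbo.BigTable WITH (TABLOCK)
        SELECT * FROM dbo.BigTable_Keep;

        DROP TABLE dbo.BigTable_Keep;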

    😉

  • parthi-1705 (3/17/2010)


    I have a table with 30 million records and I need to delete the 25 million oldest with as little logging as possible. How can this be done while users are continuously accessing the table?

    Understand that minimum logging is different than "no" logging. You might want to change the Recovery mode to Bulk-Logged instead of Simple (unless the db is already set to Simple) before doing your delete. This way, if you need to recover, you'll still have a transaction log to recover from.
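
    If you do go that route, the switch itself is simple; this is only a sketch, with YourDb and the backup path as placeholders for your own names:

        ALTER DATABASE YourDb SET RECOVERY BULK_LOGGED;

        -- ... run the large delete / bulk work here ...

        ALTER DATABASE YourDb SET RECOVERY FULL;

        -- Take a log backup straight away so the bulk-logged window is covered.
        BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_log.trn';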

    In Simple mode, checkpoints happen quite regularly and automatically. Usually, you'll hear people say "the transaction log truncates all transactions as soon as they are committed." It's not quite a true statement, but it does mean that transactions are unrecoverable almost instantaneously. The logging that occurs is only for the SQL Server engine, not for the DBA.

    Per BOL:

    If the database is using the simple recovery model, an automatic checkpoint is generated whenever the number of log records reaches the lesser of these two values:

    The log becomes 70 percent full.

    The number of log records reaches the number the Database Engine estimates it can process during the time specified in the recovery interval option.
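
    For reference, you can check what the recovery interval is set to; this is just a lookup query, nothing database-specific (0, the default, targets roughly a one-minute recovery):

        SELECT name, value_in_use
        FROM   sys.configurations
        WHERE  name = N'recovery interval (min)';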

    Brandie Tarvin, MCITP Database Administrator. LiveJournal Blog: http://brandietarvin.livejournal.com/

  • Brandie Tarvin (3/17/2010)


    Understand that minimum logging is different than "no" logging. You might want to change the Recovery mode to Bulk-Logged instead of Simple (unless the db is already set to Simple) before doing your delete. This way, if you need to recover, you'll still have a transaction log to recover from.

    Deletes are always fully logged (truncate table excepted, which is always 'minimally logged' - sort of) regardless of the recovery model.

    Changing from simple recovery also changes nothing until a full backup is taken to establish a base for a future restore.

    The log stays in auto-truncate mode until the first full backup after a change from simple recovery.
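
    In other words, switching out of simple only starts to matter once a base exists. A minimal sketch, with YourDb and the paths as placeholders:

        ALTER DATABASE YourDb SET RECOVERY FULL;

        -- Until this full backup completes, the log still truncates as if in simple recovery.
        BACKUP DATABASE YourDb TO DISK = N'D:\Backups\YourDb_full.bak';

        -- From this point on, regular log backups are what keep the log from growing unchecked.
        BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_log.trn';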

  • vidya_pande (3/17/2010)


    I have a simple solution for you.

    Copy the 5 million records you want to keep to another dummy or staging table.

    Truncate the original table.

    Then copy the 5 million rows from the dummy table back into the original table.

    If you do not have partitions created on the table, this is the fastest way.

    😉

    Simple, yes. But not always practical on a production system.

    Also, remember that foreign key constraints will need to be dropped, indexes re-created, and so on.

    Good idea, though.

  • vidya_pande (3/17/2010)


    I have a simple solution for you.

    Copy the 5 million records you want to keep to another dummy or staging table.

    Truncate the original table.

    Then copy the 5 million rows from the dummy table back into the original table.

    If you do not have partitions created on the table, this is the fastest way.

    😉

    I had originally thought to do that since the table I had wasn't referenced by foreign keys. But it's a replicated database, so no truncation is allowed.

  • How To Delete a Large Number of Records

    Limiting Deleted Rows by Using TOP

  • Hi there,

    I used a script to change around 60 customer test DBs from the full to the simple recovery model, as they did not need to be in full recovery.

    I did the change on Friday and today is Monday. I was expecting to see a lot of space freed up on the drive where the files are stored, thinking a checkpoint would run, but the reasons below tell me it won't happen straight away:

    ================

    If the database is using the simple recovery model, an automatic checkpoint is generated whenever the number of log records reaches the lesser of these two values:

    The log becomes 70 percent full.

    The number of log records reaches the number the Database Engine estimates it can process during the time specified in the recovery interval option.

    =================

    Some of these DBs have never been backed up but were in the full recovery model. Does this change anything?

    Or is it simply a matter of waiting?

  • hp_dba_uk (9/24/2012)


    Hi there,

    I used a script to change around 60 customer test DBs from the full to the simple recovery model, as they did not need to be in full recovery.

    I did the change on Friday and today is Monday. I was expecting to see a lot of space freed up on the drive where the files are stored, thinking a checkpoint would run, but the reasons below tell me it won't happen straight away:

    ================

    If the database is using the simple recovery model, an automatic checkpoint is generated whenever the number of log records reaches the lesser of these two values:

    The log becomes 70 percent full.

    The number of log records reaches the number the Database Engine estimates it can process during the time specified in the recovery interval option.

    =================

    Some of these DBs have never been backed up but were in the full recovery model. Does this change anything?

    Or is it simply a matter of waiting?

    Are you waiting for the space to become available to the OS? If so, that won't happen automatically. Changing the recovery model to SIMPLE does break the log chain and allows the transaction log to be truncated on every checkpoint. However, truncation of the transaction log only means marking the inactive VLFs inside the transaction log file as reusable, nothing else.

    If you want the freed space returned to the OS itself, you need to shrink the transaction log file.
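
    A minimal sketch of the shrink; YourDb, the logical file name, and the 1024 MB target are placeholders, so pick a target the log will actually need rather than shrinking to zero:

        USE YourDb;

        -- Find the logical name and current size of the log file first.
        SELECT name, type_desc, size * 8 / 1024 AS size_mb
        FROM   sys.database_files;

        -- Then shrink it to a sensible target in MB.
        DBCC SHRINKFILE (YourDb_log, 1024);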

    Note: if you know that within a few days the transaction log file is going to grow back to the same size again, there is no point in shrinking it.


    Sujeet Singh

  • To understand it better I am pointing you to an excellent article written by Gail:

    http://www.sqlservercentral.com/articles/Transaction+Logs/72488/


    Sujeet Singh

  • hp_dba_uk (9/24/2012)


    Some of these DBs have never been backed up but were in the full recovery model. Does this change anything?

    Or is it simply a matter of waiting?

    Oh, heavens. I think you just granted me my daily dose of database terror with that statement.

    If "some of these" databases have never been backed up, the first thing you should do upon reading this message is a FULL backup on all your databases. Then test the backups on a sandbox server to make sure your backup processes are working and not corrupting anything.

    The second thing you want to do is schedule regular backups (full, differential, file, etc.) based on your recovery strategy.

    Then, and only then, worry about your transaction log. There's a good chance the transaction logs are larger than they need to be if they never got backed up. If you really need the drive space (at the OS level, as Divine Flame commented), your best bet is to shrink only by small increments and let the log sit for a few days to see if it grows again. If not, shrink by another small increment.

    If you try to shrink too much of the log, you may see a performance hit as the SQL engine auto-grows the log again to account for the needed space, especially if it has to "thrash" for the growth. (Even though it isn't really disk thrashing when the engine grows the file, I do use the phrase "thrashing" to describe the back-and-forth the engine performs when it encounters a file that's just too small and the AutoGrow setting is also too small for the current set of transactions.)

    If, however, you don't need the disk space, I strongly advise leaving the log file alone.

    Brandie Tarvin, MCITP Database Administrator. LiveJournal Blog: http://brandietarvin.livejournal.com/

  • Divine Flame (9/24/2012)


    To understand it better I am pointing you to an excellent article written by Gail:

    http://www.sqlservercentral.com/articles/Transaction+Logs/72488/

    Thanks

  • Brandie Tarvin - Wednesday, March 17, 2010 7:57 AM

    parthi-1705 (3/17/2010)


    I have a table with 30 million records and I need to delete the 25 million oldest with as little logging as possible. How can this be done while users are continuously accessing the table?

    Understand that minimum logging is different than "no" logging. You might want to change the Recovery mode to Bulk-Logged instead of Simple (unless the db is already set to Simple) before doing your delete. This way, if you need to recover, you'll still have a transaction log to recover from.

    In Simple mode, checkpoints happen quite regularly and automatically. Usually, you'll hear people say "the transaction log truncates all transactions as soon as they are committed." It's not quite a true statement, but it does mean that transactions are unrecoverable almost instantaneously. The logging that occurs is only for the SQL Server engine, not for the DBA.

    Per BOL:

    If the database is using the simple recovery model, an automatic checkpoint is generated whenever the number of log records reaches the lesser of these two values:

    The log becomes 70 percent full.

    The number of log records reaches the number the Database Engine estimates it can process during the time specified in the recovery interval option.

    Hi Tarvin, is there any other parameter used by SQL Server to clear the log for reuse, apart from the two mentioned above?

  • farooq.md - Tuesday, March 21, 2017 11:25 AM

    Hi Tarvin, is there any other parameter used by SQL Server to clear the log for reuse, apart from the two mentioned above?

    Can you be more specific with your question? Is this an informational question only or are you actually experiencing issues?

    If the latter... What database recovery model are you dealing with? What kinds of backups are you doing, and how frequently? What are you looking for?

    Brandie Tarvin, MCITP Database Administrator. LiveJournal Blog: http://brandietarvin.livejournal.com/
