Will shrinking the transaction log file (LDF) break the log chain?

  • Hi! I've read several posts about logs and shrinking, but I couldn't decide where to ask about my problem...

    Well, the situation is that I have a database on a production server with transaction log shipping to another server. The backup log job was originally scheduled to occur every day, every 15 minutes, but the .trn files grew to a large size, so I rescheduled the job to run every 5 minutes. However, the LDF file remains much bigger than I would like... The question is: if I shrink it, will I break the log chain? What do you think I should do?

  • Shrinking does not break the log chain.

    Make the shrink a one-off operation; do not shrink it repeatedly. First of all, be sure the log does not need to be its current size to support activity. The largest log is likely to be produced when your reindexing runs.
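
    If you do go ahead with a one-off shrink, something along these lines is typical. This is only a sketch: the database name MyDB, the logical log file name MyDB_log, and the 4 GB target are assumed placeholders, not values from this thread.

        -- One-off shrink sketch; all names and sizes are placeholders.
        -- Find the log's real logical name first:
        --   SELECT name FROM sys.database_files WHERE type_desc = 'LOG';
        USE MyDB;

        -- DBCC SHRINKFILE takes a target size in MB; 4096 MB = 4 GB here.
        -- Run this once, never on a schedule.
        DBCC SHRINKFILE (MyDB_log, 4096);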

    ---------------------------------------------------------------------

  • Al-Rahim (10/24/2012)


    However, the LDF file remains much bigger than I would like

    It will grow; that is the nature of the log file. The more operations are performed, the more the log grows. Watch out for tasks such as index rebuilds and T-SQL with long, heavy intermediate steps, i.e. the long-running transaction case.

    So it is better to take proper, regular log backups, or add more disk for the log.
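
    For reference, a single log backup looks like the sketch below; in this thread the log shipping backup job already issues these on its schedule, and the database name and path are placeholders.

        -- Illustration only: with log shipping in place, the backup job
        -- already runs statements like this. Name and path are placeholders.
        BACKUP LOG MyDB
            TO DISK = N'D:\LogBackups\MyDB_20121024.trn'
            WITH CHECKSUM;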

    -------Bhuvnesh----------
    I work only to learn Sql Server...though my company pays me for getting their stuff done;-)

  • As already mentioned, there is zero impact on the log chain if you shrink the transaction log. But the question remains, why do you want to shrink the transaction log? Other than doing restores to other environments with inadequate space, there really isn't a good reason to shrink the transaction log. It grows to a certain size because it requires that amount of space to log transactions until the next transaction log backup occurs.

    Typically, the index maintenance process is what causes it to grow the most. Make sure that you have a smart index maintenance process in place and do not just use the database maintenance wizard so that you are only rebuilding the indexes that are fragmented. Rebuilding an index that isn't fragmented generates a lot of unnecessary entries in the transaction log.
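
    As a sketch of what "only rebuild what is fragmented" means in practice, a query like the one below reports fragmentation per index. The 5% / 30% thresholds are common guidance, not anything prescribed in this thread.

        -- Sketch: report fragmentation so only fragmented indexes get rebuilt.
        -- Thresholds (5% reorganize, 30% rebuild) are assumed conventions.
        SELECT  OBJECT_NAME(ips.object_id)        AS table_name,
                i.name                            AS index_name,
                ips.avg_fragmentation_in_percent,
                CASE
                    WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
                    WHEN ips.avg_fragmentation_in_percent >= 5  THEN 'REORGANIZE'
                    ELSE 'leave alone'
                END AS suggested_action
        FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN    sys.indexes AS i
                ON  i.object_id = ips.object_id
                AND i.index_id  = ips.index_id
        WHERE   ips.index_id > 0;   -- skip heaps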

    Check out the following two blog posts by Kimberly Tripp:

    http://www.sqlskills.com/blogs/kimberly/post/8-Steps-to-better-Transaction-Log-throughput.aspx

    This one contains links to several smart index maintenance processes.

    http://www.sqlskills.com/blogs/kimberly/post/Database-Maintenance-Best-Practices-Part-I-e28093-clarifying-ambiguous-recommendations-for-Sharepoint.aspx

    You can also check out Ola Hallengren's scripts. I know a lot of DBAs who have implemented his maintenance scripts.

    http://ola.hallengren.com/

  • I know the LDF will grow ad infinitum if I don't take log backups... and the log shipping scheme will not work if I don't. Regarding indexing: Ola's solution is indeed what I've implemented...

    As I wrote, my goal is to give the LDF an ideal size: somewhere between its giant current size and a size that accommodates the longest estimated transaction. That is why I want to shrink the LDF file a little, to deallocate the unnecessary allocated space, not all of the allocated space. Resources are limited; the requirements aren't... so we need to strike a balance.
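
    A sketch of that right-sizing, with placeholder names and an assumed 8 GB target (pick whatever size covers your longest transaction, typically the biggest index rebuild):

        -- Right-sizing sketch; MyDB, MyDB_log and 8192 MB are placeholders.
        USE MyDB;
        DBCC SHRINKFILE (MyDB_log, 8192);   -- shrink to 8 GB, not to the minimum

        -- Optionally use a fixed autogrowth increment so any further growth
        -- is predictable rather than percentage-based.
        ALTER DATABASE MyDB
            MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 512MB);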

    Thanks a lot to all for the suggestions.
