How To Truncate the Log with Transactional Replication

  • I am running transactional replication and seeing the transaction log growing in the publisher database; it is a little over 200GB.  Just a quick note: I moved the published database to a new instance; the old instance had a similar transactional replication setup, but its transaction log remained steady at around 5GB - the data file is about 130GB.

    I back up the publisher database daily through a maintenance plan, but I didn't see any option to truncate the log - although I had the same setup on the older instance and, again, the log didn't grow there.

    I saw an article that says I would need to stop the replication, truncate the log, and create a new snapshot to reinitialize the replication.  Creating the snapshot is very time consuming and takes up a lot of disk I/O and network bandwidth, which, needless to say, causes issues for our production website.  I have also read many other articles and am still unsure of the best way to truncate the log and shrink the file while transactional replication is continuously running.

    Please advise.

    Thanks,
    Ken

  • Did you check log_reuse_wait? Is the Log Reader Agent working properly (no errors)? Do you see data being replicated to the Subscriber? What recovery model are you using on that published DB? (A query sketch for the log_reuse_wait check follows this post.)

    Could you tell us more about how you moved the published database and set up replication on that server? You said that this published DB is a backup from the other Publisher.

    If the Log Reader is working fine and reading transactions, the issue may not be replication. Maybe there is some other active transaction on the database?

    Cheers,
    Bartek
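
    A minimal sketch of the log_reuse_wait check suggested above, assuming a hypothetical database name:

      -- Why can't the log space be reused? Check the wait reason and recovery model.
      SELECT name, recovery_model_desc, log_reuse_wait_desc
      FROM sys.databases
      WHERE name = N'YourPublishedDb';   -- placeholder database name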

  • Hi Bart,

    The log_reuse_wait says "LOG BACKUP".
    The log reader is running as I am seeing data replicated to the subscriber pretty much in real-time.
    The recovery model is FULL.
    I didn't really move the published database; I merely set up replication from the old published database to the new one.  Then I set up a new Publisher/Distributor on the new instance and cut over transactions to the new instance.
    I ran DBCC OPENTRAN at the time I submitted the original post and didn't see any open/ongoing transactions (the command sketch follows this post):

    Transaction information for database 'xxxxxxx'.

    Replicated Transaction Information:
       Oldest distributed LSN  : (9748:19585:3)
       Oldest non-distributed LSN : (9748:19587:1)
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    -Ken
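
    A minimal sketch of the command that produces output like the above (the database name is a placeholder):

      -- Reports the oldest active transaction and, for a published database,
      -- the oldest distributed and non-distributed replicated LSNs.
      DBCC OPENTRAN (YourPublishedDb);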

  • I do not think it is a replication issue in that case.
    I believe you are running in the full recovery model. I am assuming you created the new DB on the new instance from scratch? The model database is in the full recovery model by default, so all new user databases inherit that setting.
    Have you tried running a transaction log backup? BTW, the option to truncate the transaction log directly (BACKUP LOG ... WITH TRUNCATE_ONLY) was removed some time ago. You either back up the transaction log to disk or change the recovery model to SIMPLE and change it back.

    LOG_BACKUP pretty much means that the virtual log files cannot be reused because they have not been backed up yet (when running in the full recovery model).
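
    A minimal sketch of such a transaction log backup, assuming a hypothetical database name and backup path:

      -- Back up the transaction log so that inactive VLFs can be reused.
      BACKUP LOG [YourPublishedDb]
      TO DISK = N'D:\Backups\YourPublishedDb_log.trn'   -- placeholder path
      WITH STATS = 10;   -- progress message every 10 percent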

  • Bart,
    Yes, I am running in the full recovery model.  I did create the new DB from scratch, and then it became the subscriber of the old instance.  After the cutover, the new instance became the Publisher/Distributor (both running on the same instance).  I haven't run a transaction log backup yet because I am unsure whether that would impact the transactional replication.  If I understood correctly, a transaction log backup does not reduce the file size - I need to run DBCC SHRINKFILE for that, correct?

    -Ken

  • Hi,

    A transaction log backup won't affect replication at all. You can run transaction log backups without any issues. Yes, once you back up the log you will need to run DBCC SHRINKFILE to reduce the size of the log file (a sketch follows after this post).

    Cheers,
    B
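
    A minimal sketch of that sequence, with placeholder database/file names and an arbitrary target size:

      USE [YourPublishedDb];
      GO
      -- Find the logical name and current size of the log file.
      SELECT name, type_desc, size * 8 / 1024 AS size_mb
      FROM sys.database_files;
      GO
      -- Shrink the log file to roughly 8 GB (target size is given in MB).
      DBCC SHRINKFILE (N'YourPublishedDb_log', 8192);
      GO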

  • Great and thank you!

  • I did a dry run, restoring a backup to another instance.  The log backups went fine, but DBCC SHRINKFILE threw an error:

    Cannot shrink log file 2 (xxxxxxx_log) because the logical log file located at the end of the file is in use.

    I ran DBCC LOGINFO and noticed there are rows with Status = 2; I think this is due to the replication that was still running on the server the backup was taken from, where not all transactions had been replicated yet.  So I am wondering - I don't have the right setup in this instance to try it out - should I stop the Log Reader Agent before doing the log backup?  If I do that, would all the transactions be replicated to the subscribers so that all statuses get updated to 0?

    Thanks,
    Ken

  • That may not be related to replication. Status = 2 means that the VLF is active (at any given point at least one VLF needs to be active). What does log_reuse_wait say this time?

    In your scenario, Status = 0 will be set when two conditions are met:
    1. Transactions that are marked for replication have been read by the Log Reader and committed to the distribution database.
    2. You have backed up the transaction log.
    If these two things happen, then you should have only one row with Status = 2 (the active VLF). (A query sketch follows this post.)

    You do not need to stop the Log Reader while performing the log backup or shrinking the file. I have almost never been forced to stop a Log Reader :).
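
    A minimal sketch of the VLF check discussed above (the database name is a placeholder; sys.dm_db_log_info requires SQL Server 2016 SP2 or later):

      USE [YourPublishedDb];
      GO
      -- One row per VLF; Status = 0 means reusable, Status = 2 means active.
      DBCC LOGINFO;
      GO
      -- Newer versions expose the same information through a DMV.
      SELECT vlf_sequence_number, vlf_size_mb, vlf_active, vlf_status
      FROM sys.dm_db_log_info(DB_ID());
      GO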

  • Bart,

    Right now, log_reuse_wait says "NOTHING". I do recall that after the first log backup, I was not able to shrink the log because of the error I posted earlier.  Copying it here for reference - there were about 10 or so rows with Status = 2:

    Cannot shrink log file 2 (xxxxxxx_log) because the logical log file located at the end of the file is in use.

    I believe it was after I did a database backup that I was able to shrink the file. 

    I am ready to run this against my production database. Do you think:
    - I will run into the shrink file error again?
    - I need to do a database backup if I do run into the shrink file error?
    - I can run it while the production database is actively being used? Any possible performance degradation?

    Thanks,
    Ken

    A log backup should clear out the log and leave only one VLF active. As far as I can see, most probably on your dry run you did not have any data coming in, so the log did not "roll over" to the VLFs at the beginning of the log file, and the only active VLFs were at the end of the file.
    I believe that in production, where you have normal transactions coming in, it won't be a problem. If it is, wait a bit, do another log backup, and then run the shrink.
    I think running a log backup definitely won't do any harm. It is a normal operation that you need to do regularly if you are running in full recovery.

    - I will run into the shrink file error again? - As I said, I doubt it.
    - I need to do a database backup if I do run into the shrink file error? - Run the log backup again after some time and check DBCC LOGINFO; also keep the Log Reader running.
    - I can run it while the production database is actively being used, any possible performance degradation? - The log backup is most probably big now and may take some time to complete. The only impact I can see is at the storage level. Some things can go a bit slow, but only due to the size of the log. However, we did not see performance issues even when backing up 100GB+ transaction logs.

    Hope it helps.

  • Cool, sounds like a plan.

    Thank you!

  • For the record, I ran into a couple of issues executing the plan above:

    - Backing up the (now) 300GB log file requires the same amount of space, and I don't believe "with compression" was an option.  I didn't want to add an extra 300GB of disk space to the instance just to back up the log, so I backed up over the network to another instance.  It took a while...

    - After the log backup was done, I still got the same error posted above when trying to run DBCC SHRINKFILE.  I tried backing up the DB, then issued another log backup and finally DBCC SHRINKFILE again; however, I received the same error.  I ended up putting the DB in the simple recovery model before shrinking, then put it back to the full recovery model.
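
    For reference, a sketch of the two workarounds described above (placeholder names; WITH COMPRESSION availability depends on SQL Server version and edition):

      -- Compress the log backup to reduce the space needed on the target share.
      BACKUP LOG [YourPublishedDb]
      TO DISK = N'\\OtherInstance\Backups\YourPublishedDb_log.trn'   -- placeholder path
      WITH COMPRESSION, STATS = 10;
      GO
      -- Last resort used above: switch to SIMPLE, shrink, switch back to FULL.
      -- Note that this breaks the log backup chain (see the next reply).
      ALTER DATABASE [YourPublishedDb] SET RECOVERY SIMPLE;
      GO
      DBCC SHRINKFILE (N'YourPublishedDb_log', 8192);
      GO
      ALTER DATABASE [YourPublishedDb] SET RECOVERY FULL;
      GO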

  • ken 16613 - Thursday, November 1, 2018 11:43 PM

    When you do that (switch to the simple recovery model and back), you need to take at least a differential ("DIF") backup to re-establish the log backup chain.  A full backup will also work. (A sketch follows below.)

    --Jeff Moden


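    A minimal sketch of re-establishing the backup chain after switching back to FULL recovery (placeholder names; a differential backup assumes an earlier full backup exists as its base):

      -- Take a differential (or full) backup so that subsequent log backups
      -- have a valid base after the recovery-model switch.
      BACKUP DATABASE [YourPublishedDb]
      TO DISK = N'D:\Backups\YourPublishedDb_diff.bak'   -- placeholder path
      WITH DIFFERENTIAL, STATS = 10;
      GO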

  • Thanks, @jeff.  I didn't receive a notification of your reply for some reason.

    For clarification, at what point do I take the "DIF" or "FULL" backup - right after putting the DB in simple recovery mode?
