Full Transaction log

  • GNUZEN (3/18/2009)


    Here we have a publisher database whose transaction log has grown very large, even though we have log backups and full backups in place. What is the best practice to truncate the transaction log on the publisher database?

    The best practice, in any environment, is never to truncate the transaction log. That means none of the following:

    BACKUP LOG MyImportantDB WITH NO_LOG

    BACKUP LOG MyImportantDB WITH TRUNCATE_ONLY

    DUMP TRAN MyImportantDB WITH NO_LOG

    All of those break the log chain. That means no log backups and no point-in-time restores after that until a full backup is run.

    With replication, a full log is often the result of the log reader not running or running slowly. Either way, the solution is to fix the problem with the log reader. The inactive log entries cannot be removed until the log reader has processed them.
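
    For example, a quick check of log_reuse_wait_desc (the database name below is just a placeholder) will confirm whether replication is what is preventing the log from being truncated:

    -- REPLICATION in this column means the log reader has not yet
    -- processed the pending transactions, so the log cannot be truncated.
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'MyImportantDB';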

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Gail,

    In response to your post, let me add that I believe a backup should be done in conjunction with DUMP TRAN when it is used. There are a number of methods that should probably be tried before resorting to a DUMP TRAN WITH NO_LOG or some such. However, I would hesitate to say never; after all, there is a time and place for everything.

    So I would say best practice includes addressing each issue on its merits and using the tools required to accomplish the job at hand in the most effective and efficient way within the given time frame.

    This is definitely a debatable point, and there are a number of arguments to be made for which methods to use when. I will stand by what I said: I think this method should have been included (perhaps with a bit more detail and a disclaimer).

    -Joseph Foster

  • BACKUP LOG MyImportantDB WITH NO_LOG

    BACKUP LOG MyImportantDB WITH TRUNCATE_ONLY

    DUMP TRAN MyImportantDB WITH NO_LOG

    Just an additional point. All three of those were deprecated in SQL 2005 and removed in SQL 2008. From 2008 onwards the only way to remove inactive entries from the log is to either back it up or set the database to simple recovery.

    BACKUP LOG AdventureWorks WITH NO_LOG

    Msg 3032, Level 16, State 2, Line 1

    One or more of the options (no_log) are not supported for this statement. Review the documentation for supported options.

    BACKUP LOG AdventureWorks WITH TRUNCATE_ONLY

    Msg 155, Level 15, State 1, Line 1

    'TRUNCATE_ONLY' is not a recognized BACKUP option.

    DUMP TRAN AdventureWorks WITH NO_LOG

    Msg 156, Level 15, State 1, Line 1

    Incorrect syntax near the keyword 'TRAN'.
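
    On 2008 and later, the supported way to clear the inactive portion of the log is simply a routine log backup, along these lines (the backup path is just a placeholder):

    -- A normal log backup removes the inactive log records
    -- without breaking the log chain.
    BACKUP LOG MyImportantDB
    TO DISK = N'D:\Backups\MyImportantDB_log.trn';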

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Gail,

    Good point and well taken.

    That should probably go along with the best practices: keep an eye on upcoming versions.

    For those using pre-SQL 2008 servers, this would still be in the toolbox if needed, and it should be used with its scope in mind.

  • jfoster (3/18/2009)


    For those using pre-SQL 2008 servers, this would still be in the toolbox if needed and should be used with their scope in mind.

    I'd say no. Even pre-2008, either back the log up to disk or switch (temporarily) to simple recovery. Both will result in exactly the same log truncation that BACKUP ... WITH TRUNCATE_ONLY does; the first doesn't break the log chain, and the second does, but at least it's pretty obvious what's actually happening.

    The problem I have with BACKUP ... WITH TRUNCATE_ONLY is that it's used and recommended without people realising what it actually does. ALTER DATABASE ... SET RECOVERY SIMPLE makes it pretty clear that you're not in full recovery any longer.
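
    If you do take the temporary switch to simple recovery, a rough sketch (the database name and backup path are placeholders) would be:

    USE MyImportantDB;
    GO
    -- In simple recovery the next checkpoint truncates the inactive portion of the log.
    ALTER DATABASE MyImportantDB SET RECOVERY SIMPLE;
    CHECKPOINT;
    -- Switch back and take a full backup so log backups can resume from here.
    ALTER DATABASE MyImportantDB SET RECOVERY FULL;
    BACKUP DATABASE MyImportantDB
    TO DISK = N'D:\Backups\MyImportantDB_full.bak';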

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I guess it depends on how many hands are hitting the server.

    The situation where I would say it is acceptable is when others won't need to know what happened. That is, if you aren't the DBA or acting in some administrative capacity for the server, then you probably shouldn't do it. Likewise, if you are sharing administrative responsibilities with one or more people, you might want to avoid doing it unless you consult with the other members of your group.

    I wouldn't necessarily set that up as an automated feature, and it should be documented where, how, and why the process took place, etc., but that goes more to the business model the department is running under.

  • Good article and everyone has brought up some good points.

    However, we should also investigate the reasons why the log filled up in the first place.

    For example, a single process may be filling the log when it does a million-row insert. If that process runs daily, then your problem will return daily. That process can be modified to insert the data in smaller batches, ensuring each chunk is committed before the next one starts.
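
    A minimal sketch of that kind of batching (a batched delete here, but the same pattern applies to a large insert), with a made-up table and column purely for illustration:

    -- Process the work in small committed batches instead of one huge transaction,
    -- so the log space used by each batch can be reused after a log backup
    -- (or at the next checkpoint in simple recovery).
    DECLARE @rows int;
    SET @rows = 1;
    WHILE @rows > 0
    BEGIN
        DELETE TOP (50000) FROM dbo.StagingTable
        WHERE ProcessedDate < '20090101';
        SET @rows = @@ROWCOUNT;
    END;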

    - Paul

    http://paulpaivasql.blogspot.com/

    Strangely, although we keep everything in FULL recovery mode and do transaction log backups every few hours, differentials every night, and fulls twice a week, we have some databases (always 3rd party, never our own) whose log files simply grow and grow. I end up finding a 100 MB db with a 10 GB log file - and it's 99% empty.

    My theory - and that's all it is - is that those vendors are creating the problems themselves.

    Many a third-party vendor has proven they don't know much about the difference between a database and a sequential file, and they often wrap huge amounts of work in transactions that are NOT necessary, or rebuild huge tables from scratch instead of just updating them.

    There often is no practical way (certainly no reliable scripted way) to clear out the space when the usual cycle of dumps doesn't accomplish it, not without taking the database down. It's a real waste of time and money.

    Even though I've always believed "never ever use auto-shrink", I'm reconsidering for those troubled dbs. After all, these are 64-bit 64GB 8-way servers with multipath, non-front bus IO. And that auto-shrink rule I've known since the days of the /3GB switch...
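
    Rather than auto-shrink, a one-off scripted shrink of just the bloated log file after a log backup might be a middle ground. A rough sketch, with a made-up database name, logical log file name, and target size (in MB):

    USE ThirdPartyDB;
    GO
    -- Back the log up first so the inactive portion can be released,
    -- then shrink only the log file back to a sensible size.
    BACKUP LOG ThirdPartyDB TO DISK = N'D:\Backups\ThirdPartyDB_log.trn';
    DBCC SHRINKFILE (ThirdPartyDB_log, 1024);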

    Like I say - never see it in anything we build ourselves, nor on Sybase.

    Roger Reid

    Roger L Reid

    Sorry guys, I have missed all the banter 🙂 I was a little busy at work today. I will join the discussion.

  • GilaMonster (3/18/2009)


    Your first suggestion for a full tran log is to shrink it. Maybe I'm missing something, but that's completely the opposite of what's needed.

    The log is full, ie, there is no free space within the log file. Since there is no free space within the log file, a shrink will find no space to release to the OS. Even if the shrink did find some free space, that'll just make the situation worse.

    If the log file is full you need to either reduce the amount of data inside it by either backing the log up or switching to simple recovery, or you need to grow the log file to give it more space.

    Thanks for the feedback, Gail. I see your point; I probably should not have mentioned shrinking as my first point. I outlined a couple of steps, and I should not have presented that as the first thing to do. We would not be able to shrink the log file until space inside the log has been freed.
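
    A quick way to check how much of each log is actually in use before attempting a shrink is:

    -- Reports log file size and the percentage currently in use
    -- for every database on the instance.
    DBCC SQLPERF (LOGSPACE);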

  • jfoster (3/18/2009)


    Krishna,

    I thought I'd post another alternative which I didn't see in your article. I've used this in the past with good results. This method allows me to run it on a live database without causing issues. I have had cases where the log file never seemed to decrease after a SHRINKDATABASE, but this approach has had 100% good results. Of course, the disclaimer would be that you are definitely purging the log file(s), so there is no going back if you didn't back them up.

    USE [AdventureWorks]

    DUMP TRAN [AdventureWorks] WITH NO_LOG

    GO

    DBCC SHRINKDATABASE ('AdventureWorks', 1)

    Thanks for sharing that, jfoster 🙂 Every day you learn something new.

  • Paul Paiva (3/18/2009)


    Good article and everyone has brought up some good points.

    Thanks for the feedback, Paul.

    However we should also investigate additional reasons that the log filled up in the first place.

    Yes, definitely, but that would need one more article.
