Multiple Transaction Log Files

  • We have multiple transaction log files configured for some of our high-volume clients, for disaster-prevention reasons. If transaction log backups fail repeatedly during late-night activity and no tech notification goes out in time, a secondary log file on a 'spillover' volume generally keeps things running until the backup and truncation issues are resolved.

    Our issue with this strategy: once SQL Server has hit a threshold that pushes it into this secondary log file, it appears to continue using the secondary file even after the backup issues are resolved, leaving the primary log file alone. What is the reason for this? And how do we ensure SQL Server returns to using the primary log file (perhaps because it's on a SAN and resides on higher-performance drives than the failover file does)?

    Thanks!

  • You can't. SQL Server uses transaction log files sequentially; you will have to wait until normal log activity has left no active VLFs in the secondary log file.

    Try to get the disk supporting the primary log file large enough to handle your needs.
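    If you want to see whether the secondary file still holds active VLFs, a query along these lines works (sys.dm_db_log_info is available from SQL Server 2016 SP2; older builds can use DBCC LOGINFO instead). The database name is a placeholder:

        -- Count total and active VLFs per physical log file.
        -- file_id identifies the physical file; vlf_active = 1 marks VLFs
        -- that are still part of the active log and cannot be reused yet.
        SELECT  file_id,
                COUNT(*) AS total_vlfs,
                SUM(CASE WHEN vlf_active = 1 THEN 1 ELSE 0 END) AS active_vlfs
        FROM    sys.dm_db_log_info(DB_ID(N'YourDatabase'))
        GROUP BY file_id;

        -- Once active_vlfs for the secondary file reaches 0, the active log
        -- has wrapped back into the primary file.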

    ---------------------------------------------------------------------

  • Hmmm, that's crappy. The two files are supposed to be addressed as one sequential topology, but evidently they are not.

    Often, with constrained customer SANs hosting multiple vendors' databases, it can be problematic to grow volumes enough to prevent disaster scenarios involving a single log file. We deliberately configure failover volumes on slower disks (cheaper, used for archiving) to accommodate this, as most large-data-volume products do, because we have many databases in our topology and each one could run into problems from a different failure in the backup strategy. Banking and healthcare data doesn't care if you don't have a plan for backup or notification system failures.

    So I'll set the secondary log file's maximum size to something that buys me a few hours of client throughput, and yell at Microsoft for not considering this issue (Oracle has no problem dealing with log files in this way).
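    For reference, capping the spillover file is a single ALTER DATABASE call; a minimal sketch with placeholder database and file names and an arbitrary 50 GB cap:

        -- Cap the secondary ('spillover') log file so a prolonged backup
        -- failure cannot fill the slower archive volume.
        ALTER DATABASE YourDatabase
            MODIFY FILE (NAME = N'YourDatabase_log2', MAXSIZE = 50GB);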

    Thanks.

  • mtnclimberutah (3/7/2013)

    Hmmm, that's crappy. The two files are supposed to be addressed as one sequential topology, but evidently they are not.

    They are. SQL Server will use one, then the second, then the first, then the second, and so on. The log is a circular file, and if you have multiple files they're treated as one larger circular file: used from beginning to end, then wrapping back to the beginning.

    If you have two log files, one on fast storage and one on slow, you're going to see erratic performance: operations will be slower whenever the second file is in use. I would suggest a single log file, sized for the database's normal usage and with autogrow enabled so there's some spare room when necessary, plus monitoring on the backups.

    Log backups failing for five hours aren't just a concern because of the log growing; they also mean increased data loss if the production system goes down, data loss that may be outside the acceptable range.
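    A simple way to monitor this is to check the time since the last log backup recorded in msdb; a sketch (the one-hour threshold and the alerting mechanism are site-specific assumptions):

        -- List FULL/BULK_LOGGED databases whose last log backup is missing
        -- or more than an hour old.
        SELECT   d.name,
                 MAX(b.backup_finish_date) AS last_log_backup,
                 DATEDIFF(MINUTE, MAX(b.backup_finish_date), GETDATE()) AS minutes_since_log_backup
        FROM     sys.databases AS d
        LEFT JOIN msdb.dbo.backupset AS b
               ON b.database_name = d.name
              AND b.type = 'L'                 -- 'L' = transaction log backup
        WHERE    d.recovery_model_desc <> 'SIMPLE'
          AND    d.database_id > 4             -- skip system databases
        GROUP BY d.name
        HAVING   MAX(b.backup_finish_date) IS NULL
              OR MAX(b.backup_finish_date) < DATEADD(HOUR, -1, GETDATE());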

    Banking and healthcare data doesn't care if you don't have a plan for backup or notification system failures.

    The bank I used to work for most certainly did. A failure of the backup system is a crisis; a failure of the notification system is a serious problem.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • The multiple log files will be addressed as one sequential topology, but importantly, if the log contains multiple physical log files, the logical log will move through all of the physical log files before it wraps back to the start of the first physical log file. So all the files act as one wrap-around file.

    The logical log being the span from where the active VLFs start to where they end.

    How long the secondary takes to clear, and can therefore be dropped, will depend on its size and the level of database activity.
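    Once the active log has wrapped back into the primary file, dropping the secondary is straightforward; a sketch with placeholder names (REMOVE FILE will error if the file still contains active log):

        -- Take a log backup so inactive VLFs in the secondary can be cleared.
        BACKUP LOG YourDatabase TO DISK = N'X:\Backups\YourDatabase_log.trn';

        -- Remove the spillover log file; this fails if it still holds active VLFs.
        ALTER DATABASE YourDatabase REMOVE FILE YourDatabase_log2;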

    You'll be wasting your time shouting at MS; it's designed like that to be simple and robust, and it's got to be simpler than Oracle's redo logs 🙂

    ---------------------------------------------------------------------

  • That's what happens when you do other things whilst replying: Gail pops up :-). But at least you have confirmation now.

    ---------------------------------------------------------------------

