Transaction log file too large

  • Hello,

    I am administering a database where the transaction log is growing too fast. I backed up the transaction log this morning and shrank the log file to 50 MB, and by now it is already 615 MB again. That is gigantic given that our database file is only 612 MB as of now.

    Does anyone know why the log file would grow so quickly, and is there an automated way of shrinking it after a transaction log backup - for example, a step that runs the file shrink right after each transaction log backup? (We have a job that backs up the transaction log every hour.) I had thought the transaction log was truncated whenever it was backed up.

    Thanks in advance to anyone who can give me advice on how to solve this transaction log problem or who can tell me what to look at to help diagnose the issue.

    ---

    webrunner

    -------------------
    A SQL query walks into a bar and sees two tables. He walks up to them and asks, "Can I join you?"
    Ref.: http://tkyte.blogspot.com/2009/02/sql-joke.html

  • If it is growing that fast, there must be a significant amount of transaction processing (inserts/updates/deletes).  How often do you run transaction log backups during the day?  If you aren't running scheduled t-log backups, you should consider adding them.  Also, if you take the time to shrink the t-log, you are affecting system performance; and you are affecting performance again every time the t-log has to grow to record new transactions.

    You need to find a balance between the frequency of t-log backups and how large you can afford to let the t-log get.  The less often the t-log has to grow, the less likely there is to be a performance hit.  Our HR and Finance systems' t-logs are large and mostly empty most of the time, but I don't shrink them because I don't want the performance hit those systems would take if there were multiple requests to grow the t-log.
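
    A quick way to watch how full the log actually gets between backups is DBCC SQLPERF, which reports the log size and the percentage in use for every database.  This is just a monitoring sketch; nothing in it is specific to your setup:

        -- log file size and percentage currently in use, per database
        DBCC SQLPERF (LOGSPACE);

    If the 'Log Space Used (%)' figure is low right before each hourly backup, the file is simply sized for your peak activity and there is not much to gain by shrinking it.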

    hth,

  • Thanks for your reply. I'll try to answer your questions as best I can.

    We run transaction log backups every hour. We make a full backup once per day at 6:00 pm. It looks like the transaction log backup for the hour right after the full backup (say, the one at 7:00) is unusually large.

    Here is what I have been doing to shrink the log file (a rough T-SQL sketch of these steps follows the list):

    1. Back up the transaction log.

    2. Run DBCC SHRINKFILE to reduce the file to 25 or 50 MB.

    3. At first it looks like the log file shrinks only a little (say, from 615 MB to 560 MB).

    4. I repeat steps 1 and 2.

    5. At that point the log file is about 25 or 50 MB.
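
    For reference, here is roughly what steps 1 and 2 look like in T-SQL. The database name, log file logical name, and backup path below are placeholders, not our real ones:

        -- 1. back up the transaction log (placeholder path)
        BACKUP LOG MyDatabase
            TO DISK = N'D:\Backups\MyDatabase_log.trn';

        -- 2. shrink the log file to a 50 MB target (logical file name, size in MB)
        USE MyDatabase;
        DBCC SHRINKFILE (MyDatabase_log, 50);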

    Since I run the shrinkfile only once at the end of the day, I don't think it's impacting performance all day long. However, perhaps the timing of the transaction log backups and full backups, combined with the transactions themselves, is somehow driving up the transaction log file size during the course of the day.

    I hope this gives a few more clues to help you give some more advice.

    Thanks again!

    ---

    webrunner

    -------------------
    A SQL query walks into a bar and sees two tables. He walks up to them and asks, "Can I join you?"
    Ref.: http://tkyte.blogspot.com/2009/02/sql-joke.html

  • Another thing you will want to consider with file growth is what you have the growth increment set to. I had a friend who shrank his log daily, but by the end of the day it was close to 1 GB, and he noted it was only about 50% full. We checked the settings and found he had set growth to 100% instead of 10%, which meant the file doubled in size every time it grew.
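
    You can check the current growth settings, and switch a percentage growth to a fixed increment, with something like this. The database and file names here are made up, and 64 MB is just an example increment:

        -- run in the database you want to inspect; growth is in 8 KB pages unless is_percent_growth = 1
        SELECT name, type_desc, size * 8 / 1024 AS size_mb,
               growth, is_percent_growth
        FROM sys.database_files;

        -- replace percentage growth with a fixed increment (placeholder names)
        ALTER DATABASE MyDatabase
            MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 64MB);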

    I have also seen this caused by page movement in a few cases, and it can be your own design that causes it as well as user processes. In one case I needed to update a related table each time a change occurred, and for simplicity I had decided to delete all the records each time and reinsert them, to be sure they were an exact match of the pushed data. That was bad enough, except that the users then decided they should press 'save as draft' about every minute, because they wanted to be sure they did not forget and let the connection time out.

    There can be other reasons as well, such as bulk inserts while the database is set to the bulk-logged recovery model.

    You just need to understand why it is this way.

    As for backups, I would do a full backup to file, then shrink the log file, and then do another full backup, to be sure there is no chance your full backup misses any transactions.
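
    In other words, something along these lines, where the names and paths are placeholders:

        -- full backup, shrink the log, then another full backup
        BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase_full_pre.bak';

        USE MyDatabase;
        DBCC SHRINKFILE (MyDatabase_log, 50);   -- target size in MB

        BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase_full_post.bak';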

  • If you are doing the Full Backup in place of that hour's Transaction Log Backup, that is why your 7:00 Transaction Log backup is bigger.  The Full Backup does not truncate the transaction log.  You need to run a t-log backup either just before or just after the full backup if you want to ensure that the t-log is truncated.

     

  • Why do you think this is too large? Are you running out of disk space? If you aren't, then leave the t-log size alone. Since it just keeps growing back after you shrink it, the shrink and the re-allocation of disk space are wasted cycles. Leave well enough alone and you should see that the t-log size will stabilize.

    Yes, you'll get a bigger t-log backup right after a full backup. If the t-log size worries you, then do a single truncate and shrink right after the full backup.

    If you really need the size to be kept smaller, then do t-log backups more often. I'm doing them every 10 minutes on our production database, but that's mostly for our log-shipping setup.

    Steve G.

  • Hi, and thanks,

    The reason I think it is too large is that it is larger than the database file itself (the .mdf file). I thought that was a sign of a problem - is it not?

    I also wonder whether a large transaction log file degrades the performance of applications using the database - but I'm not sure, and if anyone knows whether that is true, and how the slowdown happens in detail, I would be very grateful to know.

    Thanks again.

    ---

    webrunner

    -------------------
    A SQL query walks into a bar and sees two tables. He walks up to them and asks, "Can I join you?"
    Ref.: http://tkyte.blogspot.com/2009/02/sql-joke.html

  • This is not necessarily a problem. The t-log size is (as was stated above) a reflection of the amount of change activity on the database. A small database with a large amount of updates/inserts/deletes will have a larger log file. The real question is: are you seeing degraded performance as the log file grows?

    Steve G.

  • Thanks, that is good to know. So a database that is, say, 610 MB, with a tlog file that is 615 MB, is not necessarily bad?

    In terms of performance, it is hard to say at this point. We are seeing some intermittent slowness in the web app, but I am investigating the DB side while others are investigating other possible causes. I would love to rule in or rule out the DB just to know what is happening.

    Do you know of a way to see more precisely whether or not the tlog file size is in fact affecting performance?

    By the way, I have been running a trace from my PC all morning looking for queries running longer than 10 seconds, and I have yet to see one show up. I don't know if that rules out the DB, but it certainly seems that no very long queries have been running so far today.
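
    For what it's worth, I believe a check straight from the DMVs should show the same thing without a client-side trace (this assumes SQL Server 2005 or later, and 10 seconds is just the threshold I happen to be using):

        -- requests that have currently been executing for longer than 10 seconds
        SELECT r.session_id,
               r.start_time,
               r.total_elapsed_time / 1000 AS elapsed_seconds,
               r.status,
               t.text AS sql_text
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.session_id > 50                      -- skip system sessions
          AND r.total_elapsed_time > 10000;          -- milliseconds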

    Thanks again!

    ---

    webrunner

    -------------------
    A SQL query walks into a bar and sees two tables. He walks up to them and asks, "Can I join you?"
    Ref.: http://tkyte.blogspot.com/2009/02/sql-joke.html

  • Sounds good - though 10 seconds would be a really long query. 😉

    There are many other things that can slow down performance, especially tables with large numbers of indexes that need to be updated during every insert, update or delete, and depending on MS for cursor handling and queries (i.e. .Net Framework automatic dataset stuff). I strongly suspect that t-log size has nothing directly to do with your application performance (but shrinking it regularly would).
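
    If you want a rough feel for how heavily indexed your tables are, a query along these lines (just a sketch - adjust the threshold to taste) lists the tables carrying more than a handful of indexes:

        -- tables with more than 5 indexes; each index is extra work for every insert/update/delete
        SELECT OBJECT_NAME(i.object_id) AS table_name,
               COUNT(*) AS index_count
        FROM sys.indexes AS i
        JOIN sys.tables AS t ON t.object_id = i.object_id
        WHERE i.type > 0                -- ignore heaps
        GROUP BY i.object_id
        HAVING COUNT(*) > 5
        ORDER BY index_count DESC;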

    In any case, performance troubleshooting and tuning is a topic in and of itself, and there's lots of people here who know a lot more about it than I do. 😀

    Steve G.

  • Whenever you shrink the log file, the server will grow it again - that's an expensive operation and a definite performance hit, unless you have Fast File Initialization (FFI) set. Add the Database/Data File Auto Grow and Database/Log File Auto Grow events to your profiler trace to check for this.

    Reference http://msdn2.microsoft.com/en-us/library/ms175935.aspx or Kalen Delaney's "Inside SQL 2005: The Storage Engine" (page 98, 1 paragraph only) for a brief description of FFI. This is not an option you can control from T-SQL (or SSMS for that matter), it's controlled by a Windows Security setting SE_MANAGE_VOLUME_NAME. If your SQL Server account has the permission, and you're on Win XP or Win 2003, FFI will be used by default, unless you turn it off with traceflag 1806.
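
    If you would rather not set up a separate Profiler trace just for this, the default trace (on by default in SQL 2005 and later, unless someone has disabled it) already captures the auto-grow events, so a query along these lines - only a sketch - will show recent growths:

        -- read auto-grow events back out of the default trace
        DECLARE @path nvarchar(260);
        SELECT @path = path FROM sys.traces WHERE is_default = 1;

        SELECT te.name       AS event_name,
               t.DatabaseName,
               t.FileName,
               t.StartTime,
               t.Duration / 1000 AS duration_ms
        FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
        JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
        WHERE te.name IN (N'Data File Auto Grow', N'Log File Auto Grow')
        ORDER BY t.StartTime DESC;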

  • I'm inclined to agree with the comments here about leaving the size alone.  ShrinkFile just wastes CPU cycles and causes overhead on the disk when it comes time to expand again.

    My suggestion would be to:

    - run the log backups on a more regular basis, i.e. every 30/15/10 minutes (do the '15').

    - define the log size to be about what you expect it to grow to (i.e. roughly 350 MB for the 30-minute or 200 MB for the 15-minute schedule) and leave it locked in (a sketch of the sizing command follows this list).  If there is free space on the drive and someone or something comes along and acquires that space just after you've shrunk the file, and then the log goes to expand again..... *boom*

    - forget about the ShrinkFile option.  Making your log file size go up and down like a yo-yo just wastes CPU cycles.
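
    The sizing itself is a one-liner; the database name, file name and numbers below are placeholders for whatever size you settle on:

        -- pre-size the log once and give it a sensible fixed growth increment
        ALTER DATABASE MyDatabase
            MODIFY FILE (NAME = MyDatabase_log, SIZE = 350MB, FILEGROWTH = 64MB);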

    That would be my simplified solution.

    A lack of planning on your part does not constitute an emergency on mine.

  • Another reason for the transaction log to fill up quickly could be an 'Optimization' maintenance plan ('reorganize data and index pages'). This can produce a transaction log roughly the size of the database.

    Willem

  • As alluded to earlier, a transaction log that keeps growing is usually indicative of a lot of transactions, from INSERT/UPDATE/DELETE statements to DDL.  Given the growth of the transaction log coupled with the performance issues you're experiencing, I might be inclined to look at blocking problems caused by the DML statements and reads.  Set up a trace for this.

    Additionally, everyone has their own tolerance for "long running" queries, but for a typical OLTP database anything over a second, or two is probably long.  Drop your trace to queries taking over 2 seconds and see what that returns.
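
    As a starting point on the blocking side, the DMVs will show who is blocked and by whom right now; this is only a live snapshot, so you would need to poll it or run a trace to catch intermittent blocking:

        -- requests currently being blocked, and the session blocking them
        SELECT r.session_id,
               r.blocking_session_id,
               r.wait_type,
               r.wait_time / 1000 AS wait_seconds,
               t.text AS sql_text
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.blocking_session_id <> 0;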

  • Never shrink any database file if you expect it to grow to its original size within 3 months, except for an emergency.  All you are doing is causing Windows NTFS disk fragmentation that WILL harm your performance.  Disk fragmentation can only be cured by running a disk defrag - an index rebuild does not impact this problem.

    Doing a regular log file shrink is bad news.  You waste time doing the shrink, then during normal business use your users get hung up now and then while the file grows...

    Remember, SQL requires that a transaction log is large enough to perform a rollback of all uncommitted work.  If you have 100 MB of uncommitted updates in your log, SQL will need at least 200 MB of log space (plus some extra), to ensure that the Rollback would work if it was needed.  In your situation, a log file size of 600 MB means you have 200 to 300 MB of uncommitted updates.  If this worries you, get your application people to add a COMMIT at the end of each logical unit of work.
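
    If you want to see whether open transactions are what is keeping the log large, a couple of quick checks (the database name is a placeholder):

        -- oldest active transaction in the database, if any
        DBCC OPENTRAN ('MyDatabase');

        -- what, if anything, is currently preventing log truncation
        SELECT name, log_reuse_wait_desc
        FROM sys.databases
        WHERE name = N'MyDatabase';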

     

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
