SQLServerCentral Article

SQL Log File Too Big


Is your SQL database log file too big? Maybe it is. But maybe it’s not.

When log files keep growing and appear to be too big, some might suggest switching to Simple recovery, shrinking the log file, and switching back to Full recovery. While this will shrink the file and free up disk space, the log may simply grow right back. Worse, the database will be at risk for data loss in the event of an unexpected disaster, because switching to Simple recovery breaks the log backup chain: you will only be able to recover data to the point in time of your last full backup. If it is important to minimize data loss and recover data to a point in time, it is a very bad idea to use this method.

If your log file has grown very big, this article can help.

The Most Common Reasons for a Large Log File

There are a few common causes of this situation.

  1. Your database is set to Full Recovery and no transaction log backups are happening
  2. Large inserts or deletes are happening (and the operations really need to be that big)
  3. Long running transactions are open (such as index maintenance or bulk import)

It’s important to have enough room in your log file for SQL Server to do its work. Transactions in your database (i.e. changes to data) are logged, and these records are written sequentially to the log file. Once a transaction has committed and its records are no longer needed – because a checkpoint has written the changes to disk (Simple recovery), or a transaction log backup has occurred (Full recovery) – SQL Server marks that portion of the log as ready for re-use. This means that the log file is empty (or emptier…) but it does not shrink the file.
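If you want to see why SQL Server is not reusing log space, the log_reuse_wait_desc column of sys.databases reports the current reason (the database name below is a placeholder):

```sql
-- Check why the transaction log cannot be truncated / reused.
-- Common values: NOTHING, LOG_BACKUP, ACTIVE_TRANSACTION, CHECKPOINT.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase';  -- placeholder database name
```

A value of LOG_BACKUP, for example, tells you the database is waiting on a transaction log backup before it can mark log space for re-use.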

Think of your log file as being like a reusable grocery bag that you use and fill each week.

When you take the groceries out of the bag (truncate), the size of the bag is still the same, and it’s ready for more groceries (same amount) to fit in. Shrinking the bag will make it smaller, and assuming you need to put the same amount of groceries in the following week, you will have less storage space. The groceries won’t fit, so don’t shrink your grocery bag if you know you need the space!

The obvious way to prevent a log file from growing too large is to size it correctly up front, but sometimes events occur that require us to be reactive instead of proactive.

Solutions

Here are a few solutions to the problem of a too large transaction log file.

If your database is set to Full Recovery and no Tlog backups are happening

You can do one of two things in this situation:

  • Set the database to Simple recovery – do this only if you don’t care about point-in-time recovery in the event of disaster. This would most likely be used on a test or dev database, and is also suitable for data warehouses with read-only data.
  • Leave the database in Full recovery, but schedule transaction log backups at a frequency that matches your tolerance for possible data loss. In a perfect world, production databases should be in Full recovery for disaster recovery reasons, but the recovery model and frequency of backups will always depend on a business’ tolerance for data loss. Backing up your log file once every day or three won’t really help with log growth, especially in a highly transactional database, because the log file will continue to grow until the log is backed up. I’ve had clients that back up transaction logs anywhere from every 2 minutes to every 6 hours – the frequency of your Tlog backups really depends on your business needs.
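A minimal log backup statement looks like the following; in practice you would schedule it through SQL Server Agent or your backup tooling, and the database name and backup path here are placeholders:

```sql
-- Back up the transaction log; once this completes, the inactive
-- portion of the log becomes available for re-use.
BACKUP LOG MyDatabase
TO DISK = N'D:\Backups\MyDatabase_log.trn';  -- placeholder path
```

A real job would typically write to uniquely named (e.g. timestamped) files so the chain of log backups needed for point-in-time restore is preserved.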

If large inserts or deletes are happening

You have a few options here.

  • Consider breaking up large transactions into smaller batch jobs
  • Consider switching to Bulk Logged Recovery during the large bulk operations. Switching between Full and Bulk Logged will not break the transaction log backup chain, though note that you cannot restore to a point in time within a log backup that contains minimally logged operations.
  • Properly size your log file. For large transactions SQL has to be able to see logs back to the beginning of the transaction in case something goes wrong and it needs to recover. The size of your log file should accommodate the largest transaction, or the largest sum of concurrent transactions that regularly occur.
  • Properly size the autogrowth of your log file. If a transaction needs more space and autogrowth kicks in, any transaction that needs to write to the log must wait until the grow operation completes, which can degrade performance. And when the auto-growth increment is set too small, each growth creates more Virtual Log Files (VLFs), and too many VLFs makes database recovery take longer. To see how many VLFs are in your database, check how many records are returned after running this statement:
    USE MyDatabase;
    GO
    DBCC LOGINFO;
  • As a common rule of thumb, the VLF count should be under 500, depending on the size of your database. If it greatly exceeds that, take these steps:
    1. Take a backup of your transaction log (you may need to do this more than once)
    2. Shrink the log file
      USE MyDatabase;
      GO
      DBCC SHRINKFILE (MyDatabase_log, 1);
      GO
      -- shrink log file as small as it can go, preferably close to 1MB
      -- may have to shrink it gradually, in smaller chunks
    3. Re-grow the log file to a practical size, one that accommodates the largest transaction, or the largest sum of concurrent transactions, that regularly occurs. Depending on the size you want your log file to be, consider growing it in multiple chunks. Kimberly Tripp recommends growing your log file in 8GB chunks; read this for more info: http://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/
      USE MyDatabase
      GO
      ALTER DATABASE MyDatabase
      MODIFY FILE (NAME = MyDatabase_log, SIZE = 8000MB) 
      -- grow the log in multiple chunks (i.e. 8GB, then 16GB, etc.)
    4. Set auto grow size to larger chunks of space, so less VLFs are created during auto grows
      USE [master]
      GO
      ALTER DATABASE MyDatabase 
      MODIFY FILE(NAME = MyDatabase_log, FILEGROWTH = 1000MB)
      GO
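As a sketch of the Bulk Logged approach mentioned above (the database name is a placeholder), the recovery model switch wraps the bulk operation and is followed by a log backup:

```sql
-- Switch to BULK_LOGGED around a large bulk operation, then back to FULL.
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;

-- ... run the bulk insert / index rebuild here ...

ALTER DATABASE MyDatabase SET RECOVERY FULL;

-- Take a log backup afterwards so point-in-time recovery resumes cleanly.
BACKUP LOG MyDatabase
TO DISK = N'D:\Backups\MyDatabase_log.trn';  -- placeholder path
```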

If long-running transactions are open (index maintenance or bulk inserts)

There are a couple ways to deal with this situation.

  • Again, properly size your log file (see the steps above). Ideally, log files are created at the desired size when the database is created, rather than allowed to slowly grow over time. If this was not done initially, then shrinking the log file and manually re-growing it to the desired size is your next option.
  • If possible, consider breaking up large transactions into smaller batch jobs
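As a sketch of the batching approach, a large delete can be broken into small transactions so that regular log backups can clear log space between batches (the table and column names here are hypothetical):

```sql
-- Delete in small batches so each transaction's log records can be
-- freed by regular log backups, instead of accumulating in one
-- huge open transaction that pins the log.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.BigTable   -- placeholder table
    WHERE CreatedDate < '2010-01-01';     -- placeholder predicate
    SET @rows = @@ROWCOUNT;
END
```

The batch size (5000 here) is a tuning choice: large enough to make progress, small enough that each transaction commits quickly and its log records can be reused.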

Conclusion

Being proactive in managing your SQL environment is always recommended for preventing problems, but occasionally things happen which require us to deal with issues. Instead of creating your database log file with a small amount of space and allowing it to automatically grow in small amounts, create it with enough space from the start, so that it only grows on rare occasions. Properly managing your database file sizes and auto growth will help ensure your disk space doesn’t run out, and will also help improve the performance of your database.
