Transaction log bigger than data file

  • Guys, the response from my database is very slow. I have checked all the settings, but I suspect the reason could be that the log file is very big: 10 GB, as opposed to the data file's 4 GB.

    Any help will be appreciated.


    Duzit

  • Duzit,

    How often do you back up your system?  If you don't, you can run the code below to clean up the log. HOWEVER, please be aware that IF this is a production system then you may need to research backing up the database and log, which will preserve your data (if this is still within your company's policy).

    BACKUP LOG TestDB WITH NO_LOG

    DBCC SHRINKFILE (TestDB_Log, 1000)

    The 1st command dumps the completed transactions out of your T-LOG (it truncates them rather than backing them up), and the 2nd will shrink the physical file down to 1 GB, or whatever size you tell it.
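
    If you do need to keep that data, something along these lines would back up the database and then the log to disk first (the database name and paths here are just examples):

    -- Full database backup, then a log backup (example names and paths)
    BACKUP DATABASE TestDB TO DISK = 'D:\Backups\TestDB_Full.bak' WITH INIT
    BACKUP LOG TestDB TO DISK = 'D:\Backups\TestDB_Log.trn' WITH INIT

    Once the log has been backed up you can still run the DBCC SHRINKFILE above to reclaim the disk space.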

    Good Luck



    Good Hunting!

    AJ Ahrens


    webmaster@kritter.net

  • It sounds like you don't back up your transaction log. Have you ever done a full backup? If not, DO IT NOW!!!

    If you have done a full backup, back up the transaction log. That frees up space inside the log, though it won't shrink the physical file immediately. If you want to shrink it right away, you can run BACKUP LOG dbname WITH TRUNCATE_ONLY (that can be run even if you just backed up the log); you should immediately do a full backup after truncating the log. Or you can shrink the file using the DBCC SHRINKFILE command.
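
    For what it's worth, a rough sketch of that sequence (the database name, logical log file name, target size, and backup path are just examples):

    -- Truncate the log, shrink the physical file, then take a fresh full backup
    BACKUP LOG TestDB WITH TRUNCATE_ONLY
    DBCC SHRINKFILE (TestDB_Log, 1000)
    BACKUP DATABASE TestDB TO DISK = 'D:\Backups\TestDB_Full.bak' WITH INIT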

    -SQLBill

  • If you do back up the transaction log, maybe you have another job running in a maintenance plan. Look to see if you are running DBCC DBREINDEX regularly under the Optimisation tab of the maintenance plan. The job is:

    "Reorganize data and index pages." This can significantly grow the transaction log, especially if you have a 100% fill factor on your indexes.
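
    For reference, that task essentially runs something like the following for each table (the table name and fill factor below are just an illustration):

    -- Rebuild all indexes on one table, specifying the fill factor explicitly
    DBCC DBREINDEX ('dbo.Orders', '', 90)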


    ------------------------------
    The Users are always right - when I'm not wrong!

  • Jonathon - I'm interested in your comment about reorganising data and index pages.  I'm seeing a similar problem with our logs and data, with the logs twice the size of the data.  We're changing the log backups to an intraday process, so that should help.  However, let me give you my example:

    I update one table with 500,000 rows daily.  This table has 8+ indexes on it (and about 100 fields).  Would the number of indexes contribute to the volume of transaction log we're seeing?  We're reviewing the indexes, but this is an incremental process as the product is large.

    Any help would be great.

    Thanks,

    Steve

  • Steve, the number of indexes will definitely contribute to growth. The only way I have ever found to pin down the main culprits is to take a copy of the database BEFORE you run any reorganisation jobs and then reindex each index in turn whilst monitoring the stats. I know this is cumbersome, but it will give an explicit result to work with.
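
    Something along these lines, with made-up table and index names, lets you see how much log each index generates:

    -- Check log usage, rebuild a single index, then check again
    DBCC SQLPERF (LOGSPACE)
    DBCC DBREINDEX ('dbo.Orders', IX_Orders_CustomerID)
    DBCC SQLPERF (LOGSPACE)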

    However, even if you find the main offender there is not a lot you can do other than index less frequently or expand the fill factor. What Microsoft need to do is give the option of reducing the amount of logging during re-indexing. A fellow DBA received a negative reply when he raised this as an issue with them.

    I have tried setting the db to bulk-logged recovery mode during reindexing, but this only cuts down the log by about 25%. Every little helps, I suppose.
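
    If anyone wants to try that, the switch is along these lines (the database and table names are just examples):

    -- Switch to bulk-logged recovery for the rebuild, then switch back
    ALTER DATABASE TestDB SET RECOVERY BULK_LOGGED
    DBCC DBREINDEX ('dbo.Orders')
    ALTER DATABASE TestDB SET RECOVERY FULL
    -- Note: point-in-time restore is lost for the log backup that covers the
    -- bulk-logged work, so take a log (or full) backup straight afterwards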


    ------------------------------
    The Users are always right - when I'm not wrong!

  • Thanks Jonathan; just another quick one on this - is a REINDEX more log-intensive than a DEFRAG, or do they have the same footprint with regard to the transaction log?

  • Books Online says:

    QUOTE: Unlike DBCC DBREINDEX (or the index building operation in general), DBCC INDEXDEFRAG is an online operation. It does not hold locks long term and thus will not block running queries or updates. A relatively unfragmented index can be defragmented faster than a new index can be built because the time to defragment is related to the amount of fragmentation. A very fragmented index might take considerably longer to defragment than to rebuild. In addition, the defragmentation is always fully logged, regardless of the database recovery model setting (see ALTER DATABASE). The defragmentation of a very fragmented index can generate more log than even a fully logged index creation. The defragmentation, however, is performed as a series of short transactions and thus does not require a large log if log backups are taken frequently or if the recovery model setting is SIMPLE.

    Seems to be the same cost, if not more. However, as I said earlier, switching to bulk-logged mode does not save a lot of logging anyway.
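
    If you do go the INDEXDEFRAG route, the trick in that last sentence is pairing it with frequent log backups - roughly like this (the names and path are made up):

    -- Defragment one index, then back up the log so the space can be reused
    DBCC INDEXDEFRAG (TestDB, Orders, IX_Orders_CustomerID)
    BACKUP LOG TestDB TO DISK = 'D:\Backups\TestDB_Log.trn'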

    Better the devil you know..


    ------------------------------
    The Users are always right - when I'm not wrong!
