SQLServerCentral is supported by Red Gate Software Ltd.

Transaction log bigger than data file
Posted Wednesday, April 28, 2004 2:58 AM
Forum Newbie


Group: General Forum Members
Last Login: Tuesday, May 11, 2004 1:58 AM
Points: 4, Visits: 1
Guys, the response from my database is very slow. I have checked all settings but I suspect the reason could be that the log file is very big - 10G as opposed to the data file's 4G.

Any help will be appreciated.



Duzit
Post #113369
Posted Wednesday, April 28, 2004 5:01 AM


SSCommitted


Group: General Forum Members
Last Login: Friday, July 22, 2011 5:35 AM
Points: 1,758, Visits: 9

Duzit,

How often do you back up your system?  If you don't, you can run the code below to clean up the log. HOWEVER, please be aware that IF this is a production system, you may need to research backing up the database and log, which will preserve your data (if this is still within your company's policy).

BACKUP LOG TestDB WITH NO_LOG

DBCC SHRINKFILE (TestDB_Log, 1000)

The first command issues a backup "DUMP" of completed transactions from your T-LOG, and the second shrinks the physical file down to 1 GB, or whatever size you tell it (IF you want).
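If this is production, a sketch of the safer route mentioned above - back up the database and the log properly before shrinking, so the backup chain and your data are preserved. The database name, logical log file name, and disk paths below are illustrative only:

```sql
-- Full database backup first, then a log backup (SQL Server 2000 syntax).
-- Names and paths are placeholders; check sysfiles for the real logical
-- name of the log file.
BACKUP DATABASE TestDB
    TO DISK = 'D:\Backups\TestDB_full.bak'

BACKUP LOG TestDB
    TO DISK = 'D:\Backups\TestDB_log.trn'

-- Once the log has been backed up, the inactive portion is free and the
-- physical file can be shrunk. The target size is in MB (1000 = ~1 GB).
DBCC SHRINKFILE (TestDB_Log, 1000)
```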

Good Luck





Good Hunting!

AJ Ahrens


webmaster@kritter.net
Post #113388
Posted Wednesday, April 28, 2004 7:50 AM
Hall of Fame


Group: General Forum Members
Last Login: Friday, April 11, 2014 1:57 PM
Points: 3,240, Visits: 491

It sounds like you don't back up your transaction log. Have you ever done a full backup? If not, DO IT NOW!!!

If you have done a full backup, back up the transaction log. That frees the inactive space inside the log (the physical file doesn't shrink right away, though). If you want to clear it immediately, you can BACKUP LOG dbname WITH TRUNCATE_ONLY (that can be run even if you just backed up the log). You should immediately do a full backup after truncating the log. To reclaim the disk space itself, shrink the file using the DBCC SHRINKFILE command.
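The steps above, as a sketch (SQL Server 2000 syntax; MyDB and the log file's logical name are placeholders):

```sql
-- Discard the inactive log records without writing a backup file.
BACKUP LOG MyDB WITH TRUNCATE_ONLY

-- Shrink the physical log file; the target (500) is in MB.
DBCC SHRINKFILE (MyDB_Log, 500)

-- TRUNCATE_ONLY breaks the log backup chain, so take a full backup
-- immediately afterwards (path is illustrative).
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak'
```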

-SQLBill




Post #113437
Posted Thursday, April 29, 2004 10:00 AM
SSC Eights!


Group: General Forum Members
Last Login: Friday, August 03, 2007 2:55 AM
Points: 928, Visits: 1
If you do back up the transaction log, maybe you have another job running in a maintenance plan. Look to see whether you are regularly running DBCC DBREINDEX under the Optimisation tab of the maintenance plan. The job is:

"Reorganize data and index pages". This can significantly grow the transaction log, especially if you have a 100% fill factor on your indexes.
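For context, that maintenance-plan job runs a rebuild equivalent to DBCC DBREINDEX. A manual call looks like this (SQL Server 2000 syntax; table and index names are examples only):

```sql
-- Rebuild all indexes on a table with a 90% fill factor, leaving 10%
-- free space per page to reduce page splits on subsequent updates.
DBCC DBREINDEX ('dbo.Orders', '', 90)

-- Or rebuild a single index on the table.
DBCC DBREINDEX ('dbo.Orders', 'IX_Orders_CustomerID', 90)
```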



------------------------------
The Users are always right - when I'm not wrong!
Post #113709
Posted Tuesday, May 04, 2004 4:25 AM
SSC Rookie


Group: General Forum Members
Last Login: Friday, December 07, 2012 6:43 AM
Points: 49, Visits: 94

Jonathan - I'm interested in your comment about reorganising data and index pages.  I'm seeing a similar problem with our logs and data - the logs are twice the size of the data. We're changing the log backups to an intraday process, so that should help. However, let me give you my example:

I update one table with 500,000 rows daily. This table has 8+ indexes on it (about 100 fields). Would the number of indexes contribute to the volume of transaction log we're seeing? We're reviewing the indexes, but this is an incremental process as the product is large.

Any help would be great.

Thanks,

Steve

Post #114223
Posted Wednesday, May 05, 2004 5:42 AM
SSC Eights!


Group: General Forum Members
Last Login: Friday, August 03, 2007 2:55 AM
Points: 928, Visits: 1
Steve, the number of indexes will definitely contribute to growth. The only way I have ever found to identify the main culprits is to take a copy of the database BEFORE running any reorganisation jobs and then reindex each index in turn while monitoring the stats. I know this is cumbersome, but it gives an explicit result to work with.

However, even if you find the main offender, there is not a lot you can do other than reindex less frequently or adjust the fill factor. What Microsoft need to do is give the option of reducing the amount of logging during re-indexing. A fellow DBA received a negative reply when he raised this as an issue with them.

I have tried setting the database to the bulk-logged recovery model during reindexing, but this only cuts the log down by about 25%. Every little helps, I suppose.
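For anyone wanting to try the same thing, a sketch of the switch (SQL Server 2000 syntax; database and table names are illustrative):

```sql
-- Switch to the bulk-logged recovery model for the duration of the rebuild.
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED

DBCC DBREINDEX ('dbo.BigTable')   -- rebuild all indexes on the table

-- Switch back to full recovery afterwards.
ALTER DATABASE MyDB SET RECOVERY FULL

-- Back up the log straight away: a log backup taken over a bulk-logged
-- window limits point-in-time restore across that window.
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn'
```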



------------------------------
The Users are always right - when I'm not wrong!
Post #114462
Posted Wednesday, May 05, 2004 7:01 AM
SSC Rookie


Group: General Forum Members
Last Login: Friday, December 07, 2012 6:43 AM
Points: 49, Visits: 94

Thanks Jonathan; just another quick one on this - is a REINDEX more log-intensive than a DEFRAG, or do they have the same footprint with regard to the transaction log?

Post #114480
Posted Wednesday, May 05, 2004 9:54 AM
SSC Eights!


Group: General Forum Members
Last Login: Friday, August 03, 2007 2:55 AM
Points: 928, Visits: 1
Books Online says:
QUOTE: Unlike DBCC DBREINDEX (or the index building operation in general), DBCC INDEXDEFRAG is an online operation. It does not hold locks long term and thus will not block running queries or updates. A relatively unfragmented index can be defragmented faster than a new index can be built because the time to defragment is related to the amount of fragmentation. A very fragmented index might take considerably longer to defragment than to rebuild. In addition, the defragmentation is always fully logged, regardless of the database recovery model setting (see ALTER DATABASE). The defragmentation of a very fragmented index can generate more log than even a fully logged index creation. The defragmentation, however, is performed as a series of short transactions and thus does not require a large log if log backups are taken frequently or if the recovery model setting is SIMPLE.

Seems to be the same cost, if not more. However, as I said earlier, switching to bulk-logged mode does not save a lot of logging anyway.
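For reference, the two operations being compared look like this (SQL Server 2000 syntax; the database, table, and index names are illustrative):

```sql
-- Offline rebuild: holds locks, but can be minimally logged under the
-- BULK_LOGGED or SIMPLE recovery models.
DBCC DBREINDEX ('dbo.Orders', 'IX_Orders_CustomerID', 90)

-- Online defragment: short transactions, does not block queries, but is
-- always fully logged regardless of recovery model (per the quote above).
DBCC INDEXDEFRAG (MyDB, 'dbo.Orders', 'IX_Orders_CustomerID')
```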

Better the devil you know..



------------------------------
The Users are always right - when I'm not wrong!
Post #114525