
How to manage Transaction Log
Posted Tuesday, July 31, 2012 12:56 PM
Grasshopper


Group: General Forum Members
Last Login: Wednesday, February 26, 2014 2:53 PM
Points: 19, Visits: 157
Hi All,

Is there a way we can manage the transaction log before it fills up? I have some archive tables, and my requirement is to add fields to these tables and populate the existing records with certain default values. Unfortunately, halfway through the update queries it throws an error saying the transaction log is full. The reason is probably the number of records (millions) and the updates being done. I was wondering whether I can write my update query in such a way that it doesn't log each and every change once the update for a field is done.

Thanks in advance.
Post #1338108
Posted Tuesday, July 31, 2012 1:25 PM
SSCertifiable


Group: General Forum Members
Last Login: Today @ 1:21 PM
Points: 5,975, Visits: 12,883
ALL transactions are logged; there is no way to prevent that. You will either have to size your transaction log so it can cope with the update or, preferably, batch the update so it commits at regular intervals, allowing the committed transactions to be cleared from the log.
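For example, a batched update might look something like this (table and column names are made up; adjust the batch size and WHERE clause to your own archive table):

-- Hypothetical example: populate the new column in batches so each chunk
-- commits on its own and the committed log records can be cleared
-- (by checkpoint in SIMPLE recovery, or by log backups in FULL recovery).
DECLARE @batch int = 50000;

WHILE 1 = 1
BEGIN
    UPDATE TOP (@batch) dbo.ArchiveTable      -- hypothetical table
    SET    NewColumn = 'default value'        -- hypothetical new column
    WHERE  NewColumn IS NULL;                 -- only rows not yet populated

    IF @@ROWCOUNT = 0 BREAK;                  -- nothing left to update

    -- In FULL recovery, take a log backup between batches so the
    -- committed portion of the log can be truncated:
    -- BACKUP LOG YourDb TO DISK = N'X:\Backups\YourDb.trn';
END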

---------------------------------------------------------------------

Post #1338132
Posted Tuesday, July 31, 2012 1:48 PM


Ten Centuries


Group: General Forum Members
Last Login: Today @ 2:06 PM
Points: 1,279, Visits: 2,212
Is this a one-time thing?

You could do a full backup, change the recovery model for the database to SIMPLE, do your updates, change recovery back to FULL, do another full backup, and back up your transaction log.

Setting the database to SIMPLE recovery model will minimize the amount of logging done and clear transactions from the log once they are committed so the space can be reused, but you won't be able to restore your database to a point in time using transaction log backups until you change it back and take a full backup and a transaction log backup.
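Roughly, that sequence might look like this (database name and backup paths are placeholders):

-- Placeholder names throughout; adjust for your environment.
BACKUP DATABASE YourDb TO DISK = N'X:\Backups\YourDb_before.bak';

ALTER DATABASE YourDb SET RECOVERY SIMPLE;

-- ... run the updates here ...

ALTER DATABASE YourDb SET RECOVERY FULL;

-- A full backup restarts the log backup chain, then log backups can resume.
BACKUP DATABASE YourDb TO DISK = N'X:\Backups\YourDb_after.bak';
BACKUP LOG YourDb TO DISK = N'X:\Backups\YourDb_after.trn';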

Post #1338146
Posted Tuesday, July 31, 2012 2:00 PM


SSC-Forever


Group: General Forum Members
Last Login: Today @ 3:46 PM
Points: 42,462, Visits: 35,525
sestell1 (7/31/2012)
Setting the database to SIMPLE recovery model will minimize the amount of logging done


Won't help here. Updates are fully logged in all recovery models. A single update uses the same amount of log space in simple or full recovery.
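If you want to see that for yourself, a rough check on a test copy of the database is to look at log usage while the update's transaction is still open (table and column names are placeholders):

-- The log written by a single open transaction cannot be truncated in any
-- recovery model, so the growth is roughly the same in SIMPLE or FULL.
DBCC SQLPERF(LOGSPACE);               -- note "Log Space Used (%)" for the test DB

BEGIN TRAN;
UPDATE dbo.ArchiveTable               -- hypothetical table
SET    NewColumn = 'default value';   -- hypothetical column

DBCC SQLPERF(LOGSPACE);               -- compare before committing
COMMIT;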



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1338157
Posted Tuesday, July 31, 2012 2:11 PM
SSC Veteran


Group: General Forum Members
Last Login: Today @ 4:45 PM
Points: 252, Visits: 1,364
george sibbald (7/31/2012)
ALL transactions are logged; there is no way to prevent that. You will either have to size your transaction log so it can cope with the update or, preferably, batch the update so it commits at regular intervals, allowing the committed transactions to be cleared from the log.



I agree with George; batching the update is probably the best solution, though you really want to make sure you're providing enough space for your files. The last thing you want is to crash your server and/or corrupt your database because you ran out of drive space.


This should get you started.
http://www.sqlusa.com/bestpractices2005/hugeupdate/
http://www.codeproject.com/Articles/12802/Batch-Update-on-a-very-huge-table
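As a rough pre-flight check before a big update, you can compare the log file's size and growth limit with the free space on its volume; this assumes sys.dm_os_volume_stats is available (SQL Server 2008 R2 SP1 and later):

-- Log file size, max size, and free space on the drive holding the log.
SELECT  f.name,
        f.physical_name,
        size_mb        = f.size / 128,
        max_size_mb    = CASE f.max_size WHEN -1 THEN NULL ELSE f.max_size / 128 END, -- NULL = unlimited growth
        volume_free_mb = vs.available_bytes / 1024 / 1024
FROM    sys.database_files AS f
CROSS APPLY sys.dm_os_volume_stats(DB_ID(), f.file_id) AS vs
WHERE   f.type_desc = 'LOG';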
Post #1338165
Posted Tuesday, July 31, 2012 2:24 PM


Ten Centuries


Group: General Forum Members
Last Login: Today @ 2:06 PM
Points: 1,279, Visits: 2,212
GilaMonster (7/31/2012)
sestell1 (7/31/2012)
Setting the database to SIMPLE recovery model will minimize the amount of logging done


Won't help here. Updates are fully logged in all recovery models. A single update uses the same amount of log space in simple or full recovery.


Sorry, I thought there were multiple updates being done.
If a single update is blowing out your transaction log, then yes, your only option is to break the update into batches of records.
Post #1338177
Posted Tuesday, July 31, 2012 2:25 PM
Grasshopper


Group: General Forum Members
Last Login: Wednesday, February 26, 2014 2:53 PM
Points: 19, Visits: 157
george sibbald (7/31/2012)
ALL transactions are logged; there is no way to prevent that. You will either have to size your transaction log so it can cope with the update or, preferably, batch the update so it commits at regular intervals, allowing the committed transactions to be cleared from the log.


Thanks a lot, I was thinking along the same lines. But I wanted to explore any other options, since I cannot change anything on the database (no permissions).
Post #1338179
Posted Tuesday, July 31, 2012 2:40 PM


SSC-Forever


Group: General Forum Members
Last Login: Today @ 3:46 PM
Points: 42,462, Visits: 35,525
Update in chunks, maybe 50,000 rows at a time.


Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1338187
Posted Wednesday, August 1, 2012 1:53 PM


Mr or Mrs. 500


Group: General Forum Members
Last Login: Today @ 4:49 AM
Points: 556, Visits: 2,581
prasadau2006 (7/31/2012)
Hi All,

Is there a way we can manage the transaction log before it fills up? I have some archive tables, and my requirement is to add fields to these tables and populate the existing records with certain default values. Unfortunately, halfway through the update queries it throws an error saying the transaction log is full. The reason is probably the number of records (millions) and the updates being done. I was wondering whether I can write my update query in such a way that it doesn't log each and every change once the update for a field is done.

Thanks in advance.


I dealt with a similar issue recently. We had a job that inserted a couple billion rows/week and would explode the transaction log.

Doing the update in chunks is essential here.

The script we used looked something like this:

Note: This is a functioning template to get you started. It does not take into consideration things like multiple transaction log files, log shipping and/or replication, databases using the bulk-logged recovery model, etc.

First a table for this test:

USE ajbTest; -- your database
SET NOCOUNT ON;

-- Create a target table for testing
IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'target')
BEGIN
    CREATE TABLE [target]
    (
        ID   int,
        Data varchar(100),
        PRIMARY KEY (ID)
    );
END
GO



And the script:

/******************************************************************************************
Prepare variables
******************************************************************************************/
-- These would be parameters in a stored proc
DECLARE @sampleDataRows INT = 555000,
        @rowsPerCycle   INT = 50000,
        @logReset       INT = 1;

-- Startup message
DECLARE @startupMsg varchar(1000),
        @logsize    INT          = (SELECT size/128 FROM sys.database_files WHERE type=1),
        @maxlogsize INT          = (SELECT max_size/128 FROM sys.database_files WHERE type=1),
        @transLog   varchar(100) = (SELECT physical_name FROM sys.database_files WHERE type=1),
        @logName    varchar(100) = (SELECT name FROM sys.database_files WHERE type=1),
        @dynamicTruncate varchar(200) = 'alter database ' + DB_NAME() + ' set recovery SIMPLE;';

DECLARE @pctLogLeft varchar(6) =
        CAST(CAST(CAST(@logsize AS float)/CAST(@maxlogsize AS float)*100 AS decimal(10,2)) AS varchar(5))+'%';

SET @startupMsg = 'Starting large insert into {Your DB}...'+CHAR(13)+CHAR(13);
SET @startupMsg = @startupMsg+'Current Transaction Log stats for '+DB_NAME()+':'+CHAR(13);
SET @startupMsg = @startupMsg+' Transaction Log File - '+@transLog+CHAR(13);
SET @startupMsg = @startupMsg+' Transaction Log Capacity - '+CAST(@maxlogsize AS varchar(20))+'MB'+CHAR(13);
SET @startupMsg = @startupMsg+' Transaction Log Size - '+CAST(@logsize AS varchar(20))+'MB';
SET @startupMsg = @startupMsg+' ('+CAST(@maxlogsize-@logsize AS varchar(20));
SET @startupMsg = @startupMsg+'MB remaining, '+@pctLogLeft+' of capacity)'+CHAR(13)+CHAR(13);
SET @startupMsg = @startupMsg+'Inserting rows into {your table}...';

PRINT @startupMsg;

/******************************************************************************************
Create a sample source table and populate it with sample data
******************************************************************************************/
DECLARE @sourceData TABLE
(
    ID   INT,
    Data varchar(100),
    PRIMARY KEY (ID)
);

WITH sampledata (ID, Data) AS
(
    SELECT 1 [ID],
           SUBSTRING(CONVERT(varchar(40), NEWID()),0,9)+
           SUBSTRING(CONVERT(varchar(40), NEWID()),0,9) [Data]
    UNION ALL
    SELECT ID+1,
           SUBSTRING(CONVERT(varchar(40), NEWID()),0,9)+
           SUBSTRING(CONVERT(varchar(40), NEWID()),0,9)
    FROM sampledata
    WHERE ID < @sampleDataRows
)
INSERT INTO @sourceData
SELECT ID, Data FROM sampledata
OPTION (MAXRECURSION 0);

-- Check to see if there is anything to update
IF NOT EXISTS
(
    SELECT a.ID, a.Data FROM @sourceData a
    LEFT JOIN [target] b ON a.ID = b.ID
    WHERE b.ID IS NULL
)
    PRINT CHAR(13)+' ... Nothing to update. Yay! (maybe)';

/******************************************************************************************
Begin the insert
******************************************************************************************/
BEGIN
    -- SET ROWCOUNT caps how many rows each INSERT below affects per iteration.
    -- (Deprecated for DML in newer versions; TOP (n) is the modern equivalent.)
    SET ROWCOUNT @rowsPerCycle;

    WHILE EXISTS
    (
        SELECT a.ID, a.Data FROM @sourceData a
        LEFT JOIN [target] b ON a.ID = b.ID
        WHERE b.ID IS NULL
    )
    BEGIN
        -- 1st, check if the trans log is > 50% of capacity and shrink it if so.
        -- This is checked on each iteration.
        IF @logsize > @maxlogsize/2
        BEGIN
            PRINT CHAR(13)+'The trans log needs to be shrunk. Shrinking to '+
                  CAST(@logReset AS varchar(10))+'MB...';

            IF (SELECT recovery_model_desc FROM sys.databases WHERE name = DB_NAME()) = 'FULL'
            BEGIN
                EXEC(@dynamicTruncate);

                DBCC SHRINKFILE (@logName, @logReset);

                SET @dynamicTruncate = REPLACE(@dynamicTruncate,'SIMPLE','FULL');
                EXEC(@dynamicTruncate);
            END
            ELSE
                DBCC SHRINKFILE (@logName, @logReset); -- logical file name, not the physical path
        END

        BEGIN TRAN;
            INSERT INTO [target]
            SELECT a.ID, a.Data FROM @sourceData a
            LEFT JOIN [target] b ON a.ID = b.ID
            WHERE b.ID IS NULL;

            PRINT ' '+CAST(@@ROWCOUNT AS VARCHAR)+' rows inserted.';
        COMMIT;

        SET @logsize = (SELECT size/128 FROM sys.database_files WHERE type=1);
        PRINT ' Log Size: '+CAST(@logsize AS varchar(20))+'MB';
    END
END

SET ROWCOUNT 0;

PRINT CHAR(13)+'All Done.';

How it works:

The variables at the top would be parameters in a stored proc.
@sampleDataRows - the # of rows to use for a test insert
@rowsPerCycle - the # of rows to insert per batch
@logReset - the size in MB to which the trans log is reset.

The data in the target table is compared to the source table. If records exist in the source table that do not exist in the target, the process begins:

First, the size of the transaction log is checked; if it is more than half of its capacity, the trans log is shrunk. If the recovery model is FULL, it is first changed to SIMPLE before the shrink, then back to FULL afterwards.

IF (SELECT recovery_model_desc FROM sys.databases WHERE name = DB_NAME()) = 'FULL'
BEGIN
    EXEC(@dynamicTruncate);
    DBCC SHRINKFILE (@logName, @logReset);

    SET @dynamicTruncate = REPLACE(@dynamicTruncate,'SIMPLE','FULL');
    EXEC(@dynamicTruncate);
END
ELSE
    DBCC SHRINKFILE (@logName, @logReset);

Next, the insert runs in chunks (defined by @rowsPerCycle):


BEGIN TRAN;
    INSERT INTO [target]
    SELECT a.ID, a.Data FROM @sourceData a
    LEFT JOIN [target] b ON a.ID = b.ID
    WHERE b.ID IS NULL;

    PRINT ' '+CAST(@@ROWCOUNT AS VARCHAR)+' rows inserted.';
COMMIT;

The log file is checked before each iteration and shrunk as needed.

I included a number of PRINT statements which can be used for logging. For example, if you take this code and run it as a SQL Agent job, the job log will read like this:

Starting large insert into {Your DB}...

Current Transaction Log stats for ajbTest:
Transaction Log File - E:\SQL_Logs\MSSQL10.MSSQLSERVER\MSSQL\Data\ajbTest_log.ldf
Transaction Log Capacity - 40MB
Transaction Log Size - 36MB (4MB remaining, 90.00% of capacity)

Inserting rows into {your table}...

The trans log needs to be shrunk. Shrinking to 1MB...
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
50000 rows inserted.
Log Size: 12MB
50000 rows inserted.
Log Size: 19MB
50000 rows inserted.
Log Size: 19MB
50000 rows inserted.
Log Size: 19MB
50000 rows inserted.
Log Size: 19MB
50000 rows inserted.
Log Size: 19MB
50000 rows inserted.
Log Size: 23MB

The trans log needs to be shrunk. Shrinking to 1MB...
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
50000 rows inserted.
Log Size: 12MB
50000 rows inserted.
Log Size: 19MB
50000 rows inserted.
Log Size: 19MB
50000 rows inserted.
Log Size: 19MB
5000 rows inserted.
Log Size: 19MB

All Done.



-- Alan Burstein



Read this article for best practices on asking questions.
Need to split a string? Try this (Jeff Moden)
Need a pattern-based string splitter? Try this (Dwain Camps)
My blog
Post #1338787
Posted Wednesday, August 1, 2012 2:09 PM


Ten Centuries


Group: General Forum Members
Last Login: Today @ 2:06 PM
Points: 1,279, Visits: 2,212
There shouldn't be any need to shrink the log file while doing the inserts. This just adds extra overhead as the log has to auto-grow back out as records are inserted.

It probably would have been a lot easier if you had just set the recovery model to SIMPLE initially and left it until after you are done inserting; that way, after each batch of records is committed, the log space used is cleared and can be reused for the next insert.

After you are done, shrink the log down, change the recovery model to FULL, and do a full backup and log backup.
Post #1338801