How to manage Transaction Log.


prasadau2006
Hi All,

Is there a way we can manage the transaction log even before it fills up? I have some archive tables; my requirement is to add fields to these tables and default the existing records to certain default values. Unfortunately, halfway through the update queries it throws an error saying the transaction log is full. The reason is probably the number of records (millions) and the updates it is doing. I was wondering whether I can write my update query in such a way that it doesn't log each and every change, even after the update for the field is done.

Thanks in advance.
george sibbald
ALL transactions are logged; there is no way to prevent that. You will either have to size your transaction log so it can cope with the update or, preferably, batch the update so it commits at regular intervals and the committed transactions can be cleared from the log.
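
For illustration, a minimal sketch of that kind of batched update; the table and column names are hypothetical and the batch size is only an example:

-- Hypothetical table/column names; updates up to 50,000 rows per committed batch.
DECLARE @BatchSize int = 50000,
        @Rows      int = 1;

WHILE @Rows > 0
BEGIN
    BEGIN TRAN;

    UPDATE TOP (@BatchSize) dbo.ArchiveTable
    SET    NewColumn = 'DefaultValue'
    WHERE  NewColumn IS NULL;          -- only rows not yet defaulted

    SET @Rows = @@ROWCOUNT;

    COMMIT;

    -- In FULL recovery, take log backups between batches so the committed
    -- log records can actually be cleared; in SIMPLE, a checkpoint is enough.
END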

sestell1
sestell1
Hall of Fame
Hall of Fame (3.5K reputation)Hall of Fame (3.5K reputation)Hall of Fame (3.5K reputation)Hall of Fame (3.5K reputation)Hall of Fame (3.5K reputation)Hall of Fame (3.5K reputation)Hall of Fame (3.5K reputation)Hall of Fame (3.5K reputation)

Group: General Forum Members
Points: 3543 Visits: 3510
Is this a one-time thing?

You could do a full backup, change the database's recovery model to SIMPLE, do your updates, change recovery back to FULL, do another full backup, and back up your transaction log.

Setting the database to SIMPLE recovery model will minimize the amount of logging done and clear transactions from the log once they are committed so the space can be reused. However, you won't be able to restore your database to a point in time using transaction log backups until you change it back and do a full backup and a transaction log backup.
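
A sketch of that sequence, with a hypothetical database name and backup paths (note the caveat in the next reply about a single large update being fully logged regardless of recovery model):

-- Hypothetical database name and paths; run during a maintenance window.
BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_before.bak';

ALTER DATABASE MyDb SET RECOVERY SIMPLE;

-- ... run the (batched) updates here ...

ALTER DATABASE MyDb SET RECOVERY FULL;

BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_after.bak';
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_after.trn';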
GilaMonster
sestell1 (7/31/2012)
Setting the database to SIMPLE recovery model will minimize the amount of logging done


Won't help here. Updates are fully logged in all recovery models. A single update uses the same amount of log space in simple or full recovery
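
For anyone curious, one way to see this for yourself is to watch the log bytes used by an open transaction via sys.dm_tran_database_transactions. A rough sketch with a hypothetical table (requires VIEW SERVER STATE); run it once under SIMPLE and once under FULL and compare:

-- Measure how much log a single update generates, then undo it.
BEGIN TRAN;

UPDATE dbo.ArchiveTable
SET    NewColumn = 'DefaultValue';

SELECT database_transaction_log_bytes_used
FROM   sys.dm_tran_database_transactions
WHERE  database_id = DB_ID();

ROLLBACK;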

Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass


benjamin.reyes
george sibbald (7/31/2012)
ALL transactions are logged; there is no way to prevent that. You will either have to size your transaction log so it can cope with the update or, preferably, batch the update so it commits at regular intervals and the committed transactions can be cleared from the log.



I agree with George: batching the update is probably the best solution, though you really want to make sure you're providing enough space for your files. The last thing you want is to crash your server and/or corrupt your database because you ran out of drive space.


This should get you started.
http://www.sqlusa.com/bestpractices2005/hugeupdate/
http://www.codeproject.com/Articles/12802/Batch-Update-on-a-very-huge-table
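
As an aside, a quick way to check the log file's size and usage before and during a run like this (standard commands, shown for the current database):

-- Log size and percent used for every database on the instance
DBCC SQLPERF (LOGSPACE);

-- Or just the current database's log file(s); size is in 8 KB pages
SELECT name,
       size / 128 AS size_mb,
       CASE max_size WHEN -1 THEN NULL ELSE max_size / 128 END AS max_size_mb
FROM   sys.database_files
WHERE  type_desc = 'LOG';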
sestell1
GilaMonster (7/31/2012)
sestell1 (7/31/2012)
Setting the database to SIMPLE recovery model will minimize the amount of logging done


Won't help here. Updates are fully logged in all recovery models. A single update uses the same amount of log space in simple or full recovery


Sorry, I thought there were multiple updates being done.
If a single update is blowing out your transaction log, then yeah, your only option is to break the update into batches of records.
prasadau2006
george sibbald (7/31/2012)
ALL transactions are logged; there is no way to prevent that. You will either have to size your transaction log so it can cope with the update or, preferably, batch the update so it commits at regular intervals and the committed transactions can be cleared from the log.


Thanks a lot, I was thinking about the same thing, but I wanted to explore any other options whatsoever, since I cannot change anything on the database (no permissions).
GilaMonster
Update in chunks, maybe 50 000 rows at a time.

Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass


Alan.B
prasadau2006 (7/31/2012)
Hi All,

Is there a way we can manage the transaction log even before it fills up? I have some archive tables; my requirement is to add fields to these tables and default the existing records to certain default values. Unfortunately, halfway through the update queries it throws an error saying the transaction log is full. The reason is probably the number of records (millions) and the updates it is doing. I was wondering whether I can write my update query in such a way that it doesn't log each and every change, even after the update for the field is done.

Thanks in advance.


I dealt with a similar issue recently. We had a job that inserted a couple billion rows/week and would explode the transaction log.

Doing the update in chunks is essential here.

The script we used looked something like this:

Note: This is a functioning template to get you started. It does not take into consideration things like multiple transaction log files, log shipping and/or replication, databases using the BULK_LOGGED recovery model, etc.

First a table for this test:


USE ajbTest; --your database
SET NOCOUNT ON;

-- Create a target table for testing
IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'target')
BEGIN
    CREATE TABLE [target]
    (
        ID   int,
        Data varchar(100),
        PRIMARY KEY (ID)
    );
END
GO



And the script:


/******************************************************************************************
Prepare variables
******************************************************************************************/
-- These would be parameters in a stored proc
DECLARE @sampleDataRows INT=555000,
@rowsPerCycle INT=50000,
@logReset INT=1

-- startup message
DECLARE @startupMsg varchar(1000),
@logsize INT=(SELECT size/128 FROM sys.database_files WHERE type=1),
@maxlogsize INT=(SELECT max_size/128 FROM sys.database_files WHERE type=1),
@transLog varchar(100)=(SELECT physical_name FROM sys.database_files WHERE type=1),
@logName varchar(100)=(SELECT name FROM sys.database_files WHERE type=1),
@dynamicTruncate varchar(200) = 'alter database ' + DB_NAME() + ' set recovery SIMPLE;'

DECLARE @pctLogLeft varchar(6) =
CAST(CAST(CAST(@logsize AS float)/CAST(@maxlogsize AS float)*100 AS decimal(10,2)) AS varchar(5))+'%'

SET @startupMsg='Starting large insert into {Your DB}...'+CHAR(13)+CHAR(13)
SET @startupMsg=@startupMsg+'Current Transaction Log stats for '+DB_NAME()+':'+CHAR(13)
SET @startupMsg=@startupMsg+' Transaction Log File - '+@transLog+CHAR(13)
SET @startupMsg=@startupMsg+' Transaction Log Capacity - '+CAST(@maxlogsize AS varchar(20))+'MB'+CHAR(13)
SET @startupMsg=@startupMsg+' Transaction Log Size - '+CAST(@logsize AS varchar(20))+'MB'
SET @startupMsg=@startupMsg+' ('+CAST(@maxlogsize-@logsize AS varchar(20))
SET @startupMsg=@startupMsg+'MB remaining, '+@pctLogLeft+' of capacity)'+CHAR(13)+CHAR(13)
SET @startupMsg=@startupMsg+'Inserting rows into {your table}...'

PRINT @StartupMsg

/******************************************************************************************
Create sample source and target tables, populate with sample data
******************************************************************************************/
DECLARE @sourceData TABLE
(
    ID   INT,
    Data varchar(100),
    PRIMARY KEY (ID)
);

WITH sampledata (ID,Data) AS
(
SELECT 1 [ID],
SUBSTRING(CONVERT(varchar(40), NEWID()),0,9)+
SUBSTRING(CONVERT(varchar(40), NEWID()),0,9) [Data]
UNION ALL
SELECT ID+1,
SUBSTRING(CONVERT(varchar(40), NEWID()),0,9)+
SUBSTRING(CONVERT(varchar(40), NEWID()),0,9)
FROM sampledata
WHERE ID < @sampleDataRows
)
INSERT INTO @sourceData
SELECT ID, Data FROM sampledata
OPTION (MAXRECURSION 0);

-- Check to see if there is anything to update
IF NOT EXISTS
(
SELECT a.ID, a.Data FROM @sourceData a
LEFT JOIN [target] b ON a.ID=b.ID
WHERE b.ID IS NULL
)
PRINT CHAR(13)+' ... Nothing to update. Yay! (maybe)'

/******************************************************************************************
-- Begin the insert
******************************************************************************************/
BEGIN
    -- SET ROWCOUNT limits each INSERT below to @rowsPerCycle rows
    -- (deprecated for DML in newer versions; TOP (n) is the modern equivalent)
    SET ROWCOUNT @rowsPerCycle
    WHILE EXISTS
    (
        SELECT a.ID, a.Data FROM @sourceData a
        LEFT JOIN [target] b ON a.ID = b.ID
        WHERE b.ID IS NULL
    )
    BEGIN
        -- First, check if the trans log is above 50% of capacity. Shrink it if so.
        -- This is checked on each iteration.
        IF @logsize > @maxlogsize/2
        BEGIN
            PRINT CHAR(13)+'The trans log needs to be shrunk. Shrinking to '+
                CAST(@logReset AS varchar(10))+'MB...'

            IF (SELECT recovery_model_desc FROM sys.databases WHERE name = DB_NAME()) = 'FULL'
            BEGIN
                EXEC(@dynamicTruncate);

                DBCC SHRINKFILE (@logName, @logReset);

                SET @dynamicTruncate = REPLACE(@dynamicTruncate,'SIMPLE','FULL');
                EXEC(@dynamicTruncate);
            END
            ELSE
                -- DBCC SHRINKFILE takes the logical file name (@logName), not the physical path
                DBCC SHRINKFILE (@logName, @logReset);
        END

        BEGIN TRAN
            INSERT INTO [target]
            SELECT a.ID, a.Data FROM @sourceData a
            LEFT JOIN [target] b ON a.ID = b.ID
            WHERE b.ID IS NULL;

            PRINT ' '+CAST(@@ROWCOUNT AS VARCHAR)+' rows inserted.';
        COMMIT;

        -- Refresh the log size for the next iteration's check
        BEGIN
            SET @logsize = (SELECT size/128 FROM sys.database_files WHERE type=1)
            PRINT ' '+'Log Size: '+CAST(@logsize AS varchar(20))+'MB'
        END
    END
END

SET ROWCOUNT 0

PRINT CHAR(13)+'All Done.'



How it works:

The variables at the top would be parameters in a stored proc.
@sampleDataRows - the # of rows to use for a test insert
@rowsPerCycle - the # of rows to insert per batch
@logReset - the size, in MB, to shrink the transaction log to when it is reset.

The data in the target table is compared to the source table. If records exist in the source table that do not exist in the target, the process begins:

First, the size of the transaction log is checked; if it is more than half of capacity, the log is shrunk. If the recovery model is FULL, it is first changed to SIMPLE before the shrink, then back to FULL afterwards.


IF (SELECT recovery_model_desc FROM sys.databases WHERE name = DB_NAME()) = 'FULL'
BEGIN
    EXEC(@dynamicTruncate);
    DBCC SHRINKFILE (@logName, @logReset);

    SET @dynamicTruncate = REPLACE(@dynamicTruncate,'SIMPLE','FULL');
    EXEC(@dynamicTruncate);
END
ELSE
    DBCC SHRINKFILE (@logName, @logReset);



Next the insert begins in chunks (defined by @rowsPerCycle)



BEGIN TRAN
INSERT INTO [target]
SELECT a.ID, a.Data FROM @sourceData a
LEFT JOIN [target] b ON a.ID = b.ID
WHERE b.ID IS NULL;

PRINT ' '+CAST(@@ROWCOUNT AS VARCHAR)+' rows inserted.';
COMMIT;



The log file is checked before each iteration and shrunk as needed.

I included a number of print statements which can be used for logging. For example, if you take this code and run it as a SQL job, the job log will read like this:


Starting large insert into {Your DB}...

Current Transaction Log stats for ajbTest:
Transaction Log File - E:\SQL_Logs\MSSQL10.MSSQLSERVER\MSSQL\Data\ajbTest_log.ldf
Transaction Log Capacity - 40MB
Transaction Log Size - 36MB (4MB remaining, 90.00% of capacity)

Inserting rows into {your table}...

The trans log needs to be shrunk. Shrinking to 1MB...
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
50000 rows inserted.
Log Size: 12MB
50000 rows inserted.
Log Size: 19MB
50000 rows inserted.
Log Size: 19MB
50000 rows inserted.
Log Size: 19MB
50000 rows inserted.
Log Size: 19MB
50000 rows inserted.
Log Size: 19MB
50000 rows inserted.
Log Size: 23MB

The trans log needs to be shrunk. Shrinking to 1MB...
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
50000 rows inserted.
Log Size: 12MB
50000 rows inserted.
Log Size: 19MB
50000 rows inserted.
Log Size: 19MB
50000 rows inserted.
Log Size: 19MB
5000 rows inserted.
Log Size: 19MB

All Done.



-- Alan Burstein



Best practices for getting help on SQLServerCentral
Need to split a string? Try DelimitedSplit8K or DelimitedSplit8K_LEAD (SQL 2012+)
Need a pattern-based splitter? Try PatternSplitCM
Need to remove or replace those unwanted characters? Try PatExclude8K and PatReplace8K.

"I can't stress enough the importance of switching from a 'sequential files' mindset to 'set-based' thinking. After you make the switch, you can spend your time tuning and optimizing your queries instead of maintaining lengthy, poor-performing code. " -- Itzek Ben-Gan 2001
sestell1
There shouldn't be any need to shrink the log file while doing the inserts. This just adds extra overhead as the log has to auto-grow back out as records are inserted.

It would probably be a lot easier to just set the recovery model to SIMPLE initially and leave it until after you are done inserting; that way, after each batch of records is committed, the log space used is cleared and can be reused for the next insert.

After you are done, shrink the log down, change the recovery model to FULL, and do a full backup and log backup.
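
A rough sketch of that wrap-up, assuming a hypothetical database MyDb whose logical log file name is MyDb_log and whose normal recovery model is FULL:

-- Run after the batched inserts/updates have finished (hypothetical names/paths).
DBCC SHRINKFILE (MyDb_log, 1024);        -- shrink the log back down, here to 1024 MB

ALTER DATABASE MyDb SET RECOVERY FULL;   -- restore point-in-time recovery

BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_after_load.bak';
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_after_load.trn';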