SQLServerCentral is supported by Red Gate Software Ltd.
 
Transaction log SHRINK
Posted Friday, January 11, 2013 1:55 PM
Forum Newbie


Hi,

There have been lots of posts about shrinking databases, so sorry to tread over old ground, but I have a fairly specific question and I'm working with live production data, so I definitely do not want to risk anything.

I've inherited the administration of a SQL Server instance, and the previous admin decided to take only full backups (using NT Backup) of the databases running in full recovery mode. The transaction logs of all the databases add up to more than 80GB, which is pushing the capacity of the server.

I've now started running transaction log backups from SSMS, so the logs shouldn't get any bigger, but I can't run the backups very often (and improve the granularity of point-in-time recovery) because they're too big.

The question is: should I shrink the transaction logs so they become more manageable? Obviously they will grow again, but if I run the transaction log backup frequently, that should keep them comparatively small, and I'll get the benefit of a closer-to-'now' point-in-time recovery.

I can 'manage' with them being the size they are now, so I definitely do not want to risk the live data if there's a possibility this could cause problems.

What's the expert opinion?

Thanks in advance...
Post #1406253
Posted Friday, January 11, 2013 2:52 PM


Ten Centuries


Yes, you are right. You can shrink the log to a reasonable size and schedule regular transaction log backups to keep it at roughly that size.

Muthukkumaran Kaliyamoorthy

Helping SQL DBAs and Developers >>>SqlserverBlogForum
Post #1406276
Posted Friday, January 11, 2013 3:55 PM
SSCrazy


You can definitely shrink the log file "live". SQL Server simply won't do it if it would cause a loss of data.

The tricky part is finding the right size to shrink it to: you don't want to shrink it so much that it has to grow dynamically, but you also don't want to leave it so overallocated that it holds significant unused disk space. It may take a little experimentation to find the best size for each log file.
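One low-effort way to watch log usage while you experiment, assuming nothing beyond the standard DBCC commands, is:

```sql
-- Reports, for every database on the instance, the log file size
-- and the percentage of the log currently in use
DBCC SQLPERF(LOGSPACE);
```

Run it before and after a log backup to see how much of the file your workload actually needs.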


SQL DBA,SQL Server MVP('07, '08, '09)
"In America, every man is innocent until proven broke!" Brant Parker
Post #1406285
Posted Friday, January 11, 2013 7:52 PM


Old Hand


Full/diff backups do not clear the transaction log ("clear" is sometimes referred to as "truncate" and actually means "mark parts of the transaction log as free for reuse"; clearing the log DOES NOT make the log file smaller!). Only a transaction log backup can clear the transaction log, and only a shrink can reduce the log file size, if certain conditions are met.
A transaction log backup consists of just the log records generated since the previous transaction log backup. Its size has nothing to do with the log file size, strange as that seems! So saying "I can't run them very often because they're too big" is not true, fortunately for you. The log file is big, but the log backups are not (except the initial transaction log backup, which really will be around 80GB)!

Here is a query which will show you the VLFs (virtual log files: the logical units of the transaction log that can be marked as "free to reuse"):

DECLARE @vlfs TABLE
(
    --RecoveryUnitId int,   -- SQL 2012 only
    FileId int,
    FileSize bigint,        -- VLF size in bytes
    StartOffset bigint,     -- VLF offset in bytes from the beginning of the transaction log
    FSeqNo int,
    Status int,
    Parity tinyint,
    CreateLSN decimal(25,0)
);

INSERT INTO @vlfs
--( RecoveryUnitId,  -- SQL 2012 only
--  FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateLSN
--)
EXEC('DBCC LOGINFO () WITH TABLERESULTS, NO_INFOMSGS');

SELECT
    file_id             = v.FileId,
    logical_name        = f.name,
    f.physical_name,
    log_size_kb         = REPLACE(CONVERT(varchar, f.size*$8, 1), '.00', ''),
    vlf_size_kb         = REPLACE(CONVERT(varchar, v.FileSize/$1024, 1), '.00', ''),
    vlf_physical_offset = v.StartOffset,
    WriteSequenceNo     = ROW_NUMBER() OVER(ORDER BY v.FSeqNo),
    Status              = CASE WHEN v.Status = 2 THEN 'ACTIVE' ELSE 'INACTIVE (free)' END,
    v.CreateLSN
FROM @vlfs v
JOIN sys.database_files f ON f.file_id = v.FileId
ORDER BY v.StartOffset;

You will see which VLFs are free, and their sizes. You should not have more than about 50 VLFs, so you should probably do some transaction log optimization after this situation is resolved.

If you have not already done that very slow initial 80GB transaction log backup, and want to avoid it, AND if you can afford to be without point-in-time restore capability until the operation is finished, you can use this extra-fast procedure. But again, it WILL BREAK the log backup chain:

- Take a diff backup. That is the point you can recover to if anything goes wrong. A diff backup is much faster than the initial backup of the 80GB transaction log, and also much faster and smaller than a full backup.
- Switch the db to the simple recovery model (this breaks the log backup chain).
- Checkpoint and shrink the log file (this should be a very fast operation).
- Switch the db back to the full recovery model.
- Take a differential backup to start a new log backup chain and actually begin behaving like the full recovery model again. Why a diff backup and not a full? The diff will be faster, so you spend less time exposed in the simple recovery model. From then on, take regular transaction log backups as usual, as often as required. You can recover point-in-time from the moment that diff backup finished onwards.

SCRIPT that as a whole unit, and TEST it before you run it.
That is the fastest method, and the process should finish in minutes (as long as the two diff backups take, plus a few seconds), AND you do not have to find 80GB of space for the initial transaction log backup.
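A rough sketch of the whole unit (MyDatabase, the logical log file name MyDatabase_log, and the backup paths are placeholders; adjust, script and test against your own environment first):

```sql
-- 1) Differential backup: the recovery point if anything goes wrong
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backup\MyDatabase_pre_shrink.dif'
    WITH DIFFERENTIAL, COMPRESSION;

-- 2) Switch to simple recovery (this breaks the log backup chain)
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

-- 3) Checkpoint and shrink the log file (target size in MB)
USE MyDatabase;
CHECKPOINT;
DBCC SHRINKFILE('MyDatabase_log', 1000);

-- 4) Back to full recovery
ALTER DATABASE MyDatabase SET RECOVERY FULL;

-- 5) Differential backup to start a new log backup chain
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backup\MyDatabase_post_shrink.dif'
    WITH DIFFERENTIAL, COMPRESSION;
```

Drop COMPRESSION if your edition does not support backup compression.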

If you do not want to lose point-in-time recovery capability, and are willing to spend your time on a much longer process, use this procedure (no log backup chain is lost here):

1) Manually start the job that takes the transaction log backup and wait for it to finish, OR do it yourself (being careful to change the filename in each iteration):
BACKUP LOG MyDatabase TO DISK='D:\transactionLogBackup1.trn'
This step will take a VERY long time for the initial 80GB backup. Log backups after that will be fast and small (relative to the initial log backup), even though the log itself is still 80GB.
2) checkpoint
3) DBCC SHRINKFILE('MyTransactionLogName', 1000) -- 1000 MB. Set as you like.
Repeat steps 1, 2 and 3 until the log is shrunk to the desired size; one or two iterations will probably be enough.
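One iteration of steps 1-3, sketched as a unit (MyDatabase and the logical file name MyDatabase_log are placeholders; use a new backup filename each time round):

```sql
-- Step 1: back up the log (change the filename on each iteration)
BACKUP LOG MyDatabase TO DISK = 'D:\transactionLogBackup1.trn';

-- Steps 2 and 3: checkpoint, then shrink the log file to the target size in MB
USE MyDatabase;
CHECKPOINT;
DBCC SHRINKFILE('MyDatabase_log', 1000);

-- Check the result; repeat if the file is still too big
DBCC SQLPERF(LOGSPACE);
```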

Use backup compression (if available) for all the backup steps (diff and transaction log backups) to speed up the process.

After that, optimize the number of VLF's, log size, and growth size, but that's another story.

Good luck!


_____________________________________________________
Microsoft Certified Master: SQL Server 2008
XDetails Addin - for SQL Developers
blog.sqlxdetails.com - Transaction log myths
Post #1406302
Posted Saturday, January 12, 2013 7:41 AM
Forum Newbie


Thanks for the replies, which have prompted some more questions:

1) I've already run the first 80GB tlog backup, and I was under the impression that every subsequent tlog backup would be the same size until I ran a SHRINK on the tlog? I can't easily test this because I would run out of space if the second tlog backup were indeed the same size.

2) My idea was that after a SHRINK I could control the size of the tlog with regular tlog backups and just leave the growth increment at the default (10%). You said that I shouldn't let it grow 'dynamically': by this do you mean that I shouldn't SHRINK it to a really small size and let it grow by itself? Would this create a performance issue?

3) The advised procedure for SHRINKING the tlog after the initial 80GB tlog backup includes a CHECKPOINT. I understand this writes the 'dirty' pages from memory to disk. What's the benefit of doing this before the SHRINK? Are there any negative impacts of running the CHECKPOINT (I'm guessing it doesn't break the log backup chain)?
Post #1406344
Posted Saturday, January 12, 2013 7:56 AM


SSC-Forever


jonathanforster (1/12/2013)
1) I've already run the first 80GB tlog backup, and I was under the impression that every subsequent tlog backup would be the same size until I ran a SHRINK on the tlog? I can't easily test this because I would run out of space if the second tlog backup were indeed the same size.


No. A log backup will be the size of the log records backed up, not the size of the log file, just as a database backup is the size of the data in the database, not the size of the database files.
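You can verify this from the backup history that SQL Server keeps in msdb (a sketch; dbo.backupset stores one row per backup, with type 'L' for log backups):

```sql
-- Compare log backup sizes over time: they track log activity,
-- not the size of the log file
SELECT database_name,
       backup_start_date,
       backup_size / 1048576.0 AS backup_size_mb
FROM msdb.dbo.backupset
WHERE type = 'L'   -- 'L' = transaction log backup
ORDER BY backup_start_date DESC;
```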

2) My idea was that after a SHRINK I could control the size of the tlog with regular tlog backups and just leave the growth increment at the default (10%).


10% is a poor choice. It leads to lots of tiny grow operations when the log is small, and to huge, time-consuming log growths when the log is large. Set your growth increment to a fixed size.

You said that I shouldn't let it grow 'dynamically': by this do you mean that I shouldn't SHRINK it to a really small size and let it grow by itself? Would this create a performance issue?


No, don't do that. Shrink the log to the size needed for normal operations plus some headroom (yes, you may need to do some monitoring to determine that value), and set a sensible growth increment (a fixed number of MB, not a percentage).
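For example, a fixed increment can be set like this (MyDatabase and the logical log file name MyDatabase_log are placeholders):

```sql
-- Replace the default 10% growth with a fixed 512MB increment
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = 'MyDatabase_log', FILEGROWTH = 512MB);
```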

3) The advised procedure for SHRINKING the tlog after the initial 80GB tlog backup includes a CHECKPOINT. I understand this writes the 'dirty' pages from memory to disk. What's the benefit of doing this before the SHRINK?


In full recovery model, none. In the simple recovery model it would truncate the log (mark it as reusable), potentially allowing a shrink to a smaller size.

Are there any negative impacts of running the CHECKPOINT (I'm guessing it doesn't break the backup log chain)?


Seeing as SQL Server runs automatic checkpoints on a regular basis during normal operations, there are no negative impacts.



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1406347
Posted Saturday, January 12, 2013 9:02 AM


Old Hand


jonathanforster (1/11/2013)
Hi,
I've now started running transaction log backups from SSMS, so the logs shouldn't get any bigger, but I can't run the backups very often (and improve the granularity of point-in-time recovery) because they're too big.

Once you perform a transaction log backup, the existing transaction log space is marked for re-use where applicable. The process repeats each time you perform a transaction log backup, which is why the recommended practice is to perform frequent log backups.

This means that your existing transaction log files do not need to be the size that they have grown to. From what you describe (no transaction log backups until you intervened), you should be able to shrink your transaction logs quite significantly, because currently they are sized to accommodate the history of all previous transactions.

You need to perform an exercise to correctly size all of your transaction log files, and you should also schedule regular log backups. To assist with this, I suggest you familiarise yourself with the Books Online topic Transaction Log Management. I hope this helps; let me know if you have any further questions.
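A quick way to check whether anything is still preventing log space from being reused (these are standard sys.databases columns, no placeholders needed):

```sql
-- log_reuse_wait_desc explains why log space cannot yet be reused;
-- LOG_BACKUP means a transaction log backup is needed
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
ORDER BY name;
```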



John Sansom (@sqlBrit) | www.johnsansom.com
Post #1406368
Posted Saturday, January 12, 2013 9:17 AM


Old Hand


Checkpoint will not flush pages from memory. It just writes the dirty pages to disk; they stay in memory, now in a clean state. No danger there. (Unlike the lazy writer, which responds to memory pressure, and certain DBCC commands, which really do flush pages out of memory.) You may benefit a little from that checkpoint step because a certain amount of time passes between the implicit checkpoint that happens within the log backup command and the shrink command. There could be a benefit when the log grows really fast and the log backup takes a long time, but it is not essential; it just may give you a slightly better result.
If you want to decrease the number of VLFs, shrink to near zero, then expand to the desired initial size in one step, e.g. 4000MB. Set the growth to a fixed size as Gail said, e.g. 512MB, or 256MB if your storage is not very fast. The initial size should be large enough to accommodate normal db operation without any growth.
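Sketched as a unit (MyDatabase and the logical file name MyDatabase_log are placeholders):

```sql
-- Shrink the log to near zero, then regrow it to the initial size in one step,
-- which rebuilds it with a small number of VLFs
USE MyDatabase;
DBCC SHRINKFILE('MyDatabase_log', 1);

ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = 'MyDatabase_log', SIZE = 4000MB, FILEGROWTH = 512MB);
```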


_____________________________________________________
Microsoft Certified Master: SQL Server 2008
XDetails Addin - for SQL Developers
blog.sqlxdetails.com - Transaction log myths
Post #1406377
Posted Saturday, January 12, 2013 9:57 AM


SSC-Forever


Vedran Kesegic (1/12/2013)
You may benefit a little from that checkpoint step because a certain amount of time passes between the implicit checkpoint that happens within the log backup command and the shrink command.


There's no implicit (or explicit) checkpoint within a transaction log backup, and in full recovery a checkpoint won't do anything to the log. Only a log backup clears the log in full recovery.



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1406385
Posted Saturday, January 12, 2013 10:48 AM


Ten Centuries


Have a look at Truncating and shrinking the transaction log file

Also take a look at http://www.sqlserverblogforum.com/2011/06/sql-server-recovery-models-and-backup-types/


Muthukkumaran Kaliyamoorthy

Helping SQL DBAs and Developers >>>SqlserverBlogForum
Post #1406391

