
Truncating a very large transaction log in the FULL recovery model


I always configure transaction log backups for every database in a non-SIMPLE recovery model, and to complement that I also have monitoring in place to alert me if a transaction log doesn't get backed up, which does happen on rare occasions. For instance, in my current organization all applications/databases are third-party apps with nothing developed in-house, and once in a while an app or user will run a huge transaction. They may, say, decide to purge almost all historical data from a 500TB database and delete 450TB of it in a single transaction, leaving a huge transaction log and not enough available disk space on the backup drive (a separate issue/topic).
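As a rough illustration of that kind of monitoring, a check like the one below flags databases whose log hasn't been backed up recently. This is a minimal sketch, not the exact check I run: the 24-hour threshold and the decision to skip system and SIMPLE-recovery databases are assumptions you should adjust to your environment.

-- Sketch: find non-SIMPLE databases with no log backup in the last 24 hours
select d.name,
       max(b.backup_finish_date) as last_log_backup
from sys.databases d
left join msdb.dbo.backupset b
       on b.database_name = d.name
      and b.type = 'L'                      -- 'L' = transaction log backup
where d.recovery_model_desc <> 'SIMPLE'
  and d.database_id > 4                     -- skip system databases
group by d.name
having max(b.backup_finish_date) is null
    or max(b.backup_finish_date) < dateadd(hour, -24, getdate())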

Per best practice, and for almost any conceivable scenario, it's preferable to take a transaction log backup and then shrink the log file(s). But in an emergency you can use the following SQL script to quickly truncate the transaction log. I believe the script is self-explanatory, but if I missed anything or you have questions, please feel free to post a comment here and I will do my best to get back to you.
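For reference, the preferred (non-emergency) sequence looks something like this. The backup path and logical log file name here are placeholders, not values from any real system:

-- Preferred path: back up the log first, then shrink the log file
BACKUP LOG [MyDB] TO DISK = N'X:\Backups\MyDB_log.trn'   -- placeholder path
USE [MyDB]
DBCC SHRINKFILE (N'MyDB_log', 64)                        -- target size in MB; logical file name is a placeholder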

  

/* CAUTION:
USE THIS ONLY IN AN EMERGENCY WHERE THE TRANSACTION LOG HAS NOT BEEN BACKED UP
IN A VERY LONG TIME AND HAS GROWN TOO BIG

The script does the following in order to shrink the transaction log file:
- Ensure the db is not running in the SIMPLE recovery model
- Store the database recovery model in a variable
- Change the recovery model to SIMPLE
- Issue a CHECKPOINT command
- Wait 2 seconds
- Shrink the log file(s)
- Wait 2 seconds
- Change the recovery model back to its original value

IMPORTANT: This script does not take a full or differential database backup, so it's crucial that you take a full or differential backup afterwards to re-establish the transaction log backup chain (a minimal example follows the script). Otherwise any attempt to take a transaction log backup will fail, creating more problems.
*/

declare @db nvarchar(100) = 'MyDB'                 -- Specify name of the database here
declare @target_log_size_mb nvarchar(20) = '64'    -- if DB has multiple log files, each log file will be shrunk to this size (MB)
declare @recovery_mode nvarchar(100)
declare @sql nvarchar(4000)

select @recovery_mode = recovery_model_desc from sys.databases where name = @db

if @recovery_mode = 'SIMPLE'
    raiserror('Error: Use this script only on a database in the FULL recovery model.', 16, 1)
else if databasepropertyex(@db, 'Updateability') = 'READ_ONLY'
    raiserror('Error: Database is not updatable. Please ensure that the database is not read only, mirrored or otherwise inaccessible.', 16, 1)
else
BEGIN
    set @sql = '
USE [master]
ALTER DATABASE [' + @db + '] SET RECOVERY SIMPLE WITH NO_WAIT

USE [' + @db + ']
WAITFOR DELAY ''00:00:02''
checkpoint
WAITFOR DELAY ''00:00:02''

-- open a cursor so we can iterate through and shrink all log files
declare @log_file_name sysname
declare c1 cursor for select name from sys.database_files where type = 1   -- type 1 = log file
open c1
fetch c1 into @log_file_name
while @@FETCH_STATUS = 0
begin
    print @log_file_name
    DBCC SHRINKFILE (@log_file_name, ' + @target_log_size_mb + ')
    fetch c1 into @log_file_name
end
close c1
deallocate c1

WAITFOR DELAY ''00:00:02''

USE [master]
ALTER DATABASE [' + @db + '] SET RECOVERY ' + @recovery_mode + ' WITH NO_WAIT
'
    print @sql
    exec sp_executesql @sql
END
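Following up on the IMPORTANT note above: once the script completes, re-establish the backup chain with something like the snippet below. The paths and file names are placeholders:

-- Re-establish the backup chain so subsequent log backups succeed
BACKUP DATABASE [MyDB] TO DISK = N'X:\Backups\MyDB_full.bak' WITH INIT   -- placeholder path
-- or, if a recent full backup already exists, a differential is enough:
-- BACKUP DATABASE [MyDB] TO DISK = N'X:\Backups\MyDB_diff.bak' WITH DIFFERENTIAL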

Obviously, if a DBA or a more experienced database developer were the one running a gigantic transaction (e.g. deleting 450TB worth of data from a 500TB database), they would most likely switch the recovery model to SIMPLE and disable the transaction log backup job BEFORE running such a transaction, then change it back to FULL/BULK_LOGGED, take a full backup, and re-enable the transaction log backup job.
They would also likely break such an operation into much smaller, manageable batches/transactions.
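A batched purge might look roughly like this. The table name, batch size, and purge filter are all hypothetical:

-- Sketch: delete in small batches so each transaction stays short
-- and log space can be reused between log backups/checkpoints
declare @batch int = 10000                     -- hypothetical batch size
while 1 = 1
begin
    delete top (@batch) from dbo.HistoryTable  -- hypothetical table
    where EventDate < '2015-01-01'             -- hypothetical purge filter
    if @@ROWCOUNT = 0 break
end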

