April 1, 2008 at 9:54 am
On a server with 1000 or more databases, you will very often see the following error: "The transaction log for database 'db_name' is full" (especially if you limit the size of the transaction log, e.g. to 10 MB).
To reduce the size of the transaction log file and remove the unused pages, shrink it with the DBCC SHRINKFILE command.
Example:
-- Truncate the inactive portion of the log (pre-SQL Server 2008 syntax)
BACKUP LOG db_name WITH TRUNCATE_ONLY
GO
-- Then shrink the data and log files (logical file names, target size in MB)
USE db_name
GO
DBCC SHRINKFILE(db_name_dat, 1)
DBCC SHRINKFILE(db_name_log, 1)
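Note that DBCC SHRINKFILE takes the logical file name (db_name_dat and db_name_log above), not the physical file name. If you are not sure of the logical names for a database, something like this will list them (a minimal sketch; db_name is a placeholder):
USE db_name
GO
-- list the logical name and current size (in 8 KB pages) of each file
SELECT name, size FROM sysfiles
GO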
However, this becomes a treadmill: you fix one database, and soon another one comes up with the same problem.
The easiest way to avoid this is to change the recovery model from Full to Simple.
Note: the script below also changes the model database, so any new database will be created with the Full or Simple recovery model, depending on which script you run.
Here is the script to perform this via Query in SQL Management Studio:
-------------------------------------
USE master
GO
-- Declare a variable to store the value [database name] returned by FETCH.
DECLARE @dbname sysname, @cmd varchar(1000)
-- Declare a cursor to iterate through the list of databases
DECLARE db_recovery_cursor CURSOR FOR
SELECT name FROM sysdatabases WHERE name <> 'tempdb' -- tempdb always uses SIMPLE
-- Open the cursor
OPEN db_recovery_cursor
-- Perform the first fetch and store the value in a variable.
FETCH NEXT FROM db_recovery_cursor INTO @dbname
-- loop through cursor until no more records fetched
WHILE @@FETCH_STATUS = 0
BEGIN
-- create the alter database command for each database
SET @cmd = 'ALTER DATABASE ' + QUOTENAME(@dbname) + ' SET RECOVERY SIMPLE'
-- alter each database, setting the recovery model to SIMPLE
EXEC(@cmd)
PRINT @dbname
-- fetch the next database name
FETCH NEXT FROM db_recovery_cursor INTO @dbname
END
-- close the cursor and deallocate memory used by cursor
CLOSE db_recovery_cursor
DEALLOCATE db_recovery_cursor
----------------------------------------------
If you want to change from Simple to Full:
USE master
GO
-- Declare a variable to store the value [database name] returned by FETCH.
DECLARE @dbname sysname, @cmd varchar(1000)
-- Declare a cursor to iterate through the list of databases
DECLARE db_recovery_cursor CURSOR FOR
SELECT name FROM sysdatabases WHERE name <> 'tempdb' -- tempdb cannot use FULL recovery
-- Open the cursor
OPEN db_recovery_cursor
-- Perform the first fetch and store the value in a variable.
FETCH NEXT FROM db_recovery_cursor INTO @dbname
-- loop through cursor until no more records fetched
WHILE @@FETCH_STATUS = 0
BEGIN
-- create the alter database command for each database
SET @cmd = 'ALTER DATABASE ' + QUOTENAME(@dbname) + ' SET RECOVERY FULL'
-- alter each database, setting the recovery model to FULL
EXEC(@cmd)
PRINT @dbname
-- fetch the next database name
FETCH NEXT FROM db_recovery_cursor INTO @dbname
END
-- close the cursor and deallocate memory used by cursor
CLOSE db_recovery_cursor
DEALLOCATE db_recovery_cursor
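After running either script, you can check the result; DATABASEPROPERTYEX reports the current recovery model of each database (a minimal sketch):
USE master
GO
-- show the current recovery model for every database
SELECT name, DATABASEPROPERTYEX(name, 'Recovery') AS RecoveryModel
FROM sysdatabases
ORDER BY name
GO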
April 2, 2008 at 9:03 am
I would be careful with switching recovery models. It is not really as simple (no pun intended) as just going from Full to Simple.
A database is set to Full to allow point-in-time recovery. If you need to recover the data up to the moment of a crash/failure, Simple recovery will not give you that option.
We do use the script above (from BOL), but more or less only for reporting databases, where if the database were to fail the data would simply be 'refreshed'.
Switching from Full to Simple is not the best way to maintain db/tlog size levels...
Just my two cents...
April 3, 2008 at 4:56 am
Absolutely agree.
If you don't want point-in-time recovery, don't use the FULL recovery model - you don't get any other gain from using FULL. In that case, just use SIMPLE.
If you want point-in-time recovery, use FULL and take transaction log backups. That will keep your log size in check (as long as there isn't some other reason why the log can't be truncated).
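For example, a regularly scheduled log backup along these lines keeps the log from filling up (a sketch; the database name, path, and file name are placeholders):
-- back up the transaction log; once backed up, the inactive portion
-- of the log can be reused, so the file stops growing
BACKUP LOG db_name
TO DISK = 'D:\SQLBackups\db_name_log.trn'
GO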
Note that WITH NO_LOG and WITH TRUNCATE_ONLY are both removed in SQL Server 2008.
See these blog posts for more details:
- http://www.sqlskills.com/blogs/paul/2007/09/24/SearchEngineQA1RunningOutOfTransactionLogSpace.aspx
Thanks
Paul Randal
CEO, SQLskills.com: Check out SQLskills online training!
Blog:www.SQLskills.com/blogs/paul Twitter: @PaulRandal
SQL MVP, Microsoft RD, Contributing Editor of TechNet Magazine
Author of DBCC CHECKDB/repair (and other Storage Engine) code of SQL Server 2005
April 4, 2008 at 8:07 am
Setting a limit on the size of the transaction log is not the way to keep it under control. As you have found out, this just results in the database erroring out when the limit is reached. As stated before, it is much better to back up the transaction log regularly to keep it from growing so large, and then monitor it to see if it grows too much. The following script will display log sizes for you.
create Table #LogSpace (
DbName sysname,               -- database names can be up to 128 characters
LogSize DECIMAL(12,2),        -- total log size, in MB
PerCentUsed DECIMAL(12,2),    -- percentage of the log currently in use
Status INT)
go
-- capture the output of DBCC SQLPERF(LOGSPACE) into the temp table
insert into #LogSpace
exec ('DBCC SQLPERF ( LOGSPACE )')
go
-- report used and free space per log, in MB, largest logs first
select left(DbName,20) DbName, LogSize,
cast(LogSize * PerCentUsed / 100 as DECIMAL(12,2)) UsedSize,
cast(LogSize * (1 - (PerCentUsed / 100)) as DECIMAL(12,2)) FreeSize
from #LogSpace
order by LogSize desc
go
Drop Table #LogSpace
go
If you do shrink the log file, I would do a full database backup afterward.
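For example, something along these lines (a sketch; the database name, file names, and path are placeholders):
USE db_name
GO
-- shrink the log file, then take a full backup
DBCC SHRINKFILE(db_name_log, 1)
GO
BACKUP DATABASE db_name
TO DISK = 'D:\SQLBackups\db_name_full.bak'
GO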
Steve