Log file growth

  • Well, this rears its ugly head again. My disk filled up (only 60 GB, remember) because the db grew like crazy, so I dropped the database and restored from a backup I made the last time I had to rebuild this beast. I set up all the replication again and went through the above steps to shrink the log file back down, but alas, it's still growing - now up to 7 GB with a 2.5 GB db. What can I check, and how can I get this beast of a log file back to normal size?

  • GilaMonster (7/9/2009)


    Query sys.databases. What's the value of log_reuse_wait_desc for that database?

    Hi GilaMonster, I was just going through this thread and want to know: what value of log_reuse_wait_desc allows the log file to be shrunk? How do we determine that? Please explain. Thanks!
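    For reference, the check described above is a single query against sys.databases; the database name below is a placeholder:

    ```sql
    -- Shows what is currently preventing transaction log reuse/truncation.
    -- Replace YourDatabase with the actual database name.
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'YourDatabase';
    ```

    Common values include NOTHING, LOG_BACKUP, REPLICATION, and CHECKPOINT. The value tells you why the log cannot be truncated - and therefore why a shrink has nothing to reclaim until that cause is addressed.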

  • The main db shows REPLICATION; the replicated db shows CHECKPOINT

  • I feel compelled to add that replication is an ADVANCED topic. Almost every 'regular user' out there that tries to implement it goes astray - often with really bad consequences. If you are going to do replication, PLEASE do yourself and your company a favor and get a professional to assist you from planning through implementation and monitoring. There are soooo many ways you can mess things up or do them suboptimally.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • LOL!!! Thanks, but I've tried to tell them that. I'm doing what I'm doing from Google, a book, and this Forum. They don't see a problem. It's just a database, right?

  • doug turner (1/14/2010)


    LOL!!! Thanks, but I've tried to tell them that. I'm doing what I'm doing from Google, a book, and this Forum. They don't see a problem. It's just a database, right?

    I see that again and again. And when a company has to call in someone like me to fix stuff that is screwed up it will cost them a hell of a lot more than if they had been proactive. Best case they 'spend' a lot more money on staff salary because it takes untrained/inexperienced staffers way more time to do (and often REDO) things. Medium case is then having to still pay a consultant when things are broken or perform like a 2.5 legged dog. Worst case can also lead to complete loss of the company due to irrecoverable data errors.

    BTW, what book are you using?

    Are you also referencing Books Online?

    Best of luck with it ... 😎

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • If I understand correctly, you are replicating the database once nightly?

    There is a fairly straightforward alternative: use a simple script to copy the latest full backup file to the second server, then restore the database on the second server from that backup by calling a stored procedure in a scheduled job. The advantages of this method are that there is no log growth, and the indexes are rebuilt from scratch each time, so there is no fragmentation.

    ---------------------------------------

    Here's a link to a VBScript program you can set up using Windows Scheduler to find the latest *.bak file, copy it to a new location, and delete old *.bak files in the new folder, so the new folder contains only the latest backup file.

    http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=117870&SearchTerms=bak

    Here is a stored procedure to restore a database from a *.bak file in a directory folder. The above script ensures there is only the latest BAK file in that folder.

    I based the following on a proc by Nigel Rivett, which also covers restoring from transaction log backups: http://www.nigelrivett.net/SQLAdmin/s_RestoreDatabase.html

    You will need to alter the BAK file name pattern to match your own naming, e.g. change the " '_db_%.bak' " section to match your files.

    USE [master]
    GO

    /****** Object: StoredProcedure [dbo].[s_RestoreFullDatabase] Script Date: 06/06/2008 11:45:18 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO

    -- Used to restore a database from a Windows directory
    CREATE proc [dbo].[s_RestoreFullDatabase]
        @SourcePath varchar(200) ,
        @DataPath varchar(200) ,
        @LogPath varchar(200)
    as
    /*
    usage:
    exec s_RestoreFullDatabase
        @SourcePath = 'c:\a_test\' ,
        @DataPath = 'c:\a_test\' ,
        @LogPath = 'c:\a_test\'
    */
    /*
    Get all files from the directory (they will be *.bak).
    The process is controlled by the files in the directory.
    If there is a full backup then restore it.
    */
    declare @dbname varchar(128) ,
        @cmd varchar(2000) ,
        @filename varchar(128) ,
        @s varchar(128) ,
        @t varchar(128) ,
        @sql nvarchar(2000)

    create table #files (lname varchar(128), pname varchar(128), type varchar(10), fgroup varchar(128), size varchar(50), maxsize varchar(50))
    create table #dir (s varchar(2000))

    -- get the list of files in the directory
    select @cmd = 'dir /B ' + @SourcePath + '*.bak'
    insert #dir exec master..xp_cmdshell @cmd

    delete #dir
    where s is null
       or s not like '%.bak'
       or s not like '%^_db^_%' escape '^'

    select * from #dir

    -- deal with each database in turn
    while exists (select * from #dir)
    begin
        -- any full backups?
        select @dbname = null
        select top 1 @dbname = left(s, charindex('_db_', s) - 1)
        from #dir
        where charindex('_db_', s) <> 0
        order by s

        -- find the last full backup for this db
        select @filename = null
        select @filename = max(s)
        from #dir
        where s like @dbname + '_db_%.bak'

        -- now go through each file in turn for this database and restore it
        while exists (select * from #dir where s like @dbname + '^_%' escape '^')
        begin
            select top 1 @filename = s
            from #dir
            where s like @dbname + '^_%' escape '^'
            order by right(s, 20)

            select filename = @filename

            if @filename like '%^_db^_%' escape '^'
            begin
                -- restore a full backup
                if exists (select * from master..sysdatabases where name = @dbname)
                begin
                    select @cmd = 'drop database ' + @dbname
                    exec (@cmd)
                end

                -- now build the restore
                -- (this MOVE-clause loop comes from Nigel Rivett's original proc, where #files
                -- is populated via RESTORE FILELISTONLY; as posted here #files is empty, so the
                -- restore below uses the file paths recorded in the backup)
                select @cmd = null ,
                    @s = ''
                while @s < (select max(lname) from #files)
                begin
                    select top 1 @s = lname, @t = type from #files where lname > @s order by lname
                    select @cmd = coalesce(@cmd + ',move ', '') + '''' + @s
                        + ''' to ''' + case when @t = 'D' then @DataPath else @LogPath end
                        + @s + ''''
                end

                select @cmd = 'restore database ' + @dbname
                    + ' from disk = ''' + @SourcePath + @filename + ''' with File = 1, NOUNLOAD, STATS = 10, REPLACE'
                exec (@cmd)
            end

            delete #dir where s = @filename
        end
    end

  • Yep, we're doing a nightly replication. I do not do backups. The way we use this db, backups are not needed. This db houses data from devices that capture the number of viruses, spam, etc. from our email system. I then have another system that taps into this db and extracts the data daily for compilation and reporting. We were told by the vendor of this system NOT to extract directly from the main db, but to replicate it and extract from the replicated db. It is, without a doubt, the most convoluted system I have ever seen. The only thing we use this for is metrics on email counts, virus counts, and spam counts. Once the data is in my reporting system (OVPI running on Oracle), I don't care what happens to the data in the MS SQL db, only that the MS SQL db is up and running to collect the data for my next pull into PI. I could truncate every table in MS SQL and start from scratch every month if I wanted to, as long as I had a good capture of all that data in PI.

  • If you set up a FULL backup of the original database, you could use it for the simple replication method from the previous reply. Also, if you don't take transaction log backups, the log file will continue to grow unless you manually shrink it.

  • sparky-407434 (1/15/2010)


    If you set up a FULL backup of the original database, you could use it for the simple replication method from the previous reply. Also, if you don't take transaction log backups, the log file will continue to grow unless you manually shrink it.

    Manually shrinking in the scenario you mention (full recovery mode with no tlog backup of any kind) will not remove data from the tlog nor allow it to be reduced in size.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • If your database is using either the BULK_LOGGED or FULL recovery model, you need to set up transaction log backups to manage the growth of the transaction log. If you don't need that, then set the database to use the SIMPLE recovery model - just realize that you will lose point-in-time recovery of your database.
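    To make that concrete, here is a minimal sketch of both options; the database name and backup path are placeholders:

    ```sql
    -- Option 1: stay in FULL recovery and schedule regular log backups,
    -- which lets SQL Server reuse log space instead of growing the file.
    BACKUP LOG YourDatabase
    TO DISK = N'D:\Backups\YourDatabase_log.trn';

    -- Option 2: if point-in-time recovery is not required,
    -- switch to SIMPLE so the log truncates on checkpoint.
    ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;
    ```

    Note that switching to SIMPLE does not shrink the file by itself; it only stops the cause of growth, after which a one-off DBCC SHRINKFILE can reclaim the space.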

  • It's set to Simple

  • Manually shrinking in the scenario you mention (full recovery mode with no tlog backup of any kind) will not remove data from the tlog nor allow it to be reduced in size.

    RE: manual shrink - isn't this what the poster said he was already doing? i.e. "I ran the backup with truncate_only, then the dbcc_shrink and it went down to 104K!"

    I wasn't recommending this approach - I was pointing out that the log will grow unless he sets up automated transaction log backups, or else he will continue to have to manually shrink the file. (I'm not sure what you mean by "nor allow it to be reduced" - the user's approach will reduce it in size.)

    It is all moot now anyway since the poster has moved to Simple mode.

  • One thing that caught us out recently with replication is that you need to back up the replicated copy of the database, or else its log file will grow and grow!
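    In other words, the subscriber needs its own log management just like any other database. Assuming a subscriber database called ReplicaDB (a placeholder name), either of these keeps its log in check:

    ```sql
    -- If the replicated copy is in FULL recovery, back up its log regularly...
    BACKUP LOG ReplicaDB TO DISK = N'D:\Backups\ReplicaDB_log.trn';

    -- ...or, if point-in-time recovery of the copy is not needed,
    -- switch it to SIMPLE recovery so its log truncates on checkpoint.
    ALTER DATABASE ReplicaDB SET RECOVERY SIMPLE;
    ```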

