Shrink Tlog (LDF) size?

  • Hi,

    Database high availability: mirroring is configured. The principal server Tlog (.LDF) file is 25 GB, and warning messages are being recorded in the error log that the VLF count is more than 1000.

    TLog backups are configured to run every 15 minutes.

    I want to reduce the Tlog size. How can I do that without using DBCC SHRINKFILE (db_LOG, TRUNCATE_ONLY)?

    Because if the TRUNCATE_ONLY command is used, the LSN chain will be broken and point-in-time recovery will not be possible.

    Thanks

    ananda

    The only way to shrink the size of the file is to shrink the file.

    If your file is growing too large, you have a couple of choices. First, take log backups more frequently. If you're mirroring, increase the frequency of the log transmission. Second, reduce the size or number of transactions by adjusting your code. That one is frequently extremely difficult, but it is sometimes possible.

    That's it. We really have very few options in managing log file size.
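
    For example, a minimal sketch of both ideas; the database name and backup path here are placeholders, not anything from this thread:

    -- Hypothetical names; substitute your own database and path.
    -- See what is preventing log space from being reused
    -- (LOG_BACKUP, DATABASE_MIRRORING, ACTIVE_TRANSACTION, ...).
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'YourDatabase';

    -- The same kind of log backup the scheduled job takes; running it more
    -- often lets log space be reused sooner, so the file has less reason to grow.
    -- In a real job, use a unique file name per backup.
    BACKUP LOG [YourDatabase]
    TO DISK = N'N:\SQLBackups\YourDatabase_log.trn'
    WITH STATS = 10;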

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

    Yes, this is an OLTP database with very heavy transaction volume across 8 manufacturing sites.

    Separate Tlog backups are configured to run every 15 minutes. Each Tlog backup file is at most 8 MB, but it grows to 4 to 5 GB during the weekly index maintenance, update stats, integrity jobs, etc.

    Please confirm: if I execute DBCC SHRINKFILE (db_log, TRUNCATE_ONLY), what will the impact be on the mirror database? Will the log still be applied to the mirror database?

    I am planning to execute that command to reduce the log file size.

    Thanks

    ananda

    TRUNCATEONLY is not a valid option when shrinking a log. If you don't specify a size, the file will shrink as far as possible, probably requiring you to grow it again.

    Don't run that command.

    If you choose to shrink, shrink to a sensible size which is sufficient for your day to day operation.
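
    If it helps, a rough sketch only; the logical file name and target size below are assumptions, so pick a size that comfortably covers your index maintenance window:

    -- Hypothetical file name; target size is in MB, so 8192 = 8 GB.
    DBCC SHRINKFILE (N'db_log', 8192);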

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Absolutely what Gail says. But the real issue is going to be what happens the next time you run index maintenance, etc. Isn't the log just going to grow again? Then do you shrink it again? Then won't it grow again?

    See where this is going?

    Unless you've experienced a one-time issue with the log (backups were failing or something like that), I wouldn't suggest you shrink it. It's probably the size it needs to be. If it's too big for a given drive, get a bigger drive.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • OK, many thanks for the experts' suggestions.

    On the server side, 800 GB of free space is available on the log file partition, so I have not executed that command and will let the file grow.

    Please confirm:

    Are there any performance issues if a huge number of VLFs are created in the LDF file?

    Thanks

    ananda

  • ananda.murugesan (7/22/2014)

    Are there any performance issues if a huge number of VLFs are created in the LDF file?

    Yes. You've either been shrinking and growing it over and over, and/or your growth increment is really tiny (probably the default 1 MB) and you've grown it thousands of times. That scatters things across the disk and can cause performance issues, primarily in startup, backup, and recovery.
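
    A quick way to see how fragmented the log is, and to give it a larger fixed growth increment so future growths create fewer, bigger VLFs; the database and file names below are placeholders:

    -- Run in the database in question; one row per VLF,
    -- so thousands of rows means a heavily fragmented log.
    USE [YourDatabase];
    GO
    DBCC LOGINFO;
    GO

    -- Replace the tiny default growth increment with a fixed, larger one.
    USE [master];
    GO
    ALTER DATABASE [YourDatabase]
    MODIFY FILE (NAME = N'YourDatabase_log', FILEGROWTH = 512MB);
    GO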

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Source: Glenn Berry

    -- Switch to your database
    USE ngmetadata;
    GO

    -- Check VLF count for current database
    DBCC LogInfo;

    -- Check individual file sizes and space available for current database
    SELECT name AS [File Name], physical_name AS [Physical Name], size/128.0 AS [Total Size in MB],
           size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS [Available Space In MB], [file_id]
    FROM sys.database_files;

    -- Step 1: Compressed backup of the transaction log
    -- (backup compression requires Enterprise Edition in SQL Server 2008)
    BACKUP LOG [ngmetadata] TO DISK = N'N:\SQLBackups\ngmetadataLogBackup.bak' WITH NOFORMAT, INIT,
        NAME = N'ngmetadata- Transaction Log Backup', SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 1;
    GO

    -- Step 2: Shrink the log file
    DBCC SHRINKFILE (N'ngmetadata_log', 0, TRUNCATEONLY);
    GO

    -- Check VLF count for current database
    DBCC LogInfo;

    -- Step 3: Grow the log file back to the desired size,
    -- which depends on the amount of write activity
    -- and how often you do log backups
    USE [master];
    GO
    ALTER DATABASE ngmetadata MODIFY FILE (NAME = N'ngmetadata_log', SIZE = 8GB);
    GO

    -- Switch back to your database
    USE ngmetadata;
    GO

    -- Check VLF count for current database after growing log file
    DBCC LogInfo;

  • smtzac (7/23/2014)

    (Glenn Berry's script quoted above)

    As already pointed out by Gail, TRUNCATEONLY is an invalid option for DBCC SHRINKFILE when shrinking logs.

    The script from Glenn shows you how to resize your log if you have an excessive number of VLFs; it does not advocate shrinking the log regularly or long term 😉
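
    If you do run it, a sketch of Step 2 without that option (target size 0 just shrinks the file as far as possible before Step 3 grows it back):

    -- Step 2, without TRUNCATEONLY
    DBCC SHRINKFILE (N'ngmetadata_log', 0);
    GO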

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • You're right, Perry Whittle! That is just for the case of too many VLFs.
