Cannot shrink log

  • My Transaction log is about 3GB and I can't shrink it.

    It is using a FULL backup, and I try over and over again to back it up and shrink it, but it won't shrink.

    DECLARE @DatabaseName VARCHAR(50);

    SET @DatabaseName = 'myDatabase'

    SELECT name, recovery_model_desc, log_reuse_wait_desc

    FROM sys.databases

    WHERE name = @DatabaseName

    This gives me:

    LOG_BACKUP in the log_reuse_wait_desc.

    I have been all over the web looking for an answer, but everything I try still leaves it the same size. I tried to change the initial size to 600MB, but that didn't work.

    Any ideas what I am missing?

    Thanks,

    Tom

  • What version of SQL are you running? I know that different versions of SQL handle shrinking the transaction log differently. When was the last time you took a transaction log backup? Here is the Microsoft article on shrinking the transaction log:

    http://msdn.microsoft.com/en-us/library/ms178037(v=sql.105).aspx

    --------
    For SQL Tips, check out my blog: http://lantztechknowledge.blogspot.com/
    You can also follow my twitter account to get daily updates: @BLantz2455

  • A FULL backup takes care only of the data pages in the data files.

    It does not touch the TRN log file at all.

    You need to release some space in the log file by doing a BACKUP LOG, and only then will you be able to shrink it.

    But it's not that simple.

    You can only shrink the TRN file if the VLFs (Virtual Log Files) at the end of it are "inactive" (check the Status column returned by DBCC LOGINFO: "0" is inactive, "2" is active).

    The first BACKUP LOG will inactivate all the VLFs up to the latest one(s) which are in use by the current transactions.

    You need to wait until SQL Server "loops over" and starts recording new transactions from the beginning of the LOG file. You will see Status=2 appear for the VLFs at the beginning of the TRN file.

    Then you can run BACKUP LOG again, and if the VLFs at the end (check again with DBCC LOGINFO) have become inactive, then you can shrink the file.
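
    For reference, a rough sketch of that sequence. The database name and logical log file name here are placeholders, not taken from this thread; get the real logical name from sys.database_files.

    -- Take a log backup so VLFs can be marked inactive.
    BACKUP LOG myDatabase TO DISK = N'D:\Backups\myDatabase_log.trn';

    -- Inspect the VLFs: Status = 0 means inactive (reusable), Status = 2 means active.
    DBCC LOGINFO ('myDatabase');

    -- If the VLFs at the end of the file are inactive, the shrink can reclaim them.
    USE myDatabase;
    DBCC SHRINKFILE (myDatabase_log, 600);  -- target size in MB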

    _____________
    Code for TallyGenerator

  • Sergiy (4/25/2013)


    A FULL backup takes care only of the data pages in the data files.

    It does not touch the TRN log file at all.

    That's not entirely accurate. You are right in that, when in FULL recovery mode, a database backup will not mark portions of the log inactive; however, database backups (a.k.a. full backups) actually do interact with the transaction log. Please read this article for more info:

    More on how much transaction log a full backup includes, by Paul Randal


    @tshad:

    The real question you should be asking is: why am I thinking of shrinking my transaction log at all? Your database is in FULL recovery mode, which means whoever set it up that way either did not answer the question "do I need point-in-time recovery?" and did not realize the implications of setting up the database in FULL recovery, or they answered "yes". You need to find out which one. If they answered "yes", then you need to start managing your transaction logs properly.

    If they did not know they were choosing the FULL recovery mode (many inexperienced DBAs do not), and you do not need point-in-time recovery on this database, then you can switch the recovery mode to SIMPLE, which would allow you to shrink your log file and relieve you of the need to worry about taking transaction log backups ever again.
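
    If, and only if, the answer turns out to be that point-in-time recovery is not needed, the change itself is small. A minimal sketch, assuming placeholder database and logical log file names:

    -- Only after confirming point-in-time recovery is NOT required for this database.
    ALTER DATABASE myDatabase SET RECOVERY SIMPLE;

    -- Shrink just the log file (check the logical name in sys.database_files).
    USE myDatabase;
    DBCC SHRINKFILE (myDatabase_log, 600);  -- target size in MB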

    Please read this article in its entirety: Managing Transaction Logs, by Gail Shaw

    Post any questions you have about it, or about the responses you have received on this thread so far, and we will try to safely help you out of the mess you are in.

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • Are you in some kind of competition for the stupidest advice ever?

    Is that what you suggest to your poor customers: switch the recovery mode on all databases to SIMPLE???

    So they never have to worry about the transaction log again?

    _____________
    Code for TallyGenerator

  • For a database that does not need point in time recovery, yes, I would recommend changing to Simple recovery mode.

    Try to keep it professional, Sergiy.

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • opc.three (4/26/2013)


    For a database that does not need point in time recovery,

    Which means - can afford to lose data.

    Which is not the case by default.

    yes, I would recommend changing to Simple recovery mode.

    You did not identify that condition in your statement, and still recommended this "solution".

    Very "professional".

    OP might be a newbie, might be unaware of proper ways to support databases.

    So we are supposed to teach him the proper ways, not add another cowboy hack to his arsenal of tools.

    Try to keep it professional, Sergiy.

    I believe my evaluation of your suggestion was absolutely professional.

    _____________
    Code for TallyGenerator

  • Sergiy (4/27/2013)


    Try to keep it professional, Sergiy.

    I believe my evaluation of your suggestion was absolutely professional.

    No Sergiy, I would have to agree with opc.three; you weren't being professional in your response to him.

  • Sergiy (4/27/2013)


    opc.three (4/26/2013)


    For a database that does not need point in time recovery,

    Which means - can afford to lose data.

    Which is not the case by default.

    Agreed. If you reread my post you'll see that I put forth a question the OP should ask, and the set of possible responses to expect, to determine whether they should consider changing the recovery mode to SIMPLE.

    yes, I would recommend changing to Simple recovery mode.

    You did not identify that condition in your statement, and still recommended this "solution".

    Very "professional".

    OP might be a newbie, might be unaware of proper ways to support databases.

    So we are supposed to teach him the proper ways, not add another cowboy hack to his arsenal of tools.

    Again, if you refer to my post you'll notice that I welcomed the OP to ask us further questions if they had doubts about anything that was said on this thread, or in Gail's article. Please consider your response before polluting this thread any further, and allow the OP to ask questions if needed.

    Try to keep it professional, Sergiy.

    I believe my evaluation of your suggestion was absolutely professional.

    If you are not just trying to make inflammatory comments, and you truly believe you were being professional in your comments, then I would be surprised if it were not extremely difficult for you to function in any of the business environments where I have contributed. I am not sure what it is like where you live and with the people you work with, but you have not met any of the criteria for professionalism in any of the circles I have operated in.

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • tshad (4/25/2013)


    My Transaction log is about 3GB and I can't shrink it.

    It is using a FULL backup, and I try over and over again to back it up and shrink it, but it won't shrink.

    DECLARE @DatabaseName VARCHAR(50);

    SET @DatabaseName = 'myDatabase'

    SELECT name, recovery_model_desc, log_reuse_wait_desc

    FROM sys.databases

    WHERE name = @DatabaseName

    This gives me:

    LOG_BACKUP in the log_reuse_wait_desc.

    I have been all over the web looking for an answer, but everything I try still leaves it the same size. I tried to change the initial size to 600MB, but that didn't work.

    Any ideas what I am missing?

    Thanks,

    Tom

    Hi Tom,

    There's a ton to learn about the Log File (LDF, usually).

    In the area of performance, the default initial settings are horrible. They'll grow the log file in a very fragmented way, which can have a great impact on performance. They'll also allocate some pretty small "VLFs", or "Virtual Log Files". Think of it kind of like having a disk with a super small sector size. Not very effective. For more on how to "right size" the underlying VLFs in the SQL Server log file, please see the following URL.

    http://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/

    That brings us to the next point. How big is your database, and how big is the largest table in the database? There are a few more aspects of database maintenance that use the log file and that you must consider when trying to right-size it: index maintenance and the rebuilding of statistics. Both are extremely important to performance, especially when you have nightly jobs that affect or read from many rows, and both are logged in the log file when they occur, just in case something goes wrong. The link to Gail's article that you were previously provided will help there quite a bit. I use the rule of thumb that the log file should start at about 1.5 times the size of the largest table, and should then be managed for growth to control the size of the VLFs that I previously spoke of. Log file growth should never be a surprise (should never happen automatically), in my humble opinion. Neither should data file growth.
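
    To apply that 1.5-times rule of thumb you first need the size of the largest table. One common way to get it, shown here only as an illustration (run it inside the database in question), is from the allocation catalog views:

    -- Largest user table by allocated space, in MB.
    SELECT TOP (1)
           OBJECT_NAME(p.object_id)      AS table_name,
           SUM(a.total_pages) * 8 / 1024 AS size_mb
    FROM sys.partitions       AS p
    JOIN sys.allocation_units AS a ON a.container_id = p.partition_id
    JOIN sys.tables           AS t ON t.object_id    = p.object_id
    GROUP BY p.object_id
    ORDER BY size_mb DESC;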

    Another thing to consider is how important your data is. Way too many people will set a database to "Simple" recovery mode because it's a small database and they do nightly full backups, and they think that's enough. If the data is important, nothing could be further from the truth. Ask yourself how many minutes or hours of data your company can afford to lose, and then set up log backups to run in no more than that amount of time. Personally, I won't tolerate the loss of any data, so I do "Point-in-Time" backups on my production databases every 15 minutes. That also helps keep the log size down on busy systems. "Point-in-Time" backups require that only the FULL recovery mode be used. If you go to the Bulk Logged mode, then every log backup taken while you're in that mode must be used in its entirety or not at all during a restore... and you can't skip logs. That's why they call it a "log chain".
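
    As an illustration of the kind of log backup you'd schedule every X minutes (the path and names are placeholders; a SQL Agent job or maintenance plan would normally run this):

    -- Time-stamped log backup, suitable for an Agent job step.
    DECLARE @file NVARCHAR(260) =
        N'D:\Backups\myDatabase_log_'
        + REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR(19), GETDATE(), 120), '-', ''), ':', ''), ' ', '_')
        + N'.trn';

    BACKUP LOG myDatabase TO DISK = @file;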

    Getting back to your original problem, the first thing to do is to visit the articles I recommended above and figure out what the correct size for the log file should actually be. Even if you decide it's too small and needs to grow, if the initial settings were left at the default, consider blowing away the log and starting over.

    Step 1 would be to take a full backup.

    Step 2 would be to take a log file backup.

    Step 3 would be to change the recovery model to the SIMPLE mode when not much is happening so that if something does go wrong during this timeframe, you won't lose much.

    Step 4 is to shrink the log file to 0 bytes. Don't shrink the database; shrink only the log file. This will clear out all the "bad stuff" with the incorrectly sized VLFs.

    Step 5 is to change the recovery model back to FULL.

    Step 6 is to change the initial size of the log file to the size you planned on, as I recommended earlier. Also change the growth setting to grow in MB rather than percent, just in case there's accidental growth. The growth increment in MB should line up with the size you want the VLFs to be if growth occurs.

    Step 7 would be to take either a differential backup (smaller and quicker than a full backup) or another full backup to re-establish the log chain.

    Step 8 would be to turn on a maintenance plan or custom code to do log file backups every x number of minutes as previously discussed.

    Step 9 would be to ensure that regular tape backups are being taken of all of your SQL Server backups.

    Step 10 would be to set up a plan to delete the backups from disk after they've been backed up to tape and after some period of time has elapsed. I keep 2 to 3 days of backups on disk for quick recoveries if anything goes wrong, and it has saved my bacon more than once (especially when you have developers and other folks that have too many privs in prod).
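
    Pulling steps 1 through 7 together, a rough T-SQL sketch would look something like the following. Every name, path, and size in it is a placeholder; check the logical log file name in sys.database_files before running anything like this.

    USE master;

    BACKUP DATABASE myDatabase TO DISK = N'D:\Backups\myDatabase_full.bak';  -- step 1: full backup
    BACKUP LOG      myDatabase TO DISK = N'D:\Backups\myDatabase_log.trn';   -- step 2: log backup

    ALTER DATABASE myDatabase SET RECOVERY SIMPLE;                           -- step 3: switch to SIMPLE

    USE myDatabase;
    DBCC SHRINKFILE (myDatabase_log, 0);                                     -- step 4: shrink only the log file

    ALTER DATABASE myDatabase SET RECOVERY FULL;                             -- step 5: back to FULL

    ALTER DATABASE myDatabase                                                -- step 6: planned size, growth in MB
        MODIFY FILE (NAME = myDatabase_log, SIZE = 600MB, FILEGROWTH = 512MB);

    BACKUP DATABASE myDatabase TO DISK = N'D:\Backups\myDatabase_diff.bak'   -- step 7: re-establish the log chain
        WITH DIFFERENTIAL;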

    To emphasize, the only time that I'd ever set a database to SIMPLE recovery on a permanent basis is when I truly didn't give a damn what was in it. Those could be staging databases, scratchpad databases (similar to Temp DB), and maybe even experimental "sandbox" databases. I would never permanently set a production database to anything other than the FULL recovery mode.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Thanks Jeff for keeping it up.

    🙂

    Your list of suggested actions is perfect, but I'd say it's a bit of overkill for beginners.

    To start, it's important to remember to do regular LOG backups apart from the DB backups.

    Then, after the pattern of TRN usage over a day/week is established, shrink the TRN file to the appropriate size.
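
    For example, to watch how much of the log is actually used over the day/week before picking that size, a quick snapshot (one row per database on the instance) comes from:

    -- Log size and percentage of the log currently in use.
    DBCC SQLPERF (LOGSPACE);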

    Ideally - shrinking should be done by doing all the steps in your list.

    Which never happens (apart from the databases I'm in charge of :-P).

    _____________
    Code for TallyGenerator
