Transaction log and simple recovery mode

  • Hello everyone,

     

    I would appreciate your thoughts on transaction logs and DB recovery modes. I have a 25 GB DB (in simple recovery mode) that gets loaded with data nightly. During this load the transaction log grows from 504 KB to 23 GB, until I shrink it in the morning.

    If I understand the issue correctly, this should not be happening, because simple recovery mode means that transaction logs are truncated as soon as transactions are done. Yet every morning I check this server and its tr. log is huge, but there are no active transactions. In fact, if there were any active transactions I would not be able to shrink it back to 504 KB with DBCC SHRINKFILE.

    Please comment,

     

    Thanks.

  • This is a common misconception. SIMPLE mode does not mean the log file is shrunk. The log file is checkpointed, then truncated. Truncation only leaves more empty space inside the log file - it does NOT return the unused portion to the operating system.

    Let's say you have a 10 GB log file. The active portion is 3 GB. SQL Server will checkpoint it and truncate the 7 GB of inactive log. That means the amount of data in the log is only 3 GB, but the log file size is still 10 GB.

    Why does this happen? So the log doesn't have to go through an autogrow. SQL Server figures that if the space was needed once, it will probably be needed again. It's less resource-consuming to reuse the space than to have to grow it again.
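    For what it's worth, the truncate-versus-shrink difference is easy to see from T-SQL. A sketch, with hypothetical database and logical file names (substitute your own):

    ```sql
    -- DBCC SQLPERF(LOGSPACE) reports, per database, the log file's total
    -- size and the percentage of that space actually in use. After a
    -- checkpoint/truncate in simple mode the "used" percentage drops,
    -- but the file size stays the same.
    DBCC SQLPERF(LOGSPACE);

    -- Shrinking, by contrast, does return space to the operating system.
    -- Names here are hypothetical; use your DB and its logical log file name.
    USE MyNightlyLoadDB;
    DBCC SHRINKFILE (MyNightlyLoadDB_log, 512);  -- target size in MB
    ```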

    -SQLBill

  • Thanks, that explains it.

  • Hi,

    On a sidenote: if your log grows to 23 GB every night, it would be wise to leave it at that size and not shrink it every day. It is quite time- and resource-consuming when the log has to grow from 0.5 GB to 23 GB; if you leave the log at its full size, the data load should be quicker. Maybe you'll need to add some disk space to your server, but a few GB shouldn't be a big issue these days.
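    If you go this route, you can also size the log once, up front, instead of letting autogrow do it in many small increments. A sketch, with hypothetical names:

    ```sql
    -- Grow the log file to its known working size in one operation.
    -- Database name and logical file name are hypothetical; check
    -- sp_helpfile for the logical name on your server.
    ALTER DATABASE MyNightlyLoadDB
    MODIFY FILE (NAME = MyNightlyLoadDB_log, SIZE = 24000MB);
    ```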

  • Would there be an advantage to switching to the Bulk-Logged model instead?

  • Bulk-logged mode will only affect bulk operations. See Books Online for a list of these. Examples are bcp, BULK INSERT, CREATE INDEX and rebuilding indexes.

    So, if you drop indexes, do bulk inserts, and then recreate the indexes, bulk-logged mode should reduce the size of your log file. If you use a normal DTS data pump or some other mechanism that simply does normal INSERTs and adds entries to indexes one at a time, you will still see large volumes.

    In all recovery modes, but especially Simple recovery, the size of a batch determines how big a transaction is. If you use DTS to load a table, the default batch size of zero says the entire load is a single large transaction. If you use a smaller batch size, you will get many smaller transactions. In simple recovery mode, the previously completed transactions can be marked as "reusable" log space at each checkpoint, so the transaction log file doesn't have to grow very large. Of course, this means that if the load fails part way through, you will need to have some means of either restoring the database, removing any successfully loaded records or rerunning the load process without creating duplicates.
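    As a sketch of the batched approach (file path and table name are hypothetical):

    ```sql
    -- BATCHSIZE makes each 50,000-row batch its own transaction, so in
    -- simple recovery the space used by completed batches can be marked
    -- reusable at each checkpoint, instead of the whole load being held
    -- in the log as one giant transaction.
    BULK INSERT dbo.StagingTable
    FROM 'C:\loads\nightly.dat'
    WITH (BATCHSIZE = 50000, TABLOCK);
    ```

    The trade-off David mentions applies: a failure partway through leaves the earlier batches committed, so the rerun has to cope with them.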

    David Lathrop
    DBA
    WA Dept of Health

  • thanks,

     

    That's actually what I wind up doing.

  • Along these lines...

    I have a production DB environment that does massive calculations.  The data file is 25 GB in size, yet the logs are 50 GB.  We've verified that the system is in Simple mode.  We can shrink the log back to something reasonable, but it continues to grow.

    Is there any possibility that SQL isn't handling the "simple mode" correctly? 

    Obviously the application could be iterating through stuff and generating that many transactions in a session...But...DAMN that's a lot...

    Thanks,

    Mike

  • Hi Mike,

     

    see what SQLBill wrote about this (above). It seems to me that the best thing (and that's what I wound up doing) is to find the max size and leave it there.

  • It's not that I don't understand what SQLBill is saying...It's that I've never seen an ACTIVE transaction log be double the size of the DB...Typically it can grow to the same size as the DB when doing reindexing or some other DBCC task, but I've never seen normal use cause logs like this...

  • Normally I'd agree with you, but the fact is that this log is growing. You can guess what is going on there, but if you really want to know, take a look inside the actual t-log (for example with the DBCC LOG command, or with one of the many third-party log-viewing tools).
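    A minimal sketch of that, assuming a hypothetical database name (note that DBCC LOG is undocumented, so its output format can change between versions):

    ```sql
    -- Dump log records for a database. The second parameter is the
    -- level of detail, from 0 (minimal) up to 4 (full row detail).
    -- Expect a LOT of output on a busy log.
    DBCC LOG ('MyNightlyLoadDB', 0);
    ```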
