• Gail, great questions (especially the proof question). I split this into two postings to make it a little easier to read, as well as easier for anyone who wants to respond to a specific topic of their choice.

    First, I want to make sure everyone following this understands what I mean by "performance degradation". In the context of this thread, we are talking purely about the performance of the transaction log itself (not the impact on the database engine or the hard drives); however, it is very hard not to touch on those aspects in the same conversation.

    Now, with that cleared up, the concept is very simple. Assume we have a transaction that we know is 10 KB in size and that we know we will commit 500,000 times. Also assume we have a transaction log just large enough to store 1,000 of these transactions (approximately 10 MB). And assume the Auto-growth option is turned on, with the increment set high enough to grow the log to a size that will fit all 500,000 transactions (approximately another 5 GB when space runs out, since 500,000 x 10 KB is roughly 5 GB). I know this is a bit absurd, but it is entirely possible.
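    To make the scenario concrete, here is a rough sketch of how such a database might be created (the database name, file names, and paths are made up for illustration; the sizes match the scenario above):

        USE master;
        GO
        -- Hypothetical database for the scenario: ~10 MB log, one huge growth increment
        CREATE DATABASE LogDemo
        ON PRIMARY
            (NAME = LogDemo_data, FILENAME = 'C:\SQLData\LogDemo.mdf', SIZE = 100MB)
        LOG ON
            (NAME = LogDemo_log, FILENAME = 'C:\SQLData\LogDemo.ldf',
             SIZE = 10MB,          -- holds roughly 1,000 of our 10 KB transactions
             FILEGROWTH = 5GB);    -- a single growth is enough for all 500,000
        GO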

    In the SIMPLE model we would see the following occur within the transaction log (this assumes the log has no other transactions previously stored and has a fresh checkpoint; a small test script follows the steps)...

    A-1) After approximately 700 transactions are committed, SQL Server will issue an automatic checkpoint

    A-2) Once the automatic checkpoint occurs, SQL Server will mark those transactions as ready for release from the log

    A-3) SQL Server will release the transactions (truncate the log)

    B-1) SQL Server will resume where it left off in the set of transactions it was processing

    B-2) SQL Server will repeat from step A-1 for the next set of 700 transactions
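    If you want to watch this happen, a quick-and-dirty test along these lines will do (the table name and row size are made up; each autocommitted INSERT below logs roughly 8 KB, close to our hypothetical 10 KB transaction, and GO 5000 is an SSMS/sqlcmd batch-repeat shortcut):

        ALTER DATABASE LogDemo SET RECOVERY SIMPLE;
        GO
        USE LogDemo;
        GO
        CREATE TABLE dbo.Payload (id int IDENTITY, filler char(8000) NOT NULL DEFAULT '');
        GO
        -- Each INSERT is its own (autocommit) transaction
        INSERT dbo.Payload DEFAULT VALUES;
        GO 5000
        -- Log Space Used (%) climbs toward ~70%, then drops after the automatic checkpoint
        DBCC SQLPERF(LOGSPACE);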

    Now, notice here that SQL Server never actually increases the log file size, even though it has the setting to do so. This is because the transactions fit within the existing log file, and SQL Server issues an automatic checkpoint either after a timed interval (determined by SQL Server, but it may be overridden by the user) or upon 70% of the transaction log's capacity being used, whichever occurs first.
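    For reference, that timed interval is governed by the server-wide 'recovery interval' option; the value is in minutes, and the default of 0 lets SQL Server decide. It is an advanced option, so it must be exposed first:

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- 0 = let SQL Server manage checkpoint frequency (the default)
        EXEC sp_configure 'recovery interval', 5;   -- target ~5 minutes of recovery work
        RECONFIGURE;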

    With the FULL recovery model we will see slightly different behavior (the relevant settings are sketched after the steps)...

    A-1) When the transactions no longer fit within the log, SQL Server checks whether the FILEGROWTH option is enabled, and if so, what size to grow by (10% is the default; however, our setting in this scenario is 5 GB)

    A-2) SQL Server then grows the log file by that amount (assuming disk space is available); should FILEGROWTH be turned off or disk space run out, SQL Server returns a 9002 error

    A-3) If there are no errors, SQL Server resumes processing transactions until they are either completed or further space is required (returning to step A-1)
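    Both the recovery model and the growth increment checked in step A-1 are plain database settings; they can be inspected and changed along these lines (names from the hypothetical database above):

        ALTER DATABASE LogDemo SET RECOVERY FULL;
        GO
        -- Set the growth increment that step A-1 consults
        ALTER DATABASE LogDemo
            MODIFY FILE (NAME = LogDemo_log, FILEGROWTH = 5GB);
        GO
        -- Current size and growth settings for the log file
        SELECT name, size, growth, is_percent_growth
        FROM LogDemo.sys.database_files
        WHERE type_desc = 'LOG';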

    In the above scenarios you will notice that under the SIMPLE model SQL Server continuously truncates the log to make room for more transactions instead of increasing the log file size (even though the FILEGROWTH option is enabled). This is because a single transaction easily fits within the log, and SQL Server's behavior is to release log space before growing the file when the transaction fits. If a transaction were too large to fit within the log, SQL Server would grow the log file to accommodate it; but that is not the case here. Since only about 700 transactions fit within the log before an automatic checkpoint fires, SQL Server would truncate the log approximately 714 times (500,000 ÷ 700 ≈ 714). Each truncation obviously takes some minimal time to complete before processing can continue. Say each truncation were to take about 10 ms; this would equate to roughly 7.14 seconds overall (these times are purely to illustrate the concept with numbers; actual truncation times will vary from system to system).

    Now, the FULL recovery model would instead increase the size of the file; and since in this scenario a single growth increment is adequate to store the remaining transactions, this would result in one file growth. Say that growth took about 4 seconds (again, an illustrative number, not a measurement); this would net roughly 3 seconds of time saved (a "performance increase") by choosing the FULL model.
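    If you want real numbers for that growth cost on your own system, the default trace (enabled out of the box) records Log File Auto Grow events along with their duration; a sketch:

        DECLARE @tracefile nvarchar(260);
        SELECT @tracefile = path FROM sys.traces WHERE is_default = 1;

        SELECT t.DatabaseName, t.FileName, t.StartTime,
               t.Duration / 1000 AS DurationMs   -- Duration is reported in microseconds
        FROM sys.fn_trace_gettable(@tracefile, DEFAULT) AS t
        WHERE t.EventClass = 93;                 -- 93 = Log File Auto Grow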

    As you can see from the above scenario and the sample times, the SIMPLE model would be noticeably slower in terms of log performance. One could say that's an apples-to-oranges comparison, and yes it is; but nonetheless it's a scenario very likely to occur in the real world (especially among newer DBAs).

    Ok, so what about an apples-to-apples comparison? This is much easier to follow. Take the same scenario, except the log is already sized out to the proper size of roughly 5 GB.

    In the SIMPLE model, the log would still get truncated as soon as it hit either the recovery time interval or 70% capacity; this would result in one or two truncations (depending on the recovery interval SQL Server sets automatically, or the user's override, and the speed of the system). I say at least one truncation because at around 350,000 transactions (70% of the log's 500,000-transaction capacity) the log would hit 70% capacity and thus truncate; if that number of transactions isn't reached before the recovery time interval elapses, this could create additional truncations. Each truncation in this scenario would take a bit longer, because instead of clearing roughly 700 records at a time, it would now be clearing roughly 350,000 records (about 500 times as many), or however many accumulated within the allotted recovery time interval.
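    Pre-sizing the log for this comparison is a one-time operation (hypothetical names again; ~5 GB matches the scenario):

        -- Grow the log once, up front, instead of paying for auto-growth mid-workload
        ALTER DATABASE LogDemo
            MODIFY FILE (NAME = LogDemo_log, SIZE = 5GB);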

    In the FULL model, the log wouldn't have anything more to do than record each transaction; as long as the log never hits capacity, it would continue to add records. The most that will happen (and it happens in both the SIMPLE and FULL models) is that a CHECKPOINT is marked in the log when the SQL-set recovery interval (or the user-defined recovery interval) is hit; however, in the FULL model this doesn't cause a truncation, because truncation only happens when the user executes a log backup.
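    That user-executed step is a log backup; until one runs, the log only accumulates records (the backup path below is made up):

        -- In FULL recovery, only a log backup allows SQL Server to reuse (truncate) log space
        BACKUP LOG LogDemo TO DISK = 'C:\SQLBackups\LogDemo_log.trn';

        -- If log space isn't being reused, this column says why (e.g. LOG_BACKUP)
        SELECT name, log_reuse_wait_desc
        FROM sys.databases
        WHERE name = 'LogDemo';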

    <2nd posting will cover proof, or real-world testing>

    Hope this helps,

    James

    ~ Without obstacles, you cannot progress ~
    http://sqln.blogspot.com/