August 26, 2008 at 4:57 am
I have a database created for data-conversion purposes. The recovery model selected is Simple (since the data held is less important), but my transaction log keeps growing. The database data size is 13 GB and the transaction log size is 8 GB. I run DTS packages (without transactions) on the database to bulk insert data from another database, and there are several queries that create indexes, search, and update data. There are no queries using transactions. I run those operations daily, and this has been running for 2 months.
My questions are
1) Is the recovery model I have chosen suited to the scenario described above?
2) Is there a way to limit the transaction log growth?
3) Since I have chosen the simple recovery model, I was expecting a transaction log of less than 2 GB (not an estimate, just a guess). How did it grow bigger? Does this indicate that the simple recovery model is also writing heavily to the transaction log?
Thank you in advance
Eranda
August 26, 2008 at 5:27 am
SQL Server always uses the transaction log, and the recovery model doesn't change how much is logged (except for minimally logged bulk operations). What the recovery model changes is how long the log records are kept.
In simple recovery, inactive log records are discarded when a checkpoint occurs. In bulk logged or full recovery, inactive log records are discarded when a log backup occurs.
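To see which model a database is actually using, and how full its log is, something along these lines works (the database name here is a placeholder, substitute your own):

```sql
-- Current recovery model of a database (name is a placeholder)
SELECT DATABASEPROPERTYEX('YourDatabase', 'Recovery');

-- Log size and percentage of each database's log currently in use
DBCC SQLPERF(LOGSPACE);
```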
All queries use transactions. If you don't specify a transaction, then an automatically created transaction starts when a data modification starts and is auto-committed when that statement completes.
You probably have lots of data modifications happening between checkpoints, requiring lots of log space: large updates, inserts, bulk inserts into tables with indexes, index rebuilds, etc. The log file won't shrink by itself (and it's not a good idea to shrink it regularly).
Do you have transactional or merge replication on that database?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 26, 2008 at 5:59 am
No, there is no replication, but you are correct that there can be lots of updates and index creation. There are many updates, and new indexes are created on tables with more than 1 million records.
Recently I noticed that the SQL Server log (in Enterprise Manager) reported lots of "IO requests taking longer than 15 seconds" messages. Does this also add up?
August 26, 2008 at 6:02 am
Eranda (8/26/2008)
Recently I noticed that the SQL Server log (in Enterprise Manager) reported lots of "IO requests taking longer than 15 seconds" messages. Does this also add up?
That just means you have an IO bottleneck on one of your drives. Which file is that happening on?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 26, 2008 at 6:13 am
It's the MDF file.
Here is the message:
SQL Server has encountered 1713 occurrence(s) of IO requests taking longer than 15 seconds to complete on file [E:\Databases\SQLDATA\epddc_993_DEV_Data.MDF] in database [epddc_993_DEV] (11). The OS file handle is 0x00000CF0. The offset of the latest long IO is: 0x00000052794000
August 26, 2008 at 6:26 am
That's saying that there's high IO latency on the E drive: a large number of IOs that were issued hadn't completed 15 seconds later. SQL Server prefers IOs to take under 50 ms.
What's the drive setup on that server? RAID level? What else is on E?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 26, 2008 at 8:16 am
The E drive is congested, but it has more than 10 GB of space left.
Coming back to my original issue: does IO latency contribute to a large log file?
August 26, 2008 at 8:33 am
IO latency can contribute only if new transactions need to be written and the checkpoint cannot complete to clear space. I'd say no in general, but it could cause larger log files; I doubt they would be disproportionately large.
August 26, 2008 at 8:53 am
Eranda (8/26/2008)
The E drive is congested, but it has more than 10 GB of space left.
The amount of stuff on the drive doesn't influence the speed of reading/writing data.
Coming back to my original issue: does IO latency contribute to a large log file?
No. Latency just indicates you have slow IOs and that your queries may take a little longer than they should because reading and writing to the drive is taking longer than it should.
The size of your log is related (in simple recovery mode) to the amount of transactions that occur and the amount of data that they modify between checkpoint operations.
If you are doing large inserts and updates, it may well be that the log needs to be that size. That's not a problem. If the log needs to be 8 GB, then it needs to be 8 GB.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability