January 24, 2012 at 3:50 am
I have been reading Gail's excellent article which can be found here:
http://www.sqlservercentral.com/articles/Administration/64582/
However, I have run into an issue that I don't fully understand. We have a database in SIMPLE recovery that recently ran out of log space. This database is populated nightly by dropping and recreating various tables, and then creating a clustered index on each table.
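To illustrate, the nightly pattern looks roughly like this (the table, column, and index names are invented for the example):

-- Hypothetical names; this is just the shape of the nightly rebuild
IF OBJECT_ID('dbo.SalesStaging', 'U') IS NOT NULL
    DROP TABLE dbo.SalesStaging;

-- Repopulate the table from the source
SELECT *
INTO dbo.SalesStaging
FROM SourceDB.dbo.Sales;

-- Build the clustered index; a large build can keep a lot of
-- log active until it finishes, even in SIMPLE recovery
CREATE CLUSTERED INDEX CIX_SalesStaging_SaleDate
    ON dbo.SalesStaging (SaleDate);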
The drive the log file resides on now has 11GB free after the log was shrunk (I know), and the database is 6GB. (Other log files on the drive were also shrunk to free up that space.)
I know which transactions are used to populate the database, but how can I translate those transactions into the amount of log space they will use, so that I can size the log correctly?
Also, is it worth putting a CHECKPOINT at the beginning of each of these transactions? My understanding is that SQL Server should manage this itself; however, I have read of occasions where it has been needed.
Thanks
January 24, 2012 at 9:29 am
...and then creating a clustered index on each table...
How large is the clustered index? A fair measure of how large your log file needs to be is the largest index that has to be rebuilt: if you have an 11GB LDF file and your largest clustered index is 16GB (for example), then you'll run out of space, so size the LDF file accordingly.
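If it helps, here is a quick way to find the largest clustered indexes (a sketch using sys.dm_db_partition_stats; run it in the database in question):

-- Top clustered indexes by size, in MB (8 KB pages)
SELECT TOP (5)
    OBJECT_NAME(i.object_id) AS table_name,
    i.name AS index_name,
    SUM(s.used_page_count) * 8 / 1024.0 AS size_mb
FROM sys.dm_db_partition_stats AS s
JOIN sys.indexes AS i
    ON i.object_id = s.object_id
   AND i.index_id = s.index_id
WHERE i.index_id = 1          -- clustered indexes only
GROUP BY i.object_id, i.name
ORDER BY size_mb DESC;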
Others with more experience may say otherwise (probably for a good reason, as I'm no expert), but I believe issuing CHECKPOINT manually while a database is in SIMPLE recovery is unnecessary. SQL Server seems to do a decent job of running checkpoints automatically as needed.
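And if you want to see how much log the nightly load actually uses, a minimal sketch (checking between steps gives a better picture of the peak, since the percentage can drop again once a step completes):

-- Before the load: note the "Log Space Used (%)" row for your database
DBCC SQLPERF(LOGSPACE);

-- ... run the nightly drop / reload / CREATE CLUSTERED INDEX steps ...

-- After (or between) steps: compare against the earlier reading
DBCC SQLPERF(LOGSPACE);

-- A manual checkpoint, if you ever do want one, is just:
CHECKPOINT;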
______________________________________________________________________________
Never argue with an idiot; they'll drag you down to their level and beat you with experience.