June 22, 2007 at 2:17 pm
Hello,
We have a SQL Server bulk-insert job that runs once a month. The recovery model on the database was set to Simple and later changed to Bulk-Logged. Every time the job runs, the log file grows to roughly 25 GB. Is there a way to keep the log file size to a minimum? We have tried:
1. The Simple recovery model, which didn't help.
2. Switching to Bulk-Logged, which made no difference.
3. Restricting the log file size, but that predictably resulted in an error stating that the "log is full and transactions cannot be inserted".
Any suggestions? Currently we are running the following statements to shrink the log file before the job starts:
-- Truncate the inactive portion of the log without backing it up (breaks the log backup chain)
BACKUP LOG sqlp_PVM WITH TRUNCATE_ONLY
-- Release unused space at the end of the log file back to the OS
DBCC SHRINKFILE (PVM_Log, TRUNCATEONLY)
We're afraid that if the log keeps growing at this rate we may have to increase the disk quotas, but there must be a more practical solution before we consider that option.
Any suggestions would be much appreciated.
June 22, 2007 at 4:00 pm
This is normal behaviour for a bulk-insert process. Rather than expanding the disk, consider these options:
1. Create a second log file on a separate disk.
2. Simple recovery is not recommended here. Keep the database in Full recovery while your ordinary SQL transactions are running, and take transaction log backups at the shortest interval your transactions permit.
3. Just before the bulk insert starts, switch the recovery model to Bulk-Logged; when it finishes, switch back to Full. Log backups should run on the same schedule under either recovery model (excluding Simple, of course). See the sketch below.
4. Other aids for bulk-load workloads include disabling indexes that are not used by the process and disabling triggers on the tables participating in the bulk insert; these will also shorten your log.
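A minimal T-SQL sketch of the recovery-model switch described above, assuming the database name from the original post (sqlp_PVM); the backup paths, index, trigger, and table names are placeholders for illustration, not real objects:
-- Take a log backup before the switch so the backup chain is current
BACKUP LOG sqlp_PVM TO DISK = N'D:\Backups\sqlp_PVM_log.trn'
-- Switch to Bulk-Logged just before the monthly load
ALTER DATABASE sqlp_PVM SET RECOVERY BULK_LOGGED
-- Optionally disable nonclustered indexes and triggers the load doesn't need
-- (index, trigger, and table names here are hypothetical)
ALTER INDEX IX_LoadTable_Lookup ON dbo.LoadTable DISABLE
DISABLE TRIGGER trg_LoadTable_Audit ON dbo.LoadTable
-- ... run the bulk-insert job here ...
-- Re-enable what was disabled (a disabled index must be rebuilt)
ALTER INDEX IX_LoadTable_Lookup ON dbo.LoadTable REBUILD
ENABLE TRIGGER trg_LoadTable_Audit ON dbo.LoadTable
-- Switch back to Full and take a log backup immediately,
-- so point-in-time recovery is possible again
ALTER DATABASE sqlp_PVM SET RECOVERY FULL
BACKUP LOG sqlp_PVM TO DISK = N'D:\Backups\sqlp_PVM_log_after.trn'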
Good luck.
June 27, 2007 at 1:55 pm
You might get better feedback if you post in one of the SQL subfolders instead of the Notification Services area.
June 29, 2007 at 7:02 am
Try BULK INSERT with the BATCHSIZE argument set to a modest number of rows. Each batch is then committed as its own transaction, so the log space used by a completed batch can be reused (once it has been backed up, or checkpointed under Simple recovery) instead of the entire load being logged as one huge transaction, which is what happens otherwise. ROWS_PER_BATCH is related but is only a hint to the optimizer about the total row count. A sketch follows.
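A minimal sketch of the batched load, assuming a hypothetical target table and source-file path (format options such as FIELDTERMINATOR would depend on the actual file):
-- Commit every 50,000 rows so the job is not one giant transaction;
-- TABLOCK helps the load qualify for minimal logging under Bulk-Logged.
BULK INSERT dbo.MonthlyLoad
FROM 'D:\Data\monthly_load.dat'
WITH (
    BATCHSIZE = 50000,
    TABLOCK
)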