January 17, 2008 at 4:15 am
Hi,
I've run through the forums looking at the different ways of managing transaction logs. However, so far, I don't see an answer to this scenario:-
I have:-
. To load about 50GB of data into a SQL Server database. The target database has about 300 tables.
. To do this, I have 4 jobs which pull the data in - so you can reckon on each 'job' pulling in about 13GB of data. (I cannot make it less than four jobs - they are 'what I've been given to do the job').
. So far, I've managed to get 24GB of data loaded - but the transaction log has hit 30GB and I've run out of space. (Space is VERY tight.)
. No interest in keeping the transaction logs. If any job fails - I go back to the start.
. To get it to run in the shortest possible time. (This is a 'cut over').
. To cut down on the total execution time. Ideally I'd run with no transaction logs - but I understand that's not an option in SQL Server 2005.
So, essentially, I want to somehow use the least overhead for the transaction log, stop it growing and/or shrink it after each of the four jobs.
I need to do any db maintenance from sqlcmd - not SSMS.
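For reference, the per-job shrink step I have in mind would be scripted from sqlcmd roughly like this (the database and log file names below are placeholders, not my real ones):

```sql
-- Placeholder names: substitute your own database and log file logical name.
USE MyLoadDB;
GO
-- Flush dirty pages so the inactive part of the log can be reused.
CHECKPOINT;
GO
-- Shrink the log file back down (target size in MB).
DBCC SHRINKFILE (MyLoadDB_log, 1024);
GO
```

This would be saved to a file and run between jobs with something like `sqlcmd -S myserver -i shrink_log.sql`.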
Any ideas gratefully appreciated.
Thanks,
Pete
January 17, 2008 at 5:13 am
How are you loading the data? If you're using a bulk load (bcp, Bulk insert) then put the DB into bulk-logged recovery.
In that mode, bulk operations are minimally logged and have very little impact on the transaction log.
Otherwise, put the DB into simple recovery, and run a checkpoint from time to time during your load. In simple recovery mode, a checkpoint truncates the inactive portion of the transaction log, so that space can be reused rather than the log growing.
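Put together, the two options above could be issued from sqlcmd along these lines (the database name is a placeholder - use your own):

```sql
-- Option 1: bulk-logged recovery, for bcp / BULK INSERT loads.
-- Bulk operations are then minimally logged.
ALTER DATABASE MyLoadDB SET RECOVERY BULK_LOGGED;
GO

-- Option 2: simple recovery, with periodic checkpoints during the load
-- so the inactive log space is made reusable instead of the log growing.
ALTER DATABASE MyLoadDB SET RECOVERY SIMPLE;
GO
CHECKPOINT;  -- run occasionally between batches
GO
```

Either way, switch the recovery model back to what you normally run once the cut-over load is finished.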
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability