nonlogged insert

  • Hi,

    I have a problem that may be very simple...

    Is there a way to do a "typical" INSERT from one db to another on the same SQL Server without logging the operation?

    I'm thinking mostly of the case where I have a table with 5 million rows: the transaction log gets full and no more disk space is available.

    I prefer not to use SELECT INTO or BULK COPY.

    Is there a way to "shut down" SQL Server's log?

    Does DTS use the transaction log for transfer operations?

    Thanks in advance.

    Regards,

    Sica

    Not sure offhand... what about truncating the log? I.e., do the insert in a loop, a record at a time, and truncate the log every iteration through the loop. Not very efficient, but it's the first thing that comes to mind.
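
    A rough sketch of that idea, batched up instead of one row at a time (the database, table, and column names here are all made up; it assumes an increasing integer key Id):

    WHILE 1 = 1
    BEGIN
        -- copy the next batch of rows not yet in the target
        INSERT INTO TargetDb.dbo.BigTable (Id, Col1)
        SELECT TOP 10000 s.Id, s.Col1
        FROM SourceDb.dbo.BigTable s
        WHERE s.Id > (SELECT ISNULL(MAX(Id), 0) FROM TargetDb.dbo.BigTable)
        ORDER BY s.Id

        IF @@ROWCOUNT = 0 BREAK  -- nothing left to copy

        -- throw away the inactive part of the log on each pass
        BACKUP LOG TargetDb WITH TRUNCATE_ONLY
    END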

    Maybe try setting the recovery option on the db to BULK_LOGGED. This minimizes the amount of space used when writing to the log file for 'select into'.
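
    In SQL 2000 that is just the following (MyDb is a placeholder name):

    ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED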

    Anyone else?

    Logging is based on what kind of mode your db is in. If you set it to simple (truncate on checkpoint), that eliminates the problem of the log filling your disk, as long as it is not one big transaction.

    Once you decide on your logging, then it's just a matter of preference. DTS is probably the easiest to use, BCP the fastest. If you change your log setting to do the import, be sure to change it back when you're done, then immediately run a full backup.
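
    A minimal sketch of that sequence, assuming SQL 2000 syntax (MyDb and the backup path are placeholders):

    -- minimize logging for the load
    ALTER DATABASE MyDb SET RECOVERY SIMPLE

    -- ... run the import here ...

    -- put the setting back and re-establish the backup chain
    ALTER DATABASE MyDb SET RECOVERY FULL
    BACKUP DATABASE MyDb TO DISK = 'D:\backups\MyDb_full.bak'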

    Slightly off topic: for large bulk loads like this you may want to drop your indexes before loading and recreate them afterward.
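
    For example (the index, table, and column names are hypothetical; SQL 2000 DROP INDEX syntax):

    -- before the load
    DROP INDEX BigTable.IX_BigTable_Col1

    -- ... load the data ...

    -- after the load
    CREATE INDEX IX_BigTable_Col1 ON dbo.BigTable (Col1)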

    Andy

    Please excuse my ignorance, but I'm not sure if I get it, so please correct me if I'm wrong.

    I have truncate on checkpoint enabled, but as you say this is one big transaction, so it doesn't help too much. Is this what you mean by "simple" db mode?

    Do DTS packages use the transaction log in this case?

    I will try dropping all indexes before loading, but I'm more interested in disabling logging whenever I want, if possible.

    Thanks.

    Sica

    You cannot disable logging. Even "on chkpt" logs the operation. You don't want to disable this; it is your safety net.

    better to spend some $$ on another disk for logs.

    Steve Jones

    steve@dkranch.net

    In simple mode/truncate on checkpoint, what happens is SQL writes the action to the log, then applies the change to the db, then removes the action from the log. This keeps the log size small, but you don't have the ability to do log backups and their associated point-in-time recovery.

    When I say transaction I mean SQL transaction, like this:

    BEGIN TRAN
    -- insert 5 million rows here
    COMMIT TRAN

    Instead, either use no explicit SQL transaction or keep each one small, maybe 1000 rows, as in the sketch below.
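
    Something along these lines (a sketch only; the names are made up, and it assumes simple recovery mode and an increasing integer key Id):

    WHILE 1 = 1
    BEGIN
        BEGIN TRAN

        INSERT INTO TargetDb.dbo.BigTable (Id, Col1)
        SELECT TOP 1000 s.Id, s.Col1
        FROM SourceDb.dbo.BigTable s
        WHERE s.Id > (SELECT ISNULL(MAX(Id), 0) FROM TargetDb.dbo.BigTable)
        ORDER BY s.Id

        IF @@ROWCOUNT = 0
        BEGIN
            COMMIT TRAN
            BREAK
        END

        COMMIT TRAN
        CHECKPOINT  -- in simple mode the committed batches can now be cleared from the log
    END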

    Steve is right, bypassing logging is NOT something you would want to do. With logging you can walk over to the server while those 5 million rows are loading and physically pull the server plug - when you restart and recovery is complete, you'll have either your original db or the original db plus 5 million rows (depending on exactly what was occurring when the power went off), but nothing in between.

    Andy

    What is the best way to transfer 4 million records from one db to another?

    BULK COPY or DTS?

    Thanks for your time.

    I mentioned this earlier: DTS is easiest, BCP the fastest. I don't think you'll be disappointed with DTS.

    Andy

  • Hello,

    My log file is giving me a hard time by filling up before my insert is complete. I am trying to do an INSERT INTO of about 1.2 million records. I have plenty of disk space and was wondering if I can tune the log file to take as much as it needs. The changes I make seem to have no effect...

    Thanks,

    Dan

    If you have some key in your data that you can use to segment, then do it in pieces; that way you can truncate the log after each copy. Also make sure there are no indexes and auto statistics are off, as they will cause some logging and slow down the transfer. After you are done, put those items back.
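
    For the statistics part, something like this (MyDb and BigTable are placeholder names; SQL 2000 syntax):

    -- turn off automatic statistics on the table being loaded
    EXEC sp_autostats 'dbo.BigTable', 'OFF'

    -- or database-wide:
    ALTER DATABASE MyDb SET AUTO_CREATE_STATISTICS OFF
    ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS OFF

    -- ... load the data, then turn them back on
    EXEC sp_autostats 'dbo.BigTable', 'ON'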

    This should do it. Or, if the new database is the same as the old and you just need to move the data, you could do a backup on the original server and restore it to the new one, which would be the easiest of all, with just a little cleanup.
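
    Roughly like this (the paths and logical file names are placeholders):

    -- on the source server
    BACKUP DATABASE SourceDb TO DISK = 'D:\backups\SourceDb.bak'

    -- on the destination server
    RESTORE DATABASE SourceDb
    FROM DISK = 'D:\backups\SourceDb.bak'
    WITH MOVE 'SourceDb_Data' TO 'D:\data\SourceDb.mdf',
         MOVE 'SourceDb_Log' TO 'E:\logs\SourceDb.ldf'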

    "Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)

  • Thanks for the reply.

    I have done the insert with a quantity of 600,000 records at a pop with a shrink in between, and it worked fine, but... if I have the disk space, can't I somehow allow the log file to just grow, run 1 large insert, then shrink?

    Thanks again,

    Dan

    If you have the space, then yes, but I would give the log file a large growth increment and starting size, then force it down after you're done. There is a script on the site that can force the log to shrink, and in SQL 2000 you can shrink it easily in EM below the original size.
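
    For example (the logical log file name and the sizes are made up):

    -- give the log room and a big growth increment before the load
    ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_Log, SIZE = 2000MB, FILEGROWTH = 500MB)

    -- ... run the one large insert ...

    -- force the log back down afterward
    BACKUP LOG MyDb WITH TRUNCATE_ONLY
    DBCC SHRINKFILE (MyDb_Log, 100)  -- target size in MB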

    "Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)
