Script Stops with No Error on 250,000 INSERT... VALUES Statements

  • We send our clients a single script of table updates made up of INSERT INTO TableName VALUES (v1,v2, etc) statements. We recently needed to insert 250,000 rows into a table this way, and on some less powerful servers the script simply stops with no error. When the script is broken into chunks of 50,000 statements, each chunk runs fine on its own. We tried putting GO commands every 5,000 statements, but the full script still stops about 70,000 rows in. Our assumption is that it is a memory problem, since the script does not throw an error even when it fails.

    Since the GO commands aren't working to flush the server's memory, I'm thinking about experimenting with WAITFOR DELAY to simulate the effect of running individual scripts (a sketch of that idea follows the example below). Before I do that, though, I'd like to see if anyone knows why the GO commands don't keep the script from running out of memory, and whether there are other ways to avoid the problem on less powerful servers while still using only one script file.

    To give an example of the script file as it is now:

    INSERT INTO TableName VALUES (v1,v2, etc)

    --REPEAT 5,000 TIMES

    GO

    INSERT INTO TableName VALUES (v1,v2, etc)

    --REPEAT 5,000 TIMES

    GO

    --etc. to 250,000 INSERT INTO VALUES STATEMENTS.
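
    The WAITFOR DELAY experiment I have in mind would just pause between batches; the five-second delay and the SET NOCOUNT ON are guesses on my part rather than anything we have tested:

    SET NOCOUNT ON --suppress the per-statement "1 row(s) affected" messages
    GO
    INSERT INTO TableName VALUES (v1,v2, etc)
    --REPEAT 5,000 TIMES
    GO
    WAITFOR DELAY '00:00:05' --pause before starting the next batch
    GO
    INSERT INTO TableName VALUES (v1,v2, etc)
    --REPEAT 5,000 TIMES
    GO
    --etc.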

  • Have you considered running each batch of 5,000 (or 50,000) rows within a BEGIN TRANSACTION and COMMIT TRANSACTION? A rough sketch is at the end of this post.

    What is your backup mode/schedule for this database?

    If everything seems to be going well, you have obviously overlooked something.

    Ron

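    Purely as an illustration, something along these lines (5,000 rows per transaction is just a starting point, and the table and values are placeholders copied from your example):

    BEGIN TRANSACTION
    INSERT INTO TableName VALUES (v1,v2, etc)
    --REPEAT 5,000 TIMES
    COMMIT TRANSACTION
    GO
    BEGIN TRANSACTION
    INSERT INTO TableName VALUES (v1,v2, etc)
    --REPEAT 5,000 TIMES
    COMMIT TRANSACTION
    GO
    --etc.

    Committing each batch explicitly keeps any one transaction from holding the log for the whole 250,000 rows.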

  • If you think the issue is related to memory, you might want to flush the data cache (DBCC DROPCLEANBUFFERS) before you start your insert. Of course, this may slow SQL Server down, since frequently accessed data will be pulled from disk rather than from cache, but it may help with your issue. A minimal example follows.
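
    For illustration, the usual pairing is a CHECKPOINT first so dirty pages are written out before the clean buffers are dropped (note this needs sysadmin rights):

    CHECKPOINT                --write dirty pages to disk first
    DBCC DROPCLEANBUFFERS     --then remove clean pages from the buffer pool
    GO
    --run the INSERT batches after this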

  • Try using BULK INSERT instead (or bcp.exe) and play with the BATCHSIZE setting, along the lines of the sketch below.
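
    A rough sketch, assuming the rows could be shipped as a delimited data file instead of individual INSERT statements (the file path, delimiters, and batch size below are placeholders):

    BULK INSERT TableName
    FROM 'C:\updates\TableName.dat'   --placeholder path to the data file
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n',
        BATCHSIZE = 5000,             --commit every 5,000 rows
        TABLOCK
    )

    --or from the command line with bcp.exe, where -b sets the batch size:
    --bcp YourDatabase.dbo.TableName in C:\updates\TableName.dat -c -t, -b 5000 -T -S YourServer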

  • You may want to look at the disk space on those server boxes, particularly the transaction logs. INSERT INTO writes to the log for the session. Afterwards the log is emptied when your DB is in Simple recovery mode, but you still need to shrink the log file (a quick way to check and shrink it is shown after the quote below).

    http://msdn.microsoft.com/en-us/library/ms174335.aspx

    "The INSERT statement is always fully logged except when using the OPENROWSET function with the BULK keyword or when using INSERT INTO <target_table> SELECT <columns> FROM <source_table>. These operations can be minimally logged. For more information, see the section "Best Practices for Bulk Loading Data" earlier in this topic."
