Transactions in stored procedures

  • I've got a sproc that performs insert and delete statements, with the total row count being anything up to 1.5 million.  The basic sproc is:

    BEGIN TRANSACTION

    DELETE FROM TargetTable WHERE <criteria>

    IF @@ERROR <> 0 BEGIN ROLLBACK TRANSACTION; RETURN -1; END

    INSERT INTO TargetTable
    SELECT * FROM SourceTable

    IF @@ERROR <> 0 BEGIN ROLLBACK TRANSACTION; RETURN -1; END

    COMMIT TRANSACTION

    The problem is that sometimes the sproc takes a very long time to run, and sometimes it takes so long that I have to stop the process.  However, if I remove the Transaction commands and try it, the sproc always runs in its 'usual' timeframe.

    Is this common with sprocs and transactions?  Could it be a problem caused by processing up to 1.5 million rows in a transaction?

    I am really interested in knowing why the behaviour is sometimes so different between a Transactioned and non-Transactioned statement.

  • How many records are typically being DELETEd?  IF most THEN write the good records to a TEMP table, TRUNCATE the REAL table, and reload - roughly along the lines of the sketch below.
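    A minimal sketch of that idea (the table names and the filter are only placeholders, not taken from your sproc):

    SELECT *
    INTO   #KeepRows                  -- rows that should survive
    FROM   TargetTable
    WHERE  NOT (<criteria>)           -- i.e. everything you would NOT have deleted

    TRUNCATE TABLE TargetTable        -- minimally logged, far cheaper than a huge DELETE
                                      -- (needs no foreign keys referencing the table)
    INSERT INTO TargetTable
    SELECT * FROM #KeepRows

    DROP TABLE #KeepRows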

    A transaction keeps track of everything and allows an "UNDO", whereas a non-transactional run, if an error is found, will just stop and NOT "UNDO".



    Good Hunting!

    AJ Ahrens


    webmaster@kritter.net

  • Depending on the load (whether it's a net-change load or a full load) I can be deleting anything from 1 to the full 1.5 million.  Good tip about Truncate, especially with the full loads.

    So, it could be a problem with the Transaction Log filling up and therefore the sproc can't complete?
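    I guess one way to check would be to watch the log while it runs - the standard DBCC SQLPERF command reports how full each database's log is:

    DBCC SQLPERF(LOGSPACE)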

  • It could be that, or a timeout, or...., or...



    Good Hunting!

    AJ Ahrens


    webmaster@kritter.net

  • Anyone else have trouble with Transactions like this?  Surely, SQL can handle Transactions with this many inserts/deletes?

    Maybe it can't and maybe a re-design is required.  However, I'd rather find a way to make it run as is, but inside a Transaction.

  • If you don't care about recovery, think about using SELECT ... INTO instead of INSERT.
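    Roughly like this (names are only illustrative - SELECT ... INTO creates the destination table itself, so it has to be a new table):

    SELECT *
    INTO   NewTargetTable
    FROM   SourceTable

    -- then swap it in place of the old table, e.g. with sp_rename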

  • Recovery as in rolling back a transaction?  Is 'Select...Into' a quicker method than 'Insert...Into'?


  • Select into can cause system table locks in tempdb while the data is inserted; this can literally destroy the application. I'd stay far away from this.


  • I'd hope that this sort of activity would be carried out when no one is using the system, so any locks taken out as a result wouldn't matter. Deleting a large amount of data will cause enough locking problems of its own anyway.

    Why would system table locks in tempdb be a problem if you are selecting into another database?

    Select into is much faster than insert because it is a minimally logged operation (assuming you have the simple or bulk-logged recovery model).
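    If you want to confirm which recovery model the database is using before relying on that, DATABASEPROPERTYEX will tell you (the database name is just a placeholder):

    SELECT DATABASEPROPERTYEX('MyDatabase', 'Recovery')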

  • If any of those tempdb system tables are locked (sysobjects, syscolumns, sysindexes), any code that needs to create a temp table or even a table variable will not be able to execute, because no entry can be made in those tables.
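    You can see why: every temp table gets a row in tempdb's system tables the moment it is created (quick demo, the table name is arbitrary):

    CREATE TABLE #Demo (id INT)

    SELECT name FROM tempdb..sysobjects WHERE name LIKE '#Demo%'   -- one row appears here

    DROP TABLE #Demo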

  • Thanks for the tips, guys.

    The job will (generally) be running at a time when no users are on the system.

    I'll look into the 'Select...Into' approach.

    🙂


  • Don't come running back here when it fails.

  • Hey, I'm not ignoring what you are saying, but the least I can do is try it out, right?

  • I suggest you immediately familiarize yourself with sp_who2, sp_lock and DBCC INPUTBUFFER(spid); you're gonna need them.
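    All three are built in, so there is nothing to install - just run them from a second connection while the sproc is blocked (the spid value 57 below is only an example):

    EXEC sp_who2                -- who is connected, and which spid is blocking which

    EXEC sp_lock                -- every lock currently held
    EXEC sp_lock 57             -- ...or just the locks held by one spid

    DBCC INPUTBUFFER(57)        -- the last statement that spid sent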

  • Ok, stop trying to scare me!

