Locking tables with SSIS AND/OR SQL Server

  • Hey all,

    I have an SSIS package that checks a series of tables to see whether they are up to date. If they are, it does nothing; if they are not, it deletes the records in those tables for the given date and then reloads them.

    This package is launched from a stored proc and runs daily in the morning. No problems there.

    The problem is that I also want to make this stored proc available to run from the .NET application my users work in, and that application can be used by multiple people simultaneously.

    I'd imagine that if multiple people ran the package at the same time, the overlapping deletes and reloads could produce strange results.

    So what I'd like is for the stored proc, or the SSIS package, to lock all the tables being accessed when it starts, perform its operations, and then release the locks.

    I just don't know how I'd go about doing that. If I wrapped the DTEXEC command that my stored proc uses to launch the SSIS package in a BEGIN TRANSACTION - END TRANSACTION block, would that accomplish my objective?
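
    For context, the launching proc looks roughly like this (a minimal sketch only; I'm assuming it shells out via xp_cmdshell, and the proc name and package path here are made up):

        -- Hypothetical sketch of the launching proc. Requires xp_cmdshell
        -- to be enabled on the server; the path and names are placeholders.
        CREATE PROCEDURE dbo.RefreshDailyTables
        AS
        BEGIN
            DECLARE @cmd VARCHAR(1000);
            SET @cmd = 'dtexec /F "D:\Packages\RefreshDailyTables.dtsx"';
            EXEC master..xp_cmdshell @cmd;
        END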

  • kramaswamy (12/16/2011)


    I just don't know how I'd go about doing that. If I wrapped the DTEXEC command that my stored proc uses to launch the SSIS package in a BEGIN TRANSACTION - END TRANSACTION block, would that accomplish my objective?

    Almost 🙂

    You need to set the RetainSameConnection property of the connection manager to TRUE.

    Furthermore, the second SQL statement should be COMMIT; END TRANSACTION is not valid T-SQL.

    Also make sure that the current isolation level supports your requirements.
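
    To make that concrete, here is a minimal sketch of the pattern (assuming one connection manager shared by every task, with RetainSameConnection = TRUE so all statements run on the same connection):

        -- Execute SQL Task 1 (first task in the control flow):
        BEGIN TRANSACTION;

        -- ... the package's delete/reload tasks run here, all using the
        -- same connection manager, so they share this open transaction ...

        -- Execute SQL Task 2 (last task, wired on success):
        COMMIT TRANSACTION;

        -- Execute SQL Task in an OnError event handler:
        ROLLBACK TRANSACTION;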

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

  • What do you mean by the current isolation level supporting my requirements?

  • kramaswamy (12/19/2011)


    What do you mean by the current isolation level supporting my requirements?

    http://msdn.microsoft.com/en-us/library/ms173763.aspx

    For example, read committed (the SQL Server default) prevents dirty reads, but you can still get non-repeatable reads and phantom reads.

    It's worth doing some research on isolation levels: find out what the default is on your system and whether it supports the business needs. Some companies are OK with dirty reads; most of them aren't.
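
    As an illustration, raising the level around the delete-and-reload keeps phantoms out of that window (a sketch only; the table and variable names are made up, and serializable may be stricter than your business needs require):

        -- Read committed is the default; serializable additionally blocks
        -- phantom reads by holding range locks until the commit.
        SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

        BEGIN TRANSACTION;

            DECLARE @LoadDate DATE = CAST(GETDATE() AS DATE);

            -- dbo.DailyStats and dbo.SourceTable are hypothetical names
            DELETE FROM dbo.DailyStats
            WHERE LoadDate = @LoadDate;

            INSERT INTO dbo.DailyStats (LoadDate, RowCnt)
            SELECT @LoadDate, COUNT(*)
            FROM dbo.SourceTable;

        COMMIT TRANSACTION;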

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

    I once had a stored procedure that updated plenty of tables and would have created a mess if executed concurrently by different users or processes.

    So I wrapped the entire procedure in a transaction (BEGIN TRANSACTION at the top, COMMIT at the bottom on success), and made its first action an update to a single known row in a table created specifically to indicate that the process was in use. Any subsequent execution of the procedure would then block until the first one committed. My theory was that the transaction held a lock on that one record until the COMMIT, so every execution of the procedure had to wait for access to that single row.

    It seemed to work for me in this case, but I wonder if there are any holes in that strategy across the various isolation levels, beyond the obvious point that it's not a recipe for high concurrency?
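
    In T-SQL terms the pattern was essentially this (a sketch; the table and column names are approximations of what I actually used):

        -- One-row table created up front, used purely as a mutex:
        -- CREATE TABLE dbo.ProcessLock (Id INT PRIMARY KEY, LastRun DATETIME);
        -- INSERT INTO dbo.ProcessLock (Id, LastRun) VALUES (1, GETDATE());

        BEGIN TRANSACTION;

            -- The exclusive row lock taken here is held until COMMIT,
            -- so any concurrent execution blocks on this statement.
            UPDATE dbo.ProcessLock
            SET LastRun = GETDATE()
            WHERE Id = 1;

            -- ... the rest of the procedure's updates run here ...

        COMMIT TRANSACTION;

    (For what it's worth, SQL Server's built-in sp_getapplock can provide the same kind of serialization without a dedicated table.)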
