• ErikMN - Friday, January 13, 2017 5:18 PM

    I'm hoping someone can help me with this problem I've encountered.

    Here's my issue.  I've got a set of tables:
    Log (Heap)
    Log2015 (Heap)
    Log2016 (Heap)
    Log2017 (Clustered Index) <- Written to every 5 mins.
    ...

    There were a ton of bad rows inserted into the heap tables above (Log, Log2015, Log2016), which have since been deleted because they were misleading and incorrect. However, since Log, Log2015, and Log2016 are heaps, deleting the records doesn't free up any space inside the databases' data files. We would very much like to reclaim that space (several TB across a few hundred databases), and since these tables are no longer inserted into and are rarely queried even during business hours, I created a job that goes through and ultimately runs:

    ALTER TABLE <tableName> REBUILD; on each of the tables that need it.
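    For reference, the job boils down to something like the sketch below (the cursor and the hard-coded table list are simplified for this post; the real job works out which databases and tables actually need it):

    -- Simplified sketch of what the job runs; table names are placeholders.
    DECLARE @tableName sysname,
            @sql       nvarchar(max);

    DECLARE tableCursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT name
        FROM   sys.tables
        WHERE  name IN (N'Log', N'Log2015', N'Log2016');

    OPEN tableCursor;
    FETCH NEXT FROM tableCursor INTO @tableName;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Each offline REBUILD holds a Sch-M lock on its table until it finishes.
        SET @sql = N'ALTER TABLE dbo.' + QUOTENAME(@tableName) + N' REBUILD;';
        EXEC sys.sp_executesql @sql;

        FETCH NEXT FROM tableCursor INTO @tableName;
    END;

    CLOSE tableCursor;
    DEALLOCATE tableCursor;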

    The problem I've run into is that, while running this in a busy QA environment, I noticed that the stored procedure that writes records to the live table (Log2017) was being blocked by my ALTER TABLE job.  Looking at that stored proc, I see the following (it does a single-row insert of the parameter values):


    IF YEAR(@LogDate) < 2015
        INSERT INTO dbo.Log ( <columns> )
        VALUES ( <@params for the proc> )
    ELSE IF YEAR(@LogDate) = 2015
        INSERT INTO dbo.Log2015 ( <columns> )
        VALUES ( <@params for the proc> )
    ELSE IF YEAR(@LogDate) = 2016
        INSERT INTO dbo.Log2016 ( <columns> )
        VALUES ( <@params for the proc> )
    ELSE IF YEAR(@LogDate) = 2017
        INSERT INTO dbo.Log2017 ( <columns> )
        VALUES ( <@params for the proc> )
    ELSE
        INSERT INTO dbo.Log ( <columns> )
        VALUES ( <@params for the proc> )

    So what I think is happening is that the ALTER TABLE ... REBUILD takes a schema modification (Sch-M) lock, so the entire table is locked for the duration of the rebuild, and anything that needs even a schema stability lock on it has to wait (the blocked sessions show LCK_M_SCH_S waits).  I was prepared for queries against the Log, Log2015, and Log2016 tables to be blocked (especially since they are exceedingly rare), but not for the inserts into Log2017 to be blocked.  Given that I have some tables of 200+ GB that need to be rebuilt, I can't have this INSERT blocked for the entire (2-3 hour) duration of that process.
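    In case it's useful, this is roughly how I've been confirming the blocking while a rebuild runs; it just pairs waiting requests with the lock they're queued behind, nothing specific to my setup:

    -- Run in the same database while the rebuild is going.  The blocked
    -- INSERT shows up waiting on a schema stability (Sch-S) lock on the
    -- table being rebuilt.
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,                                           -- e.g. LCK_M_SCH_S
           OBJECT_NAME(l.resource_associated_entity_id) AS blocked_on_table
    FROM   sys.dm_exec_requests AS r
    JOIN   sys.dm_tran_locks    AS l
             ON  l.request_session_id = r.session_id
             AND l.request_status     = 'WAIT'
    WHERE  l.resource_type = 'OBJECT'
      AND  r.wait_type LIKE N'LCK_M_SCH%';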

    The thing I don't understand is why the above code is getting blocked at all.  Wouldn't SQL only need to take a lock on Log, Log2015, or Log2016 if the corresponding IF condition were true?  Or does it try to take a lock on every table referenced in the batch as it goes through?

    Also, are there any other ideas for preventing the proc from getting blocked without changing the application itself?  Would reordering it so that the 2017 check comes first do the trick?  Or would it need to be changed to dynamic SQL, or the initial stored procedure changed to one that calls a different proc for each year?  Thanks in advance for any help you can give me.
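    To be concrete about the dynamic SQL idea, I'm picturing something along these lines; the @p1/@p2 parameters and @SomeValue are made-up stand-ins, and the column list is abbreviated the same way as above:

    -- Rough sketch only: build and run just the one INSERT that applies,
    -- so the other Log tables are never referenced (or locked) at compile
    -- time.  Assumes this lives inside the existing proc, so @LogDate and
    -- the other parameters (@SomeValue here is a stand-in) are available.
    DECLARE @targetTable sysname =
        CASE
            WHEN YEAR(@LogDate) = 2017 THEN N'Log2017'
            WHEN YEAR(@LogDate) = 2016 THEN N'Log2016'
            WHEN YEAR(@LogDate) = 2015 THEN N'Log2015'
            ELSE N'Log'
        END;

    DECLARE @sql nvarchar(max) =
        N'INSERT INTO dbo.' + QUOTENAME(@targetTable)
      + N' ( <columns> ) VALUES ( @p1, @p2 /* , ... */ );';

    EXEC sys.sp_executesql @sql,
         N'@p1 int, @p2 datetime',
         @p1 = @SomeValue, @p2 = @LogDate;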

    My guess is that since it's all one statement (IFs and ELSEs), it has to compile all the relevant text first, and that might mean column checks to make sure your INSERT statements are valid. Remember, the first pass of the T-SQL translation checks the validity of the statement syntax and the referenced database objects; the second (and subsequent passes?) are there to actually do the execution. I think you're getting hung up on that first pass because of the schema locks.

    Maybe do the inserts themselves in called stored procedures? That way, if the parameter doesn't point at the table that's getting the schema lock, it won't call that particular stored procedure and cause a compile that blocks.
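    Something like this is what I have in mind (the LogInsert proc names are made up, and the columns are abbreviated the same way as in your post):

    -- Each year's INSERT lives in its own proc, so it only gets compiled
    -- (and only needs its table's schema lock) when it's actually executed.
    CREATE PROCEDURE dbo.LogInsert_2017    -- hypothetical name; one per year plus a catch-all
        @LogDate datetime                  -- plus the rest of the <@params for the proc>
    AS
    BEGIN
        INSERT INTO dbo.Log2017 ( <columns> )
        VALUES ( <@params for the proc> );
    END;
    GO

    -- The proc the application already calls just branches and EXECs,
    -- so the application itself doesn't have to change.
    CREATE PROCEDURE dbo.LogInsert
        @LogDate datetime                  -- plus the rest of the <@params for the proc>
    AS
    BEGIN
        IF YEAR(@LogDate) = 2017
            EXEC dbo.LogInsert_2017 @LogDate /* , <other @params> */;
        ELSE IF YEAR(@LogDate) = 2016
            EXEC dbo.LogInsert_2016 @LogDate /* , <other @params> */;
        -- ...and so on for 2015 and the catch-all dbo.Log.
    END;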

    Experts welcome to tell me I'm full of it LOL