logging without blocking?

  • Hi,

    I have some logging procedures that work fine, like this:

    BEGIN TRANSACTION

    sp_start_log... (writes to table LOG)

    LONG RUNNING SQL

    sp_close_log... (adds results of long running sql to the log)

    END TRANSACTION

    My issue is that I have multiple procedures containing code like this, and they may run in parallel. Because of the transaction, the logging table stays locked and the procedures are forced to run sequentially.

    I cannot move the logging calls outside the transaction, because in reality the code is more complex (multiple DML statements and log calls within the transaction).
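
    In concrete T-SQL, the pattern looks roughly like the sketch below (the procedure signatures, parameters and table names are simplified placeholders, not my real code):

    DECLARE @LogId int, @Rows int;

    BEGIN TRANSACTION;

        -- hypothetical logging procedure: inserts a row into the LOG table
        EXEC dbo.sp_start_log @Source = N'LoadCustomers', @LogId = @LogId OUTPUT;

        -- long running DML (placeholder statement)
        UPDATE dbo.Customers
        SET    LastLoadDate = SYSDATETIME();

        SET @Rows = @@ROWCOUNT;

        -- hypothetical logging procedure: updates the same LOG row with the results
        EXEC dbo.sp_close_log @LogId = @LogId, @RowsAffected = @Rows;

    COMMIT TRANSACTION;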

    Regards,

    Tobias

  • Possible solutions:

    1) Don't log from the stored procedures. Use a different connection from the application.

    2) Log to a table variable and dump the logs to the permanent log table at the end of the transaction (see the sketch after this list).

    3) Use a CLR procedure to log with a different connection (no context connection, not enlisted in the same DTC transaction)

    4) Use a loopback linked server with enlist=false in the connection string. Same as 3) but completely in the T-SQL realm. Yes, it's a hack.
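
    A minimal sketch of option 2 (the table variable is not affected by a ROLLBACK, and the final insert into the permanent table happens outside the long transaction; column names are made up):

    DECLARE @Log TABLE
    (
        LogTime datetime2(3)   NOT NULL DEFAULT SYSDATETIME(),
        Message nvarchar(4000) NOT NULL
    );

    BEGIN TRANSACTION;

        INSERT INTO @Log (Message) VALUES (N'start of long running step');

        -- long running DML goes here

        INSERT INTO @Log (Message) VALUES (N'end of long running step');

    COMMIT TRANSACTION;

    -- flush the buffered rows to the permanent log table after the transaction
    INSERT INTO dbo.[LOG] (LogTime, Message)
    SELECT LogTime, Message FROM @Log;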

    -- Gianluca Sartori

  • Use Extended Events to capture the execution of the queries instead of writing to a log table.
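
    A bare-bones session along these lines would do it (session name, duration threshold and file name are just examples):

    CREATE EVENT SESSION LongRunningSqlTrace ON SERVER
    ADD EVENT sqlserver.sp_statement_completed
    (
        ACTION (sqlserver.sql_text, sqlserver.database_name)
        WHERE duration > 1000000   -- microseconds, i.e. statements longer than one second
    )
    ADD TARGET package0.event_file
    (
        SET filename = N'LongRunningSqlTrace.xel'
    );

    ALTER EVENT SESSION LongRunningSqlTrace ON SERVER STATE = START;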

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • spaghettidba (9/4/2015)


    Possible solutions:

    1) Don't log from the stored procedures. Use a different connection from the application.

    2) Log to a table variable and dump the logs to the permanent log table at the end of the transaction.

    3) Use a CLR procedure to log with a different connection (no context connection, not enlisted in the same DTC transaction)

    4) Use a loopback linked server with enlist=false in the connection string. Same as 3) but completely in the T-SQL realm. Yes, it's a hack.

    Hello Spaghettidba,

    Thanks for this competent answer. I will try 2).

    I have already implemented 4). It helped to create "autonomous" transactions that persist even if the outer transaction is rolled back. But somehow it had no impact on the locking. Do you have a clue what could have gone wrong there?
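
    For reference, the usual setup for this hack looks roughly like this (server name and provider are examples only, not necessarily what I used):

    DECLARE @datasrc sysname = @@SERVERNAME;

    EXEC sp_addlinkedserver
        @server     = N'LOOPBACK',
        @srvproduct = N'',
        @provider   = N'SQLNCLI',
        @datasrc    = @datasrc,
        @provstr    = N'Enlist=False';

    EXEC sp_serveroption N'LOOPBACK', N'rpc out', N'true';
    EXEC sp_serveroption N'LOOPBACK', N'remote proc transaction promotion', N'false';

    -- the logging call then goes through the loopback server so that it does not
    -- enlist in the outer transaction, e.g.:
    -- EXEC LOOPBACK.MyDatabase.dbo.sp_start_log ...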

    BR

    Tobias

  • Grant Fritchey (9/4/2015)


    Use Extended Events to capture the execution of the queries instead of writing to a log table.

    I will consider it for the next solution. The current solution is nearly finalized.

  • Does the log table have an identity column? Did you try removing that column and see if the table is still locked? Typically multiple tasks should be able to write to a table as part of a transaction.

    SQL DBA, SQL Server MVP (07, 08, 09). A socialist is someone who will give you the shirt off *someone else's* back.

  • ScottPletcher (9/4/2015)


    Does the log table have an identity column? Did you try removing that column and see if the table is still locked? Typically multiple tasks should be able to write to a table as part of a transaction.

    Yes, it had. I have replaced it with a sequence now, but the behavior did not change.

  • tobe_ha (9/4/2015)


    ScottPletcher (9/4/2015)


    Does the log table have an identity column? Did you try removing that column and see if the table is still locked? Typically multiple tasks should be able to write to a table as part of a transaction.

    Yes, it had. I have replaced it with a sequence now, but the behavior did not change.

    Hmm, you shouldn't really need a sequence either. Anything that serializes the requests might (not necessarily will) cause a disruption.

    Cluster it by datetime written. That should allow multiple tasks to write to the table at the same time. SQL will resolve the duplicates for you. Later, if you need to, you can add your own "uniquifier".
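
    Something along these lines (column names are only examples):

    CREATE TABLE dbo.[LOG]
    (
        LogTime datetime2(3)   NOT NULL DEFAULT SYSDATETIME(),
        Source  sysname        NOT NULL,
        Message nvarchar(4000) NULL
    );

    -- non-unique clustered index on the insert datetime; SQL Server adds its own
    -- uniquifier behind the scenes when two rows get the same LogTime
    CREATE CLUSTERED INDEX CIX_LOG_LogTime ON dbo.[LOG] (LogTime);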

    SQL DBA, SQL Server MVP (07, 08, 09). A socialist is someone who will give you the shirt off *someone else's* back.

    I tried removing the sequence as well, but the problem is still not solved.

    It is weird: I tried to reproduce the issue with a simplified sample, but was not able to.

    However, solution 2) described above worked.

  • I do such logging from within stored procedures all the time. The key is to NOT include the logging inside an explicit transaction.

    I'll also state that having "long running transactions" anywhere is a recipe for blocking and deadlocking. There are several "Divide'n'Conquer" methods to avoid such things.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

    All of the above advice is good (especially Jeff Moden's), but have you considered using Service Broker? Asynchronous logging is one of its primary use cases. On the "down" side, it isn't a quick fix, and there is a bit of a learning curve if you haven't used it before. On the "up" side, it is perfect for this sort of thing and would give you a generic logging solution that you could use when Extended Events is a "less than optimal" solution. (Grant is right, though - take a look at Extended Events, too.)
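
    To give an idea of the moving parts, a minimal and purely illustrative set of Service Broker objects for log messages could look like this (all names are made up, and the activation procedure that drains the queue into the permanent log table is omitted):

    CREATE MESSAGE TYPE LogMessage VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT LogContract (LogMessage SENT BY INITIATOR);
    CREATE QUEUE dbo.LogQueue;
    CREATE SERVICE LogService ON QUEUE dbo.LogQueue (LogContract);
    GO

    -- sending a log message is a short, cheap operation and does not touch the log table
    DECLARE @handle uniqueidentifier;

    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE LogService
        TO SERVICE N'LogService'
        ON CONTRACT LogContract
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @handle
        MESSAGE TYPE LogMessage (N'<log><msg>long running step finished</msg></log>');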

  • @jeff:

    Sure, there are 90 million rows, which I have to load into two corresponding tables. If one fails, the other shouldn't be loaded either.

    I am thinking about doing this in chunks. Perhaps you have a good idea regarding my new post:

    Or how would you make the transaction small?
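
    One chunked approach I am considering looks roughly like this (table and column names are simplified, and error handling / cleanup of already committed batches is left out):

    DECLARE @BatchSize int = 100000,
            @FromId    bigint = 0,
            @ToId      bigint;

    WHILE 1 = 1
    BEGIN
        -- upper bound of the next batch, driven by the source key
        SELECT @ToId = MAX(Id)
        FROM   (SELECT TOP (@BatchSize) Id
                FROM   dbo.SourceTable
                WHERE  Id > @FromId
                ORDER BY Id) AS batch;

        IF @ToId IS NULL BREAK;   -- nothing left to load

        -- each batch covers both target tables in one short transaction
        BEGIN TRANSACTION;

            INSERT INTO dbo.TargetA (Id, Payload)
            SELECT Id, Payload
            FROM   dbo.SourceTable
            WHERE  Id > @FromId AND Id <= @ToId;

            INSERT INTO dbo.TargetB (Id, OtherPayload)
            SELECT Id, OtherPayload
            FROM   dbo.SourceTable
            WHERE  Id > @FromId AND Id <= @ToId;

        COMMIT TRANSACTION;

        SET @FromId = @ToId;
    END;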

