Stop query if it is taking too long

  • Hello

    How can I stop a long-running query that is run from an application?

    I want to kill queries that run for longer than 40 seconds.

  • mah_j (10/22/2016)


    Hello

    How can I stop a long-running query that is run from an application?

    I want to kill queries that run for longer than 40 seconds.

    Do you want to automatically terminate any query that runs for more than 40 seconds? What about maintenance operations or scheduled jobs that might run longer? How are you going to identify which SPIDs to kill and which should be left alone?

    😎

    Wouldn't it be a better approach to find out why those queries are taking so long and then optimize them?
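A starting point for the "which SPIDs to kill" question above is the running-requests DMV. This is a minimal diagnostic sketch, not the poster's method: the 40-second threshold and the exclusion of system sessions are assumptions, and the KILL step is deliberately left out.

```sql
-- Sketch: list user queries that have been running for more than 40 seconds.
-- The threshold and the session_id > 50 filter are assumptions.
SELECT  r.session_id,
        r.start_time,
        r.status,
        r.wait_type,
        DATEDIFF(SECOND, r.start_time, GETDATE()) AS elapsed_seconds,
        t.text AS query_text
FROM    sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE   r.session_id > 50                  -- skip system sessions
  AND   r.session_id <> @@SPID             -- skip this monitoring query itself
  AND   DATEDIFF(SECOND, r.start_time, GETDATE()) > 40;
```

Deciding which of these are safe to terminate is exactly the judgment call the question raises; the query only surfaces candidates.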

  • Set the app's timeout to 40 seconds (how depends on how the app connects to SQL Server). What you are asking for is exactly what an application timeout is.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (10/22/2016)


    Set the app's timeout to 40 seconds (how depends on how the app connects to SQL Server). What you are asking for is exactly what an application timeout is.

    This approach could possibly leave the SPID dangling in an async_network_io wait.

    😎

  • mah_j (10/22/2016)


    Hello

    How can I stop a long-running query that is run from an application?

    I want to kill queries that run for longer than 40 seconds.

    I don't know if you're the "DBA" or just a person who's been put into a position of high trust, but killing the SPIDs of long-running queries from an application is a total violation of the trust you've been given. Your job is to make data available at all times, and killing long-running queries is severely contrary to the requirements of that job. A query that runs too long means that query isn't making the data available in a reasonable time.

    What you need to do is...

    1. Identify what the queries are.

    2. Identify what the parameters for the queries are.

    3. Using the information from #1 and #2 above, figure out what the problem with the queries is.

    4. Either fix the queries or make recommendations as to how the queries can be fixed.

    Is it going to be painful to do so? Most likely, but that's part of your job.

    If you're trying to make a point to management, then don't kill the queries. Let them run forever and document them and the problems they're causing to the system. Simply killing them is just going to mask the problem.

    Stop putting Band-Aids on stab wounds and fix the real problem.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
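Steps 1 and 2 in the post above (identify the queries and their cost) can be started from the plan cache. A rough sketch, not part of the original advice; ranking by total logical reads and the TOP (10) cutoff are arbitrary choices.

```sql
-- Sketch: top cached statements by total logical reads (step 1 above).
-- The offset arithmetic extracts the individual statement from the batch text.
SELECT TOP (10)
        qs.total_logical_reads,
        qs.execution_count,
        SUBSTRING(t.text, qs.statement_start_offset / 2 + 1,
                  (CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(t.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS t
ORDER BY qs.total_logical_reads DESC;
```

For parameter values (step 2), capturing the actual executions with Extended Events or examining the cached plan's parameter list would be the usual follow-up.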

  • Eirikur Eiriksson (10/22/2016)


    GilaMonster (10/22/2016)


    Set the app's timeout to 40 seconds (how depends on how the app connects to SQL Server). What you are asking for is exactly what an application timeout is.

    This approach could possibly leave the SPID dangling in an async_network_io wait.

    Ref?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I am so sorry, I asked my question in the wrong way. Yes, I know that we have a problem in one of our queries, and I am involved. (The logical I/O is too high: 3,939,737 reads for 43,300 records, and after updating statistics the estimated number of rows is still 1. The total number of extents for that table is 1,284,925.)

    So because of this problem we get timeouts in an insert statement. I want to know if there is a way to kill these queries until I find a solution for the query performance.

    At the moment the developers set the app's timeout to 40 seconds, but the query is still running after the timeout error is shown to the users.
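Since the estimate is still 1 row after updating statistics, one hedged next step is a statistics update with a full scan rather than the default sample. This is only a sketch; the table name is taken from later posts in this thread and the statistic name is an assumption.

```sql
-- Sketch: rebuild all statistics on the table with a full scan instead of
-- the default sample. dbo.Invoice is assumed from later posts in the thread.
UPDATE STATISTICS dbo.Invoice WITH FULLSCAN;

-- Inspect the resulting histogram; the statistic name PK_Invoice is an
-- assumption based on the clustered PK shown later in the thread.
DBCC SHOW_STATISTICS ('dbo.Invoice', 'PK_Invoice');
```

If the estimate stays at 1 row even after a full scan, the cause is more likely the shape of the query (functions, parameterization) than stale statistics.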

  • GilaMonster (10/22/2016)


    Eirikur Eiriksson (10/22/2016)


    GilaMonster (10/22/2016)


    Set the app's timeout to 40 seconds (how depends on how the app connects to SQL Server). What you are asking for is exactly what an application timeout is.

    This approach could possibly leave the SPID dangling in an async_network_io wait.

    Ref?

    I frequently see high ASYNC_NETWORK_IO waits where the client application has timed out and the query / spid is left running on the server. The dangling query / spid can have serious impact on the performance of the server, especially when the client application retries the previously failed query.

    😎

    The most common causes are inefficient queries with a very high number of logical reads coming from scalar or multi-statement table-valued functions, or very high cardinality in the initial query being suppressed by the DISTINCT operator; a combination of those makes the problem worse.

    I have seen speculation that the network or the application's slow consumption of the data is the cause, but I have yet to experience that; so far the cause has always been those inefficient queries.

    The only applicable solution is to optimize the queries and the underlying database; killing the queries only suppresses the problem.
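A quick way to look for the dangling sessions described above is to filter the request DMVs on the wait type. A sketch using the standard session/request DMVs, not part of the original post:

```sql
-- Sketch: sessions currently waiting on ASYNC_NETWORK_IO, i.e. waiting
-- for the client application to consume result rows.
SELECT  r.session_id,
        r.wait_type,
        r.wait_time,            -- milliseconds spent in the current wait
        s.host_name,
        s.program_name
FROM    sys.dm_exec_requests AS r
JOIN    sys.dm_exec_sessions AS s
        ON s.session_id = r.session_id
WHERE   r.wait_type = N'ASYNC_NETWORK_IO';
```

A long wait_time here with a client that has already reported a timeout is the "dangling SPID" pattern described above.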

  • mah_j (10/22/2016)


    I am so sorry, I asked my question in the wrong way. Yes, I know that we have a problem in one of our queries, and I am involved. (The logical I/O is too high: 3,939,737 reads for 43,300 records, and after updating statistics the estimated number of rows is still 1. The total number of extents for that table is 1,284,925.)

    So because of this problem we get timeouts in an insert statement. I want to know if there is a way to kill these queries until I find a solution for the query performance.

    At the moment the developers set the app's timeout to 40 seconds, but the query is still running after the timeout error is shown to the users.

    Can you post the query and the actual execution plan?

    😎

    Question: are there any user-defined functions called in the query? Does the query use the DISTINCT operator? 😎

  • Currently the index is just on InvoiceUID, but because of the lookup I tested it with other columns too. For both versions the logical I/O is high.

  • mah_j (10/23/2016)


    Currently the index is just on InvoiceUID, but because of the lookup I tested it with other columns too. For both versions the logical I/O is high.

    The second execution plan is much better, the index eliminates the expensive key lookup.

    😎

    The main problem now is the dbo.Invoice table: close to 1M rows are passed to the join with fewer than 44K rows coming out of it. A filtered covering index for the query would mitigate this, something like this:

    CREATE NONCLUSTERED INDEX [INDEX_NAME] ON dbo.Invoice (InvoiceUID)
    INCLUDE
    (
         [Amount]
        ,[InvoiceNumber]
        ,[InvoiceDate]
        ,[VerificationState]
        ,[TermID]
    )
    WHERE ([Amount] <= 9999999999.00)
      AND ([Amount] >= 0.00)
      AND ([VerificationState] <> 0)
      AND ([TermID] = 735)
      AND ([MerchID] = 302);

    If any of the predicates are passed as parameters, then obviously those need to be moved from the filter clause to the index key columns.

    Another thing: the TOP (100000) doesn't do any good in this case. I suggest you remove it, which will eliminate the sort if there is an index on the dbo.Transaction table that matches the output sort order.

  • But InvoiceUID is the PK and has a clustered index on the dbo.Invoice table.

    ALTER TABLE [dbo].[Invoice]
    ADD CONSTRAINT [PK_Invoice] PRIMARY KEY CLUSTERED ([InvoiceUID] ASC);

  • mah_j (10/23/2016)


    But InvoiceUID is the PK and has a clustered index on the dbo.Invoice table.

    ALTER TABLE [dbo].[Invoice]
    ADD CONSTRAINT [PK_Invoice] PRIMARY KEY CLUSTERED ([InvoiceUID] ASC);

    The problem is still that the query is seeking on that index for almost 1M rows; adding the new index would bring this down to a reasonable number.

    😎

  • I created the filtered covering index on InvoiceUID as you said (and ran DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS), but it was not used. The query still uses the IX_invoice index and the plan does not change; also, the elapsed time increased to over 1 minute.

    Another thing is that the parameters are dynamic and they change each time. (The queries are generated by LINQ and run with sp_executesql.)

  • mah_j (10/23/2016)


    I created the filtered covering index on InvoiceUID as you said (and ran DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS), but it was not used. The query still uses the IX_invoice index and the plan does not change; also, the elapsed time increased to over 1 minute.

    Another thing is that the parameters are dynamic and they change each time. (The queries are generated by LINQ and run with sp_executesql.)

    Which of the parameters are fixed? Those are the ones that should be in the filter. The remaining parameters must be moved into the index key column combination and referenced in the correct order.

    😎

    Can you change this to a stored procedure rather than LINQ generated code?
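One plausible reason the filtered index was ignored, related to the sp_executesql point above: a parameterized plan must be valid for every parameter value, so the optimizer generally cannot commit to a filtered index unless the statement is recompiled per execution. A hedged sketch of the usual OPTION (RECOMPILE) workaround; the query shape is invented from the column names and values mentioned in this thread and is not the poster's actual query.

```sql
-- Sketch: with OPTION (RECOMPILE) the optimizer sees the actual parameter
-- values at execution time and can match them against the filtered index
-- definition. Query shape and values are assumptions based on the thread.
EXEC sys.sp_executesql
    N'SELECT InvoiceUID, Amount, InvoiceNumber, InvoiceDate
      FROM   dbo.Invoice
      WHERE  TermID  = @TermID
        AND  MerchID = @MerchID
        AND  VerificationState <> 0
        AND  Amount BETWEEN 0.00 AND 9999999999.00
      OPTION (RECOMPILE);',
    N'@TermID int, @MerchID int',
    @TermID = 735, @MerchID = 302;
```

The recompile cost per execution is the trade-off; for LINQ-generated SQL, moving the query into a stored procedure (as suggested above) gives more control over such hints.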

