Capture Record count and Time elapsed through SQL Server Stored Procedure

  • I have a stored procedure which returns millions of rows under a specific condition. I need to plot a line graph showing the number of records returned after every 25 milliseconds. For example, the overall returned record count is 2 million and this takes 2 seconds. Is there any provision to populate an additional table with the time elapsed and the records returned? This would help me in designing the line graph.

    Or suggest any other way to accomplish this task.

    Any help is appreciated.

    Regards,
    Abhi

  • I'm not sure about others, but I think this may be a tough nut to crack. 

    First of all, I wouldn't put any kind of process inside your existing query that attempts to log this information. You'd have to break down the query in some fashion and there's no way I know of to tell a query to break itself apart based on time. You could only do this based on rows, and then there's no way to guarantee that 100,000 rows are processed in 25ms or 22ms or 37ms (blocking, resource contention, etc.). Any additional load on the query itself will cause it to run slower, and if it's already processing millions of rows, slower would be bad.

    That means we have to look at an external process. Now, the immediate question: what do we mean by "returned"? Read from disk into memory, read from memory into the network queue, or received at the client? Received at the client is the easiest to measure: just have the code that is processing the data interrupt itself and do the count every 25ms (and yes, that will slow down that part of the process).
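    Grant's "received at the client" option could be sketched as follows. This is a self-contained Python simulation, not a definitive implementation: the `slow_rows` generator is a stand-in for a real DB cursor (e.g. iterating a `pyodbc` cursor), and the 25 ms sampling interval matches the one discussed above.

```python
# Sketch: count rows as they arrive at the client and record a
# (elapsed_ms, rows_so_far) sample roughly every 25 ms.
import time

def sample_row_arrivals(rows, interval_ms=25):
    """Consume rows, recording (elapsed_ms, cumulative_count) samples."""
    samples = []
    start = time.perf_counter()
    next_sample = interval_ms / 1000.0
    count = 0
    for _ in rows:
        count += 1
        elapsed = time.perf_counter() - start
        if elapsed >= next_sample:
            samples.append((round(elapsed * 1000), count))
            next_sample += interval_ms / 1000.0
    # final data point for the end of the stream
    samples.append((round((time.perf_counter() - start) * 1000), count))
    return samples

def slow_rows(n, delay_s=0.0005):
    """Stand-in for a DB cursor: yields n rows with a small delay each."""
    for i in range(n):
        time.sleep(delay_s)
        yield (i,)

points = sample_row_arrivals(slow_rows(400))
print(points[-1][1])   # → 400 (total rows seen)
```

    The `samples` list is exactly the (time, count) series the line graph needs; note the sampling work itself adds overhead to the fetch loop, as Grant warns.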

    If we wanted to measure this from Extended Events on the server, I'm not sure what event we could use. The standard events for measuring query performance rely on the completion of a batch or statement. That won't give you any information on the fly. To my knowledge none of the memory oriented events count rows.

    I don't think the precise thing you're looking for is possible.

    So, what's the intent of this? What problem are you attempting to solve?

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant Fritchey - Tuesday, August 1, 2017 5:11 AM

    I don't think the precise thing you're looking for is possible.

    Seconded.

    Closest thing to this is SQL 2016's live query statistics, but without an upgrade that's not an option.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • Grant Fritchey - Tuesday, August 1, 2017 5:11 AM

    I'm not sure about others, but I think this may be a tough nut to crack. 

    First of all, I wouldn't put any kind of process inside your existing query that attempts to log this information. You'd have to break down the query in some fashion and there's no way I know of to tell a query to break itself apart based on time. You could only do this based on rows, and then there's no way to guarantee that 100,000 rows are processed in 25ms or 22ms or 37ms (blocking, resource contention, etc.). Any additional load on the query itself will cause it to run slower, and if it's already processing millions of rows, slower would be bad.

    That means we have to look at an external process. Now, the immediate question: what do we mean by "returned"? Read from disk into memory, read from memory into the network queue, or received at the client? Received at the client is the easiest to measure: just have the code that is processing the data interrupt itself and do the count every 25ms (and yes, that will slow down that part of the process).

    If we wanted to measure this from Extended Events on the server, I'm not sure what event we could use. The standard events for measuring query performance rely on the completion of a batch or statement. That won't give you any information on the fly. To my knowledge none of the memory oriented events count rows.

    I don't think the precise thing you're looking for is possible.

    So, what's the intent of this? What problem are you attempting to solve?

    Thanks for your reply, Grant.
    The intent is to get a breakup of the query execution time with the corresponding record count. As mentioned in my request, I need to draw a line chart plotting the overall record count at a regular interval of .10 ms.
    Is there any other way to achieve this? Please share if possible.

  • GilaMonster - Tuesday, August 1, 2017 5:34 AM

    Grant Fritchey - Tuesday, August 1, 2017 5:11 AM

    I don't think the precise thing you're looking for is possible.

    Seconded.

    Closest thing to this is SQL 2016's live query statistics, but without an upgrade that's not an option.

    Thanks, Gail, for your reply. Would it be possible if I upgrade from SQL Server 2012 to SQL Server 2016?
    Is there a specific link on this topic? Please share.

  • Abhishek-385339 - Tuesday, August 1, 2017 10:15 PM

    GilaMonster - Tuesday, August 1, 2017 5:34 AM

    Grant Fritchey - Tuesday, August 1, 2017 5:11 AM

    I don't think the precise thing you're looking for is possible.

    Seconded.

    Closest thing to this is SQL 2016's live query statistics, but without an upgrade that's not an option.

    Thanks, Gail, for your reply. Would it be possible if I upgrade from SQL Server 2012 to SQL Server 2016?
    Is there a specific link on this topic? Please share.

    Probably yes, using Live Query Statistics or its underlying implementation (which is covered in the documentation and can be found via Google).

    On 2012, I don't see any way to achieve this; it's just not exposed by the engine. Even on 2016, you'll need another procedure, running from another session, to get this data. It can't be done asynchronously from inside the procedure that you're trying to monitor.
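    The out-of-band monitoring pattern Gail describes can be sketched like this. Against SQL Server 2016 the poller would be a second connection querying `sys.dm_exec_query_profiles` (which exposes per-operator row counts for a running query); here a shared in-process counter stands in for that DMV so the sketch is runnable on its own.

```python
# Sketch: work runs in one "session" while a second thread polls its
# progress on a fixed interval, building the (elapsed_ms, rows) series.
import threading
import time

class Progress:
    """Shared progress record; stands in for sys.dm_exec_query_profiles."""
    def __init__(self):
        self.rows = 0
        self.done = False
        self.lock = threading.Lock()

def worker(progress, total=20000):
    """Simulated long-running query emitting `total` rows."""
    for _ in range(total):
        with progress.lock:
            progress.rows += 1
    progress.done = True

def poller(progress, samples, interval_ms=25):
    """Sample the row count every interval_ms until the worker finishes."""
    start = time.perf_counter()
    while not progress.done:
        time.sleep(interval_ms / 1000.0)
        with progress.lock:
            samples.append((round((time.perf_counter() - start) * 1000),
                            progress.rows))
    samples.append((round((time.perf_counter() - start) * 1000),
                    progress.rows))

progress, samples = Progress(), []
t = threading.Thread(target=worker, args=(progress,))
t.start()
poller(progress, samples)
t.join()
print(samples[-1][1])   # → 20000 (all rows observed by the final sample)
```

    The key design point is the one Gail makes: the monitored work never interrupts itself; all sampling happens from the second session.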

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • Abhishek-385339 - Tuesday, August 1, 2017 10:15 PM

    GilaMonster - Tuesday, August 1, 2017 5:34 AM

    Grant Fritchey - Tuesday, August 1, 2017 5:11 AM

    I don't think the precise thing you're looking for is possible.

    Seconded.

    Closest thing to this is SQL 2016's live query statistics, but without an upgrade that's not an option.

    Thanks, Gail, for your reply. Would it be possible if I upgrade from SQL Server 2012 to SQL Server 2016?
    Is there a specific link on this topic? Please share.

    2016 Live Query Statistics only allows you to observe from the outside.

    What you need to do is pick a reasonable break point, say 100,000 rows, and break the query apart that way. There is no way to break it apart by time: T-SQL is set-based, and sets can't be carved up by elapsed time, only by row counts. Look up "Windowing Functions" and "Tally Table" (or "Table of Numbers") for very efficient ways to split the work by number of rows.
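    The row-count breakup Grant suggests can be sketched as follows. In T-SQL the slicing itself would be done with ROW_NUMBER() over the result (or a tally table driving keyset ranges); this self-contained Python sketch shows only the bookkeeping side, where the batch size and row source are illustrative stand-ins.

```python
# Sketch: process the result in fixed row-count batches and record the
# cumulative (rows, elapsed_ms) point after each batch - the data series
# for the requested line graph, bucketed by rows rather than by time.
import time

def batch_timings(rows, batch_size=100_000):
    """Return (cumulative_rows, elapsed_ms) after each full or final batch."""
    points = []
    start = time.perf_counter()
    count = 0
    for _ in rows:
        count += 1
        if count % batch_size == 0:
            points.append((count, round((time.perf_counter() - start) * 1000)))
    if count % batch_size != 0:   # partial final batch
        points.append((count, round((time.perf_counter() - start) * 1000)))
    return points

pts = batch_timings(range(250_000), batch_size=100_000)
print([p[0] for p in pts])   # → [100000, 200000, 250000]
```

    The resulting chart has evenly spaced row counts and variable time gaps, which is the inverse of the original request but, as discussed above, is what the engine can actually deliver.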

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning
