Last Time Stored Procedure Ran?

  • Hi, I've found several examples using dm_exec_procedure_stats, but the results are only useful as long as the stored procedure remains cached.

    Is there a better way in SQL Server 2019, other than having a trace running to capture SQL activity?

  • Try

    SELECT *
    FROM sys.dm_exec_query_stats QS
    CROSS APPLY sys.dm_exec_sql_text(QS.plan_handle) ST

    See if that gives you what you need.

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • Add code to the start of the proc to log to a table.  While you're at it, you might as well log all the parameter values too, if you have space for them.
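    A minimal sketch of that approach, for illustration only (the table, constraint, and @CustomerID parameter names are invented for the example, not taken from any actual system):

    CREATE TABLE dbo.ProcExecLog (
        exec_time  datetime2(3) NOT NULL
            CONSTRAINT DF_ProcExecLog_exec_time DEFAULT SYSDATETIME(),
        proc_name  sysname NOT NULL,
        parameters nvarchar(4000) NULL
    );

    -- Added immediately after BEGIN in each proc; @@PROCID resolves to the
    -- executing proc, so the logging line itself is identical everywhere.
    INSERT INTO dbo.ProcExecLog (proc_name, parameters)
    VALUES (OBJECT_NAME(@@PROCID), CONCAT(N'@CustomerID=', @CustomerID));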

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • The goal is to look at all SPs across databases with a specific reference in the code, so I can't make changes to all the affected SPs.

    Logging might be a good idea going forward.
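    For the search itself, something along these lines against sys.sql_modules in each database should list the affected procs (N'%SomeReference%' is a stand-in for the actual string being searched for):

    SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
           OBJECT_NAME(m.object_id) AS proc_name
    FROM sys.sql_modules AS m
    JOIN sys.procedures AS p
        ON p.object_id = m.object_id
    WHERE m.definition LIKE N'%SomeReference%';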

  • You could automate the change easily enough (including grabbing the parameters), but I understand the reluctance to change all the procs.  You would likely want to capture the previous modify_dates for all the procs so you would know when the last change *other than* adding the proc start-up logging was made.
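    As a rough sketch of that automation, a query like this could emit the logging line for every proc, alongside the modify_date to preserve (dbo.ProcExecLog is the hypothetical table from the earlier sketch):

    SELECT QUOTENAME(OBJECT_SCHEMA_NAME(p.object_id)) + N'.' + QUOTENAME(p.name) AS proc_name,
           p.modify_date,   -- capture before altering the proc
           N'INSERT INTO dbo.ProcExecLog (proc_name) VALUES (N''' + p.name + N''');' AS logging_line
    FROM sys.procedures AS p
    ORDER BY p.name;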

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • ScottPletcher wrote:

    Add code to the start of the proc to log to a table.  While you're at it, you might as well log all the parameter values too, if you have space for them.

    Depending on the number of procs, and the frequency with which they are called, this may not be a good idea.

    Assuming you create a single table to hold the name of the proc and the execution date and time, that table could become a blocking and deadlocking point.

    Creating a separate logging table for each proc may be somewhat cumbersome as the number of procs increases.

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • Michael L John wrote:

    Depending on the number of procs, and the frequency with which they are called, this may not be a good idea.

    Assuming you create a single table to hold the name of the proc and the execution date and time, that table could become a blocking and deadlocking point.

    Creating a separate logging table for each proc may be somewhat cumbersome as the number of procs increases.

    The rows should be very short, so I don't really see that becoming an issue.  I think you'd have to have an extraordinarily high rate of proc calls to get close to causing a problem there.

    Keep in mind that the code generating the code inside the proc would be dynamic, but the generated code itself would be static; that is, there'd be no metadata-reading overhead involved in logging the proc call, just a single INSERT statement.

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • ScottPletcher wrote:

    The rows should be very short, so I don't really see that becoming an issue.  I think you'd have to have an extraordinarily high rate of proc calls to get close to causing a problem there.

    Keep in mind that the code generating the code inside the proc would be dynamic, but the generated code itself would be static; that is, there'd be no metadata-reading overhead involved in logging the proc call, just a single INSERT statement.

    Well, as an exercise to capture parameter values, as well as usage of the procs, we implemented a single table with the proc name, a timestamp, and a column holding all the parameters and their values at the time of the call.  During times of high usage, we did experience blocking and deadlocking.  Of course, this was an extremely busy system.

    We reduced the number of procs for which we were storing the variables and the values.  There were fewer issues, but still a few.

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • Michael L John wrote:

    Well, as an exercise to capture parameter values, as well as usage of the procs, we implemented a single table with the proc name, a timestamp, and a column holding all the parameters and their values at the time of the call.  During times of high usage, we did experience blocking and deadlocking.  Of course, this was an extremely busy system.

    We reduced the number of procs for which we were storing the variables and the values.  There were fewer issues, but still a few.

    A single column with all of the parameters?  That could be a bad idea, with potentially a lot of overhead, depending on how it's done.

    I'd also have to see the table DDL, including the clustered index.

    The code does indeed need to be written efficiently to reduce the chances of such blocking.  As I stated above, the code in the proc should be a single INSERT statement, period, nothing more.

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • ScottPletcher wrote:

    A single column with all of the parameters?  That could be a bad idea, with potentially a lot of overhead, depending on how it's done.

    The parms and values were concatenated into a string and inserted.

    ScottPletcher wrote:

    I'd also have to see the table DDL, including the clustered index.

    There was a created-date column with a default constraint, a column for the proc name, and a column for the parameters.
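    Reconstructed from that description, the table was presumably something like this (names and types are guesses; the thread doesn't say whether it had a clustered index):

    CREATE TABLE dbo.ProcCallLog (
        created_date datetime NOT NULL
            CONSTRAINT DF_ProcCallLog_created_date DEFAULT GETDATE(),
        proc_name    sysname NOT NULL,
        parameters   nvarchar(max) NULL
    );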

    ScottPletcher wrote:

    The code does indeed need to be written efficiently to reduce the chances of such blocking.  As I stated above, the code in the proc should be a single INSERT statement, period, nothing more.

    Cut me a break.  What else would it do?  INSERT INTO table.  That's it.

    Again, this was a VERY busy system, with thousands of users during busy periods and about 3,000 procs.

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • Michael L John wrote:

    The parms and values were concatenated into a string and inserted.

    There was a created-date column with a default constraint, a column for the proc name, and a column for the parameters.

    Cut me a break.  What else would it do?  INSERT INTO table.  That's it.

    Again, this was a VERY busy system, with thousands of users during busy periods and about 3,000 procs.

    If you refuse to provide the specific DDL and SQL, then I can't comment any further on this.

    Those details are important to understanding its performance.

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • Do you have Query Store enabled for the database(s) you are interested in? sys.query_store_runtime_stats has a column called last_execution_time.

    The default for STALE_QUERY_THRESHOLD_DAYS is 30, so if the proc has run in the last 30 days it will be in there.
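    A query roughly like this (a sketch; run it in the target database) would pull the most recent execution per proc out of Query Store:

    SELECT OBJECT_SCHEMA_NAME(q.object_id) AS schema_name,
           OBJECT_NAME(q.object_id) AS proc_name,
           MAX(rs.last_execution_time) AS last_execution_time
    FROM sys.query_store_query AS q
    JOIN sys.query_store_plan AS p
        ON p.query_id = q.query_id
    JOIN sys.query_store_runtime_stats AS rs
        ON rs.plan_id = p.plan_id
    WHERE q.object_id > 0   -- only queries that belong to a module
    GROUP BY q.object_id;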


  • I'd go with either Extended Events (much less overhead than Trace, and it can be filtered at capture, which Trace can't do) or Query Store. If you want to capture a lot of detail on how and when procedures get executed, I think that's far easier than building out a logging system and changing procedure code.
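    For example, a session along these lines (the session, database, and file names are placeholders) captures module completions (procs, triggers, functions) in one database to a file target:

    CREATE EVENT SESSION TrackProcExecutions ON SERVER
    ADD EVENT sqlserver.module_end (
        WHERE sqlserver.database_name = N'YourDatabase'
    )
    ADD TARGET package0.event_file (SET filename = N'TrackProcExecutions');

    ALTER EVENT SESSION TrackProcExecutions ON SERVER STATE = START;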

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant Fritchey wrote:

    I'd go with either Extended Events (much less overhead than Trace, and it can be filtered at capture, which Trace can't do) or Query Store. If you want to capture a lot of detail on how and when procedures get executed, I think that's far easier than building out a logging system and changing procedure code.

    This is my current methodology.

    As an example, there is a utility the help desk uses to "fix" data errors.  Of course, it's dangerous and has modified data incorrectly far too often.  I created an Extended Events session that captures the executions of this utility.

    I've also, at various times, created sessions to capture ad-hoc queries from specific people, specific sets of proc executions, and so forth.

    The overhead is next to nothing, and it's very flexible.

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • If you don't need the parameters, or they require too much overhead to log, then don't log them.

    Just capture the start time, as a datetime2, and the proc name (or, better yet, the object_id, only 4 bytes, if you can reasonably quickly translate it back into the proc name offline).

    Yes, Extended Events is great for this if you're comfortable working with it and have the skills needed to programmatically process the results data.  A lot of people are not comfortable doing both of those.
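    For what it's worth, that processing can stay in T-SQL.  A shredding query roughly like this (the file name matches the hypothetical session sketched above) pulls the time and proc name out of the file target:

    SELECT x.event_xml.value('(event/@timestamp)[1]', 'datetime2') AS event_time_utc,
           x.event_xml.value('(event/data[@name="object_name"]/value)[1]', 'nvarchar(128)') AS proc_name
    FROM sys.fn_xe_file_target_read_file(N'TrackProcExecutions*.xel', NULL, NULL, NULL) AS f
    CROSS APPLY (SELECT CAST(f.event_data AS xml) AS event_xml) AS x;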

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.
