Reset memory cache for query tuning?

  • I've noticed that in SSMS, running a complex query, or a query against large tables, often returns data in much less time the second time you run it. I'm guessing this is because the second run uses data pages that were loaded into memory during the first run rather than reading them from disk, which is slower.

    For the purposes of optimizing queries & indexes, is there a way to force the second run of a query to behave like the first run, so that I know changes in speed are due to adjustments I've made and not because data pages were pre-loaded into memory by the earlier run? Or should I just use the "Cost" figure in the execution plans and pay less attention to the actual times?

  • Not recommended for use in production, but for dev/test you can run:

    DBCC DROPCLEANBUFFERS;

    to clear the cache to simulate having to read everything from disk.
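
    One caveat: DROPCLEANBUFFERS only removes clean pages from the buffer pool, so it's worth running a CHECKPOINT first so that dirty pages get written out and dropped too. A minimal sketch for a dev/test session:

    -- Write dirty pages to disk so they become clean and can be dropped
    CHECKPOINT;
    -- Empty the buffer pool so the next run has to read from disk
    DBCC DROPCLEANBUFFERS;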

  • Cost can help you determine if the query has been tuned.

    In addition, you could use statistics time and statistics IO.

    Examining the execution plan can really help to determine if the query is executing better than previously.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • You might also want to account for the execution plan coming from cache too. You can clear the plan cache with DBCC FREEPROCCACHE.

    It is good to compare apples to apples, but I tend to test both with a cold cache and a warm cache (since that is how it will be in production, hopefully). I like to compare the number of reads as well as the time it takes, so I use SET STATISTICS IO ON and SET STATISTICS TIME ON.
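
    For example, a typical harness looks something like this (the table name is just a placeholder):

    -- Report reads and CPU/elapsed time per statement on the Messages tab
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- Hypothetical query under test
    SELECT COUNT(*) FROM dbo.SomeLargeTable;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;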



    Microsoft Certified Master - SQL Server 2008
    Follow me on twitter: @keith_tate

    Forum Etiquette: How to post data/code on a forum to get the best help

  • Keith Tate (3/4/2014)


    You might also want to account for the execution plan coming from cache too. You can clear the plan cache with DBCC FREEPROCCACHE.

    It is good to compare apples to apples, but I tend to test both with a cold cache and a warm cache (since that is how it will be in production, hopefully). I like to compare the number of reads as well as the time it takes, so I use SET STATISTICS IO ON and SET STATISTICS TIME ON.

    This is another one that I would be careful about running in prod. I will use this occasionally in a dev environment.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Oops, forgot to add the caveat! Thanks, Jason. Also, you can use DBCC FLUSHPROCINDB(<db_id>); to flush only the plans from a single database.
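
    For example (note that FLUSHPROCINDB is undocumented, and the database name below is just a placeholder):

    -- Flush cached plans for a single database only
    DECLARE @db_id INT = DB_ID('MyTestDb');
    DBCC FLUSHPROCINDB(@db_id);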



    Microsoft Certified Master - SQL Server 2008
    Follow me on twitter: @keith_tate

    Forum Etiquette: How to post data/code on a forum to get the best help

  • My preference is to run queries/procs I'm testing multiple times and ignore the durations/IOs of the first result, rather than clearing plan and data cache after every single execution.
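
    In SSMS you can do that with the batch-repeat form of GO; a quick sketch (the procedure name is just a placeholder):

    -- Run the batch five times; ignore the first set of timings (cache warm-up)
    EXEC dbo.MyProc;
    GO 5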

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (3/5/2014)


    My preference is to run queries/procs I'm testing multiple times and ignore the durations/IOs of the first result, rather than clearing plan and data cache after every single execution.

    +1 🙂

  • SQLRNNR (3/4/2014)


    Cost can help you determine if the query has been tuned.

    In addition, you could use statistics time and statistics IO.

    Examining the execution plan can really help to determine if the query is executing better than previously.

    Oh, be careful now. "Cost" is just an estimate and is frequently way off compared to what actually happens. Using "Statistics TIME, IO" will also be way off if there are any non-iTVF functions in the mix.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (3/12/2014)


    Using "Statistics TIME, IO" will also be way off if there are any non iTVF functions in the mix.

    IO will be. Time, if you look at the final one for the entire batch, is correct.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • To help, you can also use:

    DBCC FREEPROCCACHE(plan_handle) - this will remove only the plan for the specific query you're testing and will not flush the entire cache (see the sketch below for finding the handle).

    Or, if it's a procedure being executed, you could add "WITH RECOMPILE" - so an entirely new execution plan is generated each time. I would only use this for testing, and would opt to remove it before making it a production-like procedure.

    Or, simply execute sp_recompile 'schema.ObjectName' - which will accomplish the same thing.
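
    A rough sketch of finding and evicting a single procedure's plan (the procedure name is just a placeholder):

    -- Look up the plan handle for one cached object, then evict just that plan
    DECLARE @plan_handle VARBINARY(64);

    SELECT @plan_handle = cp.plan_handle
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.objectid = OBJECT_ID('dbo.MyProc')  -- hypothetical proc name
      AND st.dbid = DB_ID();

    IF @plan_handle IS NOT NULL
        DBCC FREEPROCCACHE(@plan_handle);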

    ______________________________________________________________________________
    Never argue with an idiot; they'll drag you down to their level and beat you with experience.

  • MyDoggieJessie (3/13/2014)


    Or, if it's a procedure being executed, you could add "WITH RECOMPILE" - so an entirely new execution plan is generated each time. I would only use this for testing, and would opt to remove it before making it a production-like procedure.

    Or, simply execute sp_recompile 'schema.ObjectName' - which will accomplish the same thing.

    These don't quite do the same thing.

    EXECUTE MyProc WITH RECOMPILE generates a new execution plan for that execution; the plan is discarded after the execution completes. If there was already a plan in cache, it's left there.

    exec sp_recompile 'MyProc' removes the plan for MyProc from cache, so the next execution has to generate a new plan, cache it, and then execute.
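
    A quick way to see that difference (the procedure name is just a placeholder):

    -- First execution compiles a plan and caches it
    EXEC dbo.MyProc;

    -- Compiles a fresh plan, uses it once and throws it away;
    -- the plan cached above is left untouched
    EXEC dbo.MyProc WITH RECOMPILE;

    -- Marks the object so the cached plan is invalidated;
    -- the next execution compiles and caches a new plan
    EXEC sp_recompile 'dbo.MyProc';
    EXEC dbo.MyProc;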

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks for clarifying that, Gail - I see the difference!

    ______________________________________________________________________________
    Never argue with an idiot; they'll drag you down to their level and beat you with experience.

  • GilaMonster (3/13/2014)


    Jeff Moden (3/12/2014)


    Using "Statistics TIME, IO" will also be way off if there are any non iTVF functions in the mix.

    IO will be. Time, if you look at the final one for the entire batch, is correct.

    I guess it depends on your definition of "correct". Turning statistics on skews how long non-iTVF functions take to run by a whole lot. You can do some simple tests with the code in the following article. When a measuring tool changes the outcome, that's "incorrect" to me.

    http://www.sqlservercentral.com/articles/T-SQL/91724/
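
    If the observer effect is a concern, one lighter-touch option is to bracket the run with time captures instead of turning STATISTICS TIME on; a rough sketch (the procedure name is just a placeholder):

    -- Measure elapsed time without SET STATISTICS TIME
    DECLARE @start DATETIME2 = SYSDATETIME();

    EXEC dbo.MyProc;  -- hypothetical code under test

    SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS elapsed_ms;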

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (3/13/2014)


    GilaMonster (3/13/2014)


    Jeff Moden (3/12/2014)


    Using "Statistics TIME, IO" will also be way off if there are any non iTVF functions in the mix.

    IO will be. Time, if you look at the final one for the entire batch, is correct.

    I guess it depends on your definition of "correct".

    Correct in the sense that it isn't way too low, like stats IO is (where the reads done by the UDF aren't displayed). But yeah, the observer effect can hit pretty badly with them, probably because of the massive number of batches that get run and must be tracked.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
