Why A Query May be Faster the Second Time it Runs

  • Kendra Little

    SSC Enthusiast

    Points: 117

    Comments posted to this topic are about the item Why A Query May be Faster the Second Time it Runs

  • julian.fletcher

    SSCrazy

    Points: 2825

    For "may be" read "very often is".

    Would you go further and say that if a query does run far more quickly the second time, one of many reasons might be a lack of suitable indexes? That's to say, if a query is spending a lot of time loading data into memory, it might be doing so unnecessarily, and a suitable index or two might result in a faster first run.

    Of course, it's entirely dependent on what the query is doing.
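    A quick way to test that theory is to compare physical and logical reads across the two runs; if the first run's cost is mostly physical reads, an index that cuts the pages needed will also speed up the cold run. A sketch (the table and index names here are made up for illustration):

```sql
-- Hypothetical table/index names, for illustration only.
SET STATISTICS IO ON;

SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42;
-- First run: the Messages pane shows high "physical reads" (pages coming from disk).
-- Second run: physical reads drop to (near) zero; only "logical reads" from cache remain.

-- A covering index reduces the pages touched, so even the cold first run gets faster:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate);
```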

  • MVDBA (Mike Vessey)

    SSC-Insane

    Points: 21757

    I've just had a little cry in the corner, rocking back and forth

    a few years ago I walked into a new job and was told that a certain system had real performance issues. What they had found was that if a report timed out, running it again would be fine... my head was immediately screaming "buffer cache" (this was SQL 2000/2005)

    I found a sql agent job that was pre-running the reports at 8am to get them in cache and give the illusion of performance... oh yeah, never mind the stuff you pushed out of the cache to get a single report out.. 🙁

    thankfully, because the proc calls were in the sql agent job, I didn't have to trace them - just get the query plans and find all the bad stuff.

    now i'm going to go back to my little dark corner and cry about linked servers 🙂

    MVDBA

  • MVDBA (Mike Vessey)

    SSC-Insane

    Points: 21757

    julian.fletcher wrote:

    For "may be" read "very often is".

    Would you go further and say that if a query does run far more quickly the second time, one of many reasons might be a lack of suitable indexes? That's to say, if a query is spending a lot of time loading data into memory, it might be doing so unnecessarily, and a suitable index or two might result in a faster first run.

    Of course, it's entirely dependent on what the query is doing.

    one addition (because I quite often find it's either a missing index or somebody put a function in the WHERE clause... evil non-deterministic scalar functions that go off and query other tables... yuk)

    it could also just be a tipping point in the data size that has caused SQL to get a different, bad plan - or worse, parameter sniffing has kicked in

    MVDBA

  • ktflash

    Ten Centuries

    Points: 1145

    MVDBA (Mike Vessey) wrote:

    I found a sql agent job that was pre-running the reports at 8am to get them in cache and give the illusion of performance

     

    I am confused; this sounds like a bad thing.

    What is so bad about cache warming?

    I would like to use it myself for certain cube reports that take a long time the first time they are run in the morning.

    The issue I am running into is that they don't stay in cache very long because of other users and background processes.

    I want to be the very best
    Like no one ever was

  • erb2000

    SSCommitted

    Points: 1697

    How can I force SQL Server not to use the cache or other optimizations, so that the same query always takes the same amount of time? When trying to tune a query, I need consistent results to compare. Thanks.

  • jasona.work

    SSC-Forever

    Points: 49988

    ktflash wrote:

    MVDBA (Mike Vessey) wrote:

    I found a sql agent job that was pre-running the reports at 8am to get them in cache and give the illusion of performance

    I am confused, this sounds like a bad thing.

    What is so bad about cache warming?

    I'd expect the issue in Mike's case would be that they were "fixing" the problem for a report that was perhaps run once a day, which was pushing out everything ELSE in the cache.

    Like query plans, etc.

    So sure, the report users were happy ("Yay! Our report came out quickly!") but all the OTHER people, whose work was impacted when SQL had to generate and compile a new query plan (because the existing one had been pushed out of the cache), meaning things were slower for them, would be...

    Less happy.

    And of course, the fingers would be pointed at the database / DBA.

  • julian.fletcher

    SSCrazy

    Points: 2825

    Declare @DBID Int Set @DBID = DB_ID ()
    Checkpoint
    DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS
    DBCC FLUSHPROCINDB (@DBID) WITH NO_INFOMSGS

    That will clear execution plans and data from memory, which would be one way of doing it.

  • MVDBA (Mike Vessey)

    SSC-Insane

    Points: 21757

    jasona.work wrote:

    ktflash wrote:

    MVDBA (Mike Vessey) wrote:

    I found a sql agent job that was pre-running the reports at 8am to get them in cache and give the illusion of performance

    I am confused, this sounds like a bad thing.

    What is so bad about cache warming?

    I'd expect the issue in Mike's case would be that they were "fixing" the problem for a report that was perhaps run once a day, which was pushing out everything ELSE in the cache.

    Like query plans, etc.

    So sure, the report users were happy ("Yay! Our report came out quickly!") but all the OTHER people, whose work was impacted when SQL had to generate and compile a new query plan (because the existing one had been pushed out of the cache), meaning things were slower for them, would be...

    Less happy.

    And of course, the fingers would be pointed at the database / DBA.

    absolutely correct - and to answer ktflash… cache warming just ignores the underlying problem... if you ever call a support centre and they say "oh, just re-run that page, it will work this time, it just takes time to warm up", then as a decent DBA or developer you do a big fat "face palm"

    if I encounter a situation where cache warming is used, I start by getting a query plan and go from there. Think about how evil the locking is when you run a report that takes 15 minutes and then run it again... it also smacks of way too little RAM allocated to SQL - so your page life expectancy and buffer cache hit ratio will be well under par.
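    For reference, page life expectancy can be checked straight from the DMVs; a sketch (compare against your own baseline rather than a magic number):

```sql
-- Page life expectancy: expected seconds a data page survives in the buffer pool.
-- Persistently low values relative to your baseline suggest buffer cache pressure.
SELECT [object_name], counter_name, cntr_value AS seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND [object_name] LIKE '%Buffer Manager%';
```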

    MVDBA

  • MVDBA (Mike Vessey)

    SSC-Insane

    Points: 21757

    julian.fletcher wrote:

    Declare @DBID Int Set @DBID = DB_ID ()
    Checkpoint
    DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS
    DBCC FLUSHPROCINDB (@DBID) WITH NO_INFOMSGS

    That will clear execution plans and data from memory, which would be one way of doing it.

    am I wrong in thinking that, rather than flushing the plan cache for an entire DB, you can find all the plans for a proc and get rid of just those? I'm 90% sure it's something like

    SELECT * FROM Person.Address;  
    GO
    SELECT plan_handle, st.text
    FROM sys.dm_exec_cached_plans
    CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
    WHERE text LIKE N'SELECT * FROM Person.Address%';
    GO

    then

    DBCC FREEPROCCACHE (whatever the plan handle value is);

    I only saw this recently, as I didn't want to drop the entire cache because of one bad plan

    …I just wish we had a way to drop the data for a single table from the buffer cache
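    There's no supported way to evict just one table, but you can at least see how much of the buffer pool each table is occupying; a sketch using sys.dm_os_buffer_descriptors (the join below covers in-row and row-overflow data - LOB allocation units join on partition_id instead - and the query itself can be slow on big instances):

```sql
-- Buffer pool pages held per object in the current database.
SELECT OBJECT_NAME(p.object_id) AS table_name,
       COUNT(*)            AS buffered_pages,
       COUNT(*) * 8 / 1024 AS buffered_mb      -- 8 KB pages
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
  ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
  ON au.container_id = p.hobt_id
 AND au.type IN (1, 3)                          -- IN_ROW_DATA, ROW_OVERFLOW_DATA
WHERE bd.database_id = DB_ID()
GROUP BY p.object_id
ORDER BY buffered_pages DESC;
```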

    MVDBA

  • DesNorton

    SSC-Insane

    Points: 23087

    MVDBA (Mike Vessey) wrote:

    julian.fletcher wrote:

    Declare @DBID Int Set @DBID = DB_ID ()
    Checkpoint
    DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS
    DBCC FLUSHPROCINDB (@DBID) WITH NO_INFOMSGS

    That will clear execution plans and data from memory, which would be one way of doing it.

    am I wrong in thinking that, rather than flushing the plan cache for an entire DB, you can find all the plans for a proc and get rid of just those? I'm 90% sure it's something like

    SELECT * FROM Person.Address;  
    GO
    SELECT plan_handle, st.text
    FROM sys.dm_exec_cached_plans
    CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
    WHERE text LIKE N'SELECT * FROM Person.Address%';
    GO

    then

    DBCC FREEPROCCACHE (whatever the plan handle value is);

    I only saw this recently, as I didn't want to drop the entire cache because of one bad plan

    …I just wish we had a way to drop the data for a single table from the buffer cache

     

    I use the following code to drop a specific plan

    DECLARE @PlanHandle VARBINARY(64);

    SELECT @PlanHandle = st.plan_handle
    FROM sys.dm_exec_procedure_stats AS st
    WHERE st.object_id = OBJECT_ID('dbo.SomeProcedureName')
      AND st.database_id = DB_ID();

    IF @PlanHandle IS NOT NULL
    BEGIN
        DBCC FREEPROCCACHE(@PlanHandle);
    END;

  • ktflash

    Ten Centuries

    Points: 1145

    MVDBA (Mike Vessey) wrote:

    jasona.work wrote:

    ktflash wrote:

    MVDBA (Mike Vessey) wrote:

    I found a sql agent job that was pre-running the reports at 8am to get them in cache and give the illusion of performance

    I am confused, this sounds like a bad thing.

    What is so bad about cache warming?

    I'd expect the issue in Mike's case would be that they were "fixing" the problem for a report that was perhaps run once a day, which was pushing out everything ELSE in the cache.

    Like query plans, etc.

    So sure, the report users were happy ("Yay! Our report came out quickly!") but all the OTHER people, whose work was impacted when SQL had to generate and compile a new query plan (because the existing one had been pushed out of the cache), meaning things were slower for them, would be...

    Less happy.

    And of course, the fingers would be pointed at the database / DBA.

    absolutely correct - and to answer ktflash… cache warming just ignores the underlying problem... if you ever call a support centre and they say "oh, just re-run that page, it will work this time, it just takes time to warm up", then as a decent DBA or developer you do a big fat "face palm"

    if I encounter a situation where cache warming is used, I start by getting a query plan and go from there. Think about how evil the locking is when you run a report that takes 15 minutes and then run it again... it also smacks of way too little RAM allocated to SQL - so your page life expectancy and buffer cache hit ratio will be well under par.

     

    My problem is a query report in a MOLAP cube using MDX.

    An SSAS problem, not a DBA problem (I guess?)

    I want to be the very best
    Like no one ever was

  • MVDBA (Mike Vessey)

    SSC-Insane

    Points: 21757

    erb2000 wrote:

    How can I force Sql Server to not use the cache or other optimizations so that the same query always takes the same amount of time? When trying to tune a query I need consistent results to compare. Thanks.

    I've just realised that FREEPROCCACHE is pointless here - if you are tuning, you will get a new plan anyway - it's really just about the buffer cache
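    On a test box (never production), one way to get comparable cold-run timings is to flush the buffer cache before each run; a sketch:

```sql
-- Test/dev only: DROPCLEANBUFFERS evicts cached data pages instance-wide.
CHECKPOINT;                               -- flush dirty pages so they can be dropped
DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;   -- start each timed run from a cold cache

SET STATISTICS TIME ON;                   -- report compile and execution times
-- ... run the query being tuned here ...
SET STATISTICS TIME OFF;
```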

    MVDBA

  • nicksamuel

    Valued Member

    Points: 71

    Just a question about flushing temp tables.

    From what I have experienced, if a routine creates one or more temp tables on the first run, these are just used again on the second run without being recreated, even if preceded by IF EXISTS ... DROP - if it is in the same batch run.

    That could speed things up if it is happening on the production server.

  • MVDBA (Mike Vessey)

    SSC-Insane

    Points: 21757

    nicksamuel wrote:

    Just a question about flushing temp tables.

    From what I have experienced, if a routine creates one or more temp tables on the first run, these are just used again on the second run without being recreated, even if preceded by IF EXISTS ... DROP - if it is in the same batch run.

    That could speed things up if it is happening on the production server.

    I'm not sure this is true - in 20 years of digging through issues I've never seen it. Temp tables are scoped to the session and are destroyed at the end of the session (OK, ##tables are slightly different)

    even if it is true, the impact would be so insignificant as not to be measurable.

    if you have something to prove me wrong on this, then I'm happy to take new knowledge on board

    MVDBA

