Seriously Quirky Performance Behavior in a Query

  • I have come across a strange phenomenon when querying one of our historical databases and was wondering if someone could provide an explanation.

    I am seeing an enormous performance hit when I execute a query containing a literal date range (embedded in the query) as opposed to using variables.

    As an example, I have 2 queries:

    Query 1

    select count(a.[counter]), sum(a.[clmtotal])
    from [direct].dbo.[claims] a with (nolock)
    inner join [direct].dbo.[pclaim] c with (nolock) on c.counter = a.claimlink
    where a.[type] = 'P'
    and c.[hubid] = 1
    and c.billdate between '10/1/2012' and '11/30/2012'

    Execution time varies anywhere from 20 to 30 minutes (or more).

    Query 2

    declare @begdate datetime, @enddate datetime
    select @begdate = '10/1/2012', @enddate = '11/30/2012'

    select count(a.[counter]), sum(a.[clmtotal])
    from [direct].dbo.[claims] a with (nolock)
    inner join [direct].dbo.[pclaim] c with (nolock) on c.counter = a.claimlink
    where a.[type] = 'P'
    and c.[hubid] = 1
    and c.billdate between @begdate and @enddate

    Execution time averages around 5 to 10 seconds.

    Why the difference? I can easily change my code to use the variable approach, but I would really like an explanation for this enormous difference, as it may help improve performance in other projects.

    Thanks in advance.

  • My first guess would be all the implicit conversions it has to do, converting the strings '10/1/2012' and '11/30/2012' to datetime.

    Can you upload the execution plans in a SQLPLAN format so we can see the differences between the two?

    The link in my signature on posting performance problems will help if you are unsure.

  • done - see attached.

    Your explanation makes perfect sense though. Sometimes the answer is staring you right in the face - but you still need someone to point it out. :w00t:

    Thanks for the quick reply.

  • On the one with the string literals, try '20121130' and '20121001' instead of the US-formatted dates and see what happens.
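    To see why the format matters (a quick sketch you can run in any scratch database): the unseparated yyyymmdd format is interpreted the same way regardless of the session's DATEFORMAT or language settings, while separated US-style literals are not.

    ```sql
    -- yyyymmdd is DATEFORMAT/language-neutral for datetime literals
    set dateformat dmy;
    select cast('20121130' as datetime);      -- still 30 November 2012
    -- select cast('11/30/2012' as datetime); -- fails under dmy: there is no month 30
    set dateformat mdy;
    select cast('11/30/2012' as datetime);    -- parses again under mdy
    ```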

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Parameter sniffing, or the lack thereof, probably combined with stale statistics. SQL Server gets a completely incorrect row estimate when it can sniff the 'parameters' (in this case, the string literals), while the variables, because they can't be sniffed, get a rougher estimate that is less likely to be affected by stale stats.

    http://sqlinthewild.co.za/index.php/2007/11/27/parameter-sniffing/

    http://sqlinthewild.co.za/index.php/2008/02/25/parameter-sniffing-pt-2/

    Look at the two estimated plans posted (actuals would be far better) and compare the row estimates. Without the actual plans there's no way to see which of those estimates is more accurate, but I'll bet that the 5000 is closer than the 1.

    p.s. Watch those nolocks. http://blogs.msdn.com/b/davidlean/archive/2009/04/06/sql-server-nolock-hint-other-poor-ideas.aspx
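    One way to see the sniffing difference directly (a sketch against the original query): OPTION (RECOMPILE) lets the optimizer see the variable values at statement compile time, so the variable version should then produce the same estimates (and potentially the same slow plan) as the literal version.

    ```sql
    declare @begdate datetime, @enddate datetime
    select @begdate = '20121001', @enddate = '20121130'

    select count(a.[counter]), sum(a.[clmtotal])
    from [direct].dbo.[claims] a
    inner join [direct].dbo.[pclaim] c on c.counter = a.claimlink
    where a.[type] = 'P'
      and c.[hubid] = 1
      and c.billdate between @begdate and @enddate
    option (recompile)  -- variable values are now visible to the optimizer
    ```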

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I think your statistics are out of date on that table.

    Update them, then run the queries again (using WITH RECOMPILE) and see what the plans look like.

    The estimates, and therefore the plans, are vastly different. You would normally expect a query with literal values to be faster, but it estimates at 1 row.
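    A minimal sketch of the statistics update, using the table names from the queries above:

    ```sql
    -- Refresh statistics on both tables involved; FULLSCAN is the thorough
    -- option, though the default sampled update is often sufficient.
    update statistics [direct].dbo.[claims] with fullscan;
    update statistics [direct].dbo.[pclaim] with fullscan;
    ```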



    Clear Sky SQL
    My Blog

  • Dave Ballantyne (12/19/2012)


    Update them , then run the queries again (using WITH RECOMPILE) and see what the plans look like.

    No need to use recompile. The stats update alone would invalidate the plan.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (12/19/2012)


    Dave Ballantyne (12/19/2012)


    Update them , then run the queries again (using WITH RECOMPILE) and see what the plans look like.

    No need to use recompile. The stats update alone would invalidate the plan.

    Tru dat 🙂



    Clear Sky SQL
    My Blog
