Performance of the new (2014) Cardinality Estimator

  • Has anybody had any experience (good or bad) with this? Microsoft's own documentation (http://msdn.microsoft.com/en-us/library/dn673537.aspx) hints at improvements while warning:

    While many workloads will benefit from the new cardinality estimator changes, in some cases, workload performance may degrade without a specific tuning effort.

    and

    Given the risk of performance regression, it is very important that existing applications be thoroughly tested on the new cardinality model before migrating to production.

    We did some quick tests of code that performs adequately on SQL 2005 through 2012 and almost immediately found parts that ground to a halt on 2014. Does this effectively mean that we need to performance test each and every part of our application? At the moment, we're going to have to demand that any customer with a 2014 server run our database in 2012 mode.
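    For reference, "2012 mode" here just means the database compatibility level, which also controls which cardinality estimator is used. A minimal sketch (YourDatabase is a placeholder name):

    -- Keep the pre-2014 (legacy) cardinality estimator by staying at the 2012 compatibility level.
    ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 110;  -- 110 = SQL Server 2012

    -- Alternatively, the legacy estimator can be requested for an individual query while the
    -- database itself stays at level 120:
    -- SELECT ... OPTION (QUERYTRACEON 9481);  -- trace flag 9481 = old estimator for this query only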

    I have a few dev boxes running 2014 and have experienced some cases where 2014 gives me a parallel query plan while on 2012 I get a serial plan. The parallel plan seems to be about the same speed as the serial version but with notably higher CPU time. This is a bad thing in my opinion. I have found myself using maxdop 1 more often in 2014.
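    For context, this is the kind of hint I mean; a rough sketch only, with made-up table and column names:

    -- Pin a single statement to a serial plan so the server-wide max degree of parallelism is left alone.
    SELECT   SomeColumn, COUNT(*) AS RowsPerValue
    FROM     dbo.SomeTable
    GROUP BY SomeColumn
    OPTION   (MAXDOP 1);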

    On a different note, I just read the white paper you linked to. It's excellent!

    "I cant stress enough the importance of switching from a sequential files mindset to set-based thinking. After you make the switch, you can spend your time tuning and optimizing your queries instead of maintaining lengthy, poor-performing code."

    -- Itzik Ben-Gan 2001

  • You do need to fully test for performance. While some queries may perform worse, there are many cases where the query will perform much much better.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Alan.B (10/2/2014)


    I have a few dev boxes running 2014 and have experienced some cases where 2014 gives me a parallel query plan while on 2012 I get a serial plan. The parallel plan seems to be about the same speed as the serial version but with notably higher CPU time. This is a bad thing in my opinion. I have found myself using maxdop 1 more often in 2014.

    On a different note, I just read the white paper you linked to. It's excellent!

    Quick question: what are your 'cost threshold for parallelism' settings? I've found the defaults to be far too low.

    😎

    A (very) general comment on the performance difference between 2012 and 2014: I've found that 2014 performs better in most cases, sometimes by up to 50% in certain areas such as CHARINDEX-based string splitting, and I haven't found anything performing worse.
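    To give an idea of the kind of code I've been comparing, here is a stripped-down CHARINDEX-based splitter; a sketch only, and the sample values and the sys.all_columns row source are just for illustration (a real splitter would use a proper tally table):

    DECLARE @list  varchar(8000) = 'alpha,bravo,charlie,delta';
    DECLARE @delim char(1)       = ',';

    WITH Tally (n) AS
    (
        -- Number the character positions 1..LEN(@list).
        SELECT TOP (LEN(@list)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
        FROM   sys.all_columns
    )
    SELECT Item = SUBSTRING(@list, n,
                    ISNULL(NULLIF(CHARINDEX(@delim, @list, n), 0), LEN(@list) + 1) - n)
    FROM   Tally
    WHERE  n = 1
       OR  SUBSTRING(@list, n - 1, 1) = @delim;  -- each element starts at position 1 or just after a delimiter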

  • Eirikur Eiriksson (10/4/2014)


    Alan.B (10/2/2014)


    I have a few dev boxes running 2014 and have experienced some cases where 2014 gives me a parallel query plan while on 2012 I get a serial plan. The parallel plan seems to be about the same speed as the serial version but with notably higher CPU time. This is a bad thing in my opinion. I have found myself using maxdop 1 more often in 2014.

    On a different note, I just read the white paper you linked to. It's excellent!

    Quick question: what are your 'cost threshold for parallelism' settings? I've found the defaults to be far too low.

    😎

    A (very) general comment on the performance difference between 2012 and 2014: I've found that 2014 performs better in most cases, sometimes by up to 50% in certain areas such as CHARINDEX-based string splitting, and I haven't found anything performing worse.

    Sorry for getting back to you late; I'm just getting back from vacation, and I'm in a different city than my servers so I can't reach them over the VPN at the moment. The instance I was referring to is a SQL Server 2014 Enterprise box (downloaded via MSDN), and I did not change the defaults except for the memory settings and drive configuration. So whatever the default is, that's what it's set to. I will tell you exactly tomorrow.

    "I can't stress enough the importance of switching from a sequential files mindset to set-based thinking. After you make the switch, you can spend your time tuning and optimizing your queries instead of maintaining lengthy, poor-performing code."

    -- Itzik Ben-Gan 2001

  • Alan.B (10/6/2014)


    So whatever the default is, that's what it's set to. I will tell you exactly tomorrow.

    Which means it'll be cost threshold of 5 and maxdop of 0. Leave maxdop, set cost threshold to something higher. 25, 30, 50, something. There's no easy way to tell the perfect value, but 5 is far too low.
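    If it helps, a sketch of checking and then raising it (the 50 below is just one of the values mentioned, not a magic number; it's an advanced option, hence the first sp_configure call):

    SELECT name, value_in_use
    FROM   sys.configurations
    WHERE  name IN ('cost threshold for parallelism', 'max degree of parallelism');

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'cost threshold for parallelism', 50;
    RECONFIGURE;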

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (10/7/2014)


    Alan.B (10/6/2014)


    So whatever the default is, that's what it's set to. I will tell you exactly tomorrow.

    Which means it'll be cost threshold of 5 and maxdop of 0. Leave maxdop, set cost threshold to something higher. 25, 30, 50, something. There's no easy way to tell the perfect value, but 5 is far too low.

    Yep, it's 5. It's 50 on the 2012 server I mentioned earlier in this thread, which would explain the behavior I described.

    "I cant stress enough the importance of switching from a sequential files mindset to set-based thinking. After you make the switch, you can spend your time tuning and optimizing your queries instead of maintaining lengthy, poor-performing code."

    -- Itzik Ben-Gan 2001

  • julian.fletcher (10/2/2014)


    Has anybody had any experience (good or bad) with this? Microsoft's own documentation (http://msdn.microsoft.com/en-us/library/dn673537.aspx) hints at improvements while warning:

    While many workloads will benefit from the new cardinality estimator changes, in some cases, workload performance may degrade without a specific tuning effort.

    and

    Given the risk of performance regression, it is very important that existing applications be thoroughly tested on the new cardinality model before migrating to production.

    We did some quick tests of code that performs adequately on SQL 2005 through 2012 and almost immediately found parts that ground to a halt on 2014. Does this effectively mean that we need to performance test each and every part of our application? At the moment, we're going to have to demand that any customer with a 2014 server run our database in 2012 mode.

    I've not personally suffered such a change, but every time MS comes out with a new version, SSC is filled with posts about previously "fast" code becoming much slower. The conversions from 2000 to 2005 and from 2005 to 2008 seemed especially bad in this respect.

    I don't know how much of this is just temporary slowness from having to build new execution plans, or maybe from having to update statistics, or something else.
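    If someone wanted to rule that part out, this is roughly what I'd try right after moving a database to a 2014 box; a sketch only, with a placeholder database name:

    USE YourDatabase;
    EXEC sys.sp_updatestats;  -- resample any statistics that are out of date
    -- UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;  -- or hit specific tables with a full scan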

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (10/7/2014)


    julian.fletcher (10/2/2014)


    Has anybody had any experience (good or bad) with this? Microsoft's own documentation (http://msdn.microsoft.com/en-us/library/dn673537.aspx) hints at improvements while warning:

    While many workloads will benefit from the new cardinality estimator changes, in some cases, workload performance may degrade without a specific tuning effort.

    and

    Given the risk of performance regression, it is very important that existing applications be thoroughly tested on the new cardinality model before migrating to production.

    We did some quick tests of code that performs adequately on SQL 2005 through 2012 and almost immediately found parts that ground to a halt on 2014. Does this effectively mean that we need to performance test each and every part of our application? At the moment, we're going to have to demand that any customer with a 2014 server run our database in 2012 mode.

    I've not personally suffered such a change, but every time MS comes out with a new version, SSC is filled with posts about previously "fast" code becoming much slower. The conversions from 2000 to 2005 and from 2005 to 2008 seemed especially bad in this respect.

    I don't know how much of this is just temporary slowness from having to build new execution plans, or maybe from having to update statistics, or something else.

    Or how much is related to hardware settings for the new server, or SQL settings on the new box, or any number of other variables.

    😉 😎
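    For what it's worth, a quick sketch of eyeballing those variables side by side on the old and new boxes (both queries run on 2012 and 2014):

    SELECT cpu_count, hyperthread_ratio, physical_memory_kb
    FROM   sys.dm_os_sys_info;

    SELECT name, value_in_use
    FROM   sys.configurations
    ORDER BY name;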

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events
