upgrade to 2016 causing performance problems

  • MJ-1115929 - Tuesday, February 5, 2019 12:56 AM

    MJ-1115929 - Monday, February 4, 2019 12:27 AM

    frederico_fonseca - Sunday, February 3, 2019 2:57 PM

    Just to list the options.
    To use the old cardinality estimator (CE 70), the following options are available - the final result is the same insofar as it relates to the CE:

    ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;  - available if on 2016 SP1 or higher - this is the best option as it keeps all the other 2016 goodies
    Trace Flag 9481 - less desirable if set at server level - can be used at query level
    Database compatibility level lower than 120 - loses the remaining options available at higher compatibility levels

    If either of the above is set and the queries are still slow, then other things are affecting the query - MAXDOP, cost threshold for parallelism, number of cores, memory (both max memory and memory availability vs. the old system), index/column stats. This assumes the same volume of data.
    I would also check trace flags set globally on the new vs. the old server - you never know if something was done (or not done) at this level.
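    For reference, the three options above can be sketched in T-SQL like this (a sketch only; dbo.SomeTable and [YourDatabase] are placeholder names, and OPTION (QUERYTRACEON) requires elevated permissions):

```sql
-- Option 1 (preferred on 2016 SP1+): database-scoped setting, keeps the other 2016 behaviour
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- Option 2: trace flag 9481 scoped to a single query (old CE for this statement only)
SELECT TOP (10) *
FROM dbo.SomeTable            -- placeholder table
OPTION (QUERYTRACEON 9481);

-- Option 3: drop the whole database below compatibility level 120
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 110;

-- To compare globally enabled trace flags on the old vs. the new server:
DBCC TRACESTATUS(-1);
```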

    Dear frederico

    thanks for pointing out the workaround, we will try this config and post back the results here

    once again thanks
    regards
    MJ

    Dear Jeff/Frederico,
    We have tried the legacy setting with the compatibility mode changed to 2016, but there was not much difference; the queries took the same amount of time. MAXDOP is set to 4 and the cost threshold for parallelism is 5. What other areas should we cover? Please advise.

    thanks
    MJ

    Start by changing the threshold to 25 or 30.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden - Tuesday, February 5, 2019 6:54 AM

    Start by changing the threshold to 25 or 30.

    He, he, I would start somewhere between 750 and 1000, remember that those are not relative figures but actual "cost" figures 😉
    😎
    The maximum value is 32767 and funny enough, even at that setting, things can still go parallel.
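    For anyone following along, the threshold is changed with sp_configure (a sketch; 30 is just one of the starting values suggested in this thread):

```sql
-- 'cost threshold for parallelism' is an advanced option, so expose it first
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Set the threshold (try 25-30 or 750-1000 as discussed above; the maximum is 32767)
EXEC sys.sp_configure 'cost threshold for parallelism', 30;
RECONFIGURE;
```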

  • Eirikur Eiriksson - Tuesday, February 5, 2019 7:06 AM

    He, he, I would start somewhere between 750 and 1000, remember that those are not relative figures but actual "cost" figures 😉
    😎
    The maximum value is 32767 and funny enough, even at that setting, things can still go parallel.

    Dear Jeff and Eirikur

    Will try both 25/30 and 750/1000, check the performance, and report back the results here.

    thanks
    MJ

  • Jeff Moden - Tuesday, February 5, 2019 6:54 AM

    Start by changing the threshold to 25 or 30.

    Sure, will try those numbers and revert.

    thanks
    MJ

  • Have you run "sp_configure" against both machines (with advanced options, of course) and compared?

    Ensure everything makes sense.
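    One way to do that comparison (a sketch; run on each server and diff the two result sets):

```sql
-- Expose the advanced options so the full list is returned
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

-- List every instance-level setting
EXEC sys.sp_configure;

-- Or query the catalog view for an easily diff-able, ordered result set
SELECT name, value, value_in_use
FROM sys.configurations
ORDER BY name;
```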

  • MJ-1115929 - Wednesday, February 6, 2019 4:07 AM

    Dear Jeff and Eirikur

    Will try both 25/30 and 750/1000, check the performance, and report back the results here.

    thanks
    MJ

    I've found that going that high on my systems can really put the kibosh on performance (we have a bunch of legacy code that could really use some help). During such a change, you might want to actively watch PerfMon like I did. As with all else in SQL Server, "It Depends".


  • Jeff Moden - Thursday, February 7, 2019 6:46 AM


    I've found that going that high on my systems can really put the kibosh on performance (we have a bunch of legacy code that could really use some help). During such a change, you might want to actively watch PerfMon like I did. As with all else in SQL Server, "It Depends".

    In my experience, finding the code's parallelism threshold is very useful. One can of course go to the execution plan and pick up the cost figure, but bear in mind that the CTFP figure will affect the optimizer. I'm not saying that the CTFP should be excessive, but knowing where the cut-off is can be very helpful.
    😎
    Another option is simply to run the code with OPTION (MAXDOP [N]) and assess the differences between N(min) and N(max) 😉
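    That per-query override might look like this (dbo.SomeLargeTable is a placeholder; compare the runtimes and plans between the two settings):

```sql
-- Force a serial plan for this statement only
SELECT COUNT(*) AS row_count
FROM dbo.SomeLargeTable
OPTION (MAXDOP 1);

-- Allow up to four parallel threads for this statement only
SELECT COUNT(*) AS row_count
FROM dbo.SomeLargeTable
OPTION (MAXDOP 4);
```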
