Question: performance loss on update statistics after upgrade from SQL Server 2012

  • Good afternoon,
    I'm currently working for an SAP consultancy firm as a junior SAP technical consultant,
    but due to my previous work experience, most questions about SQL Server are thrown my way.

    One of our customers "recently" (12-13 May 2018) upgraded their underlying RDBMS from SQL Server 2012 to SQL Server 2016.
    Since then there have been serious performance issues with 2 jobs:
    UPDATE_STATS_XYZ_DAILY and UPDATE_STATS_XYZ
    (XYZ being a replacement for the customer's name)
    Both run stored procedures of the same name, set up on SQL Server 2012.
        
    Before the upgrade the daily job took 5 hours, while the other one (run on Sundays) took a day and 22 hours.
    After the upgrade the daily takes 12 to 13 hours to finish, while the weekly can run over 3 full days.

    Both jobs do the following:

      -- SET NOCOUNT ON added to prevent extra result sets from
      -- interfering with SELECT statements.
      SET NOCOUNT ON;

      -- Insert statements for procedure here
      UPDATE STATISTICS table1 WITH FULLSCAN;
      ...
      UPDATE STATISTICS tableX WITH SAMPLE 15 PERCENT;
      ...
      UPDATE STATISTICS tableZ WITH FULLSCAN;
        

        
    The daily job only covers a subset of the tables, while the weekly one covers all 65 in the database.
    The data is being mirrored to another server.
    There is a backup of the log files every 15 minutes and a daily full backup.

    Any help in improving performance or identifying a potential cause is greatly appreciated.

  • Interesting. Is the underlying hardware exactly the same as it was previously?

    If you haven't even tried to resolve your issue, please don't expect the hard-working volunteers here to waste their time providing links to answers which you could easily have found yourself.

  • Probably related to the new cardinality estimator.

    Identify the queries that have degraded in performance, tune them or add the hint to force the legacy CE.
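
    For reference, forcing the legacy CE on an individual query might look something like this minimal sketch (the table and column names are placeholders, not from this thread). The USE HINT syntax is available from 2016 SP1; the same thing can also be set database-wide:

      SELECT o.OrderID, o.OrderDate
      FROM dbo.Orders AS o
      WHERE o.CustomerID = 42
      OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

      -- Database-wide alternative on SQL Server 2016:
      ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;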

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Phil Parkin - Wednesday, August 1, 2018 5:46 AM

    Interesting. Is the underlying hardware exactly the same as it was previously?

    The underlying hardware has remained the same.

    GilaMonster - Wednesday, August 1, 2018 5:50 AM

    Probably related to the new cardinality estimator.

    Identify the queries that have degraded in performance, tune them or add the hint to force the legacy CE.

    We will look into this.

  • And, since you're on 2016, you can take advantage of the Query Store to force plans. It won't help now, since the upgrade is done, but in the future, the way to upgrade from pre-2014 to 2016/2017/greater is to first turn on Query Store, leave the compatibility level at the old version, run the load for a time, then change the compatibility level. Any regressions can be dealt with through plan forcing.
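
    In rough T-SQL terms, that approach might look like the following sketch (the database name and the query/plan IDs are placeholders):

      -- Capture a baseline under the old optimizer behaviour
      ALTER DATABASE [YourSapDb] SET QUERY_STORE = ON;
      ALTER DATABASE [YourSapDb] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
      ALTER DATABASE [YourSapDb] SET COMPATIBILITY_LEVEL = 110;  -- SQL Server 2012 behaviour

      -- ... run the normal workload for a representative period ...

      -- Then switch to the new optimizer and force plans for any regressions found in Query Store
      ALTER DATABASE [YourSapDb] SET COMPATIBILITY_LEVEL = 130;  -- SQL Server 2016
      -- EXEC sp_query_store_force_plan @query_id = ..., @plan_id = ...;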

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • OK, we have determined the change in CE was the culprit.

    However, I have one question I can't find a satisfactory answer to:

    Why was the logic of the CE changed?

  • Resender - Wednesday, August 8, 2018 7:23 AM

    OK, we have determined the change in CE was the culprit.

    However, I have one question I can't find a satisfactory answer to:

    Why was the logic of the CE changed?

    The cardinality estimation engine was written for SQL Server 7 in 1998... and then never updated or changed until 2014. It was old and needed some help. The new engine is much better... most of the time. Sometimes, it's not.

    For example, the old estimator assumed that each column in a compound index added to the selectivity, not allowing for the fact that columns actually have relationships, and therefore each additional column did not actually make the index more and more selective. So, they did a regressive calculation that made each additional column a little less selective than the column before. This much more accurately reflects real data... in the overwhelming majority of cases, but not all. The old engine defaulted to 1 row for any estimates outside the histogram. The new one uses an average of the histogram. There are lots of other changes that make a lot of sense but, sometimes, hurt. Same with any change in behavior.
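
    To put rough numbers on that (assuming the "regressive calculation" described above is the exponential backoff Microsoft documented for the new CE, and using made-up figures): on a 1,000,000-row table with three predicates that each match 10% of the rows, the old estimator multiplies the selectivities out fully (0.1 × 0.1 × 0.1 = 0.001, about 1,000 rows), while the new one backs off the later predicates (roughly 0.1 × 0.1^(1/2) × 0.1^(1/4) ≈ 0.018, about 17,800 rows). Whether that is closer to reality depends entirely on how correlated the columns are, which is why the change helps most workloads but hurts some.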

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning
