Restored database from 2012 to 2017 - a table was slow as a snail to query, until I recreated it with another name - what gives?

  • Hello,
    My first post here. The title is probably hard to understand, so here's the story:

    We run 2012 in production, and I am in the process of migrating to 2017. I got a fresh server with 2017 and backup+restored the databases from 2012 to 2017. I also did lots of other DBA work: tuning the server options, users/logins, jobs, etc.

    The key to this database is a large and complex SP that performs about 20-30 subqueries. It is used for faceted searching of our data (holiday rental properties). With a certain set of parameters I set up for testing, the query would take around 8 seconds in production. On the 2017 server it was bafflingly slow - around 60 seconds. I followed the execution plan, and everything seemed fine - it hit the right indexes, just as in production. However, one statement took around 50 seconds on its own: a fairly simple INSERT INTO #tempTable... SELECT FROM #anotherTempTable with two INNER JOINs on two very simple tables. These tables each have only two columns and about 3-4 million rows of denormalized data. They had the right indexes, the same as in production, and I tried both rebuilding and reorganizing the indexes, but nothing helped.

    Finally, as a desperate measure, I made a clone of each of these two tables (named table_name2), did an INSERT INTO table_name2 SELECT * FROM old_table, changed the SP to look at the new table_name2 instead, and BAM - the query now performed super fast, as expected.

    Can anyone explain this behaviour? I'm worried about which other tables might be affected and need to be "cloned". It's as if the file structure or indexes were somehow corrupted by the backup+restore I did.
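
    [Editor's note: a minimal sketch of the situation described above. All table and column names (#candidates, dbo.PropertyFacet, PropertyId, etc.) are hypothetical placeholders; the original post does not name them.]

    ```sql
    -- Hypothetical shape of the slow statement: a temp-table-to-temp-table
    -- insert joined to two narrow, denormalized lookup tables (~3-4M rows each).
    INSERT INTO #results (PropertyId, FacetValue)
    SELECT c.PropertyId, f.FacetValue
    FROM #candidates AS c
    INNER JOIN dbo.PropertyFacet AS f ON f.PropertyId = c.PropertyId
    INNER JOIN dbo.FacetLookup   AS l ON l.FacetValue = f.FacetValue;

    -- The workaround that restored performance: clone each lookup table
    -- under a new name, copy the rows, and repoint the stored procedure.
    SELECT * INTO dbo.PropertyFacet2 FROM dbo.PropertyFacet;
    -- (then recreate the same indexes on PropertyFacet2 and alter the SP)
    ```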

  • Possibly...

    If the indexes suffered a decent amount of logical fragmentation, it's possible that the read-aheads on the new server got you during the initial load into memory.  Creating the new tables built fresh indexes with no fragmentation, so the first load was much faster because of the speed at which the read-aheads could occur.
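
    [Editor's note: fragmentation after a restore can be checked with the standard DMV before resorting to cloning tables. A sketch, with a placeholder table name:]

    ```sql
    -- Check logical fragmentation of the suspect table's indexes.
    SELECT i.name AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID(N'dbo.PropertyFacet'),
             NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id;

    -- A rebuild writes the index pages out contiguously, which is what
    -- keeps sequential read-aheads effective on a cold cache:
    ALTER INDEX ALL ON dbo.PropertyFacet REBUILD;
    ```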

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • a 39534 - Sunday, June 24, 2018 8:40 AM


    Did you ever update your statistics after upgrading?
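
    [Editor's note: for reference, updating statistics after an upgrade/restore looks roughly like this; the table name is a placeholder.]

    ```sql
    -- Quick pass: resample statistics on every table in the current database.
    EXEC sp_updatestats;

    -- Or, more thoroughly, for a specific suspect table:
    UPDATE STATISTICS dbo.PropertyFacet WITH FULLSCAN;
    ```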

    Sue

    In addition to Sue's and Jeff's comments, there was a change in the optimizer that took place in SQL 2014. That change has taken many a query and made it perform horribly. Most of the time it's the poorly written queries that have that issue, but every once in a while it knocks down a perfectly good query. This is easy enough to test for if you still have the original "problem" table hanging around. There is a query hint, first available in SQL 2016, that lets you specify the old (2012) optimizer version on a per-query basis. I don't recall the exact syntax for it, but it should be easily found with a web search. Chances are Jeff knows this and perhaps he can post back with the details. I'm in the midst of being super busy this week and don't have the time today to go find it. Test the query both with and without that option applied, and you can see if that contributed to the problem. Even if not, it's a good option to know exists, as there may yet be future occasions for other queries to "fall down go boom" due to the changes in the optimizer. Note that I'm not saying it's a panacea for such problems, but it is often a good way to judge whether or not a query has been written optimally.
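
    [Editor's note: the syntax being referred to is likely one of the following. Trace flag 9481 forces the pre-2014 (legacy) cardinality estimator per query; the USE HINT form arrived in SQL Server 2016 SP1; the database-scoped configuration is SQL Server 2016+. The query itself is a placeholder.]

    ```sql
    -- Per-query: force the legacy (pre-2014) cardinality estimator.
    SELECT COUNT(*)
    FROM dbo.PropertyFacet
    OPTION (QUERYTRACEON 9481);

    -- SQL Server 2016 SP1 and later also accept the hint form:
    -- OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

    -- Database-wide alternative (SQL Server 2016+):
    ALTER DATABASE SCOPED CONFIGURATION
        SET LEGACY_CARDINALITY_ESTIMATION = ON;
    ```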

    Steve (aka sgmunson) 🙂 🙂 🙂
    Rent Servers for Income (picks and shovels strategy)

  • Since things worked fine after rebuilding the table, in this case, I'm thinking that it has nothing to do with the change in the cardinality estimator.

    --Jeff Moden


  • Jeff Moden - Tuesday, June 26, 2018 7:37 AM

    Since things worked fine after rebuilding the table, in this case, I'm thinking that it has nothing to do with the change in the cardinality estimator.

    You're probably right, but I figured at least mentioning it is usually a good idea and something useful to be aware of. Given the relative ease of testing such a thing, it could end up providing useful insight, even if it really had no actual bearing on the problem at hand.

    Steve (aka sgmunson) 🙂 🙂 🙂
    Rent Servers for Income (picks and shovels strategy)

  • Thanks all for the tips 🙂

    I found this article online:

    https://thomaslarock.com/2017/04/upgrading-sql-server-2016-post-upgrade-tasks/

    It strongly recommends updating statistics, amongst other things. It's too late to see whether that was the culprit now, since I remade the tables by hand, but I've updated all statistics and will do the same when we upgrade the production database. It certainly can't hurt.
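
    [Editor's note: the post-upgrade tasks in the linked article can be scripted roughly like this. The database name is a placeholder, and 140 is the SQL Server 2017 compatibility level.]

    ```sql
    -- Typical post-restore housekeeping on the new instance.
    ALTER DATABASE MyRentalsDb SET COMPATIBILITY_LEVEL = 140;  -- 140 = SQL 2017

    USE MyRentalsDb;
    DBCC CHECKDB WITH NO_INFOMSGS;     -- verify integrity after the restore
    DBCC UPDATEUSAGE (0);              -- correct page/row count inaccuracies
    EXEC sp_updatestats;               -- refresh statistics under the new CE
    ```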
