Same database schema, different performance

  • I have two instances of the same database schema running on the same database server, and I am seeing very different results when running a certain query against them. Both databases were created with the same script, so the two tables involved in the query have exactly the same layout and exactly the same indexes. The only difference is the data contained in the two databases.

    The bottom line is that I am running the same query against identical tables in two different databases. The query against the first database takes 9 seconds, and the same query against the second database takes a looooong time. I have let it run for 16 minutes without any results. It will eventually return the correct results given enough time, but the difference in performance is huge. Even more confusing is the fact that the 9-second query is running against thousands more rows than the slower query.

    The only processes running against both databases are my interactive queries. I have run analysis of the query against both databases and received zero recommendations. I have rebuilt all indexes and updated statistics on both tables in both databases (a sketch of that kind of maintenance follows this post). The performance of the first database improved from 11 seconds to 9. The second database still takes FOREVER.

    I have been running queries against all kinds of system tables and views trying to pinpoint the trouble but I am running out of ideas. The only difference I see anywhere is the performance. Can someone please suggest a way to find the cause of the performance difference?
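
    For reference, a minimal sketch of the kind of maintenance described above, assuming SQL Server 2005 or later and a placeholder table name (dbo.MyTable is not taken from this thread; substitute the real tables from the query):

        -- Rebuild every index on the table and refresh its statistics.
        ALTER INDEX ALL ON dbo.MyTable REBUILD;
        UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
        -- On SQL Server 2000, DBCC DBREINDEX('dbo.MyTable') is the
        -- equivalent rebuild, and UPDATE STATISTICS works the same way.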

  • Have you checked for blocking while the long query is running? I know you said your interactive queries are the only things running, but since you manually updated the stats I can't think of any other obvious cause. Have you compared the query plans?
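
    A minimal sketch of such a blocking check, assuming SQL Server 2005 or later (on SQL Server 2000 you would look at sp_who2 or master..sysprocesses instead); run it from a second connection while the slow query is executing:

        -- Show requests that are blocked or currently waiting.
        SELECT session_id,
               blocking_session_id,   -- non-zero means this session is blocked by that session
               wait_type,
               wait_time,
               status
        FROM   sys.dm_exec_requests
        WHERE  blocking_session_id <> 0
            OR wait_type IS NOT NULL;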

  • Posting the execution plans will probably help, and please post the DDL and T-SQL. Seemingly insignificant data differences can cause vastly different execution plans depending on how the queries are written.
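
    One way to capture a plan to post, sketched here with a placeholder query (dbo.MyTable stands in for the real tables):

        -- Return the estimated plan as text instead of running the query.
        SET SHOWPLAN_TEXT ON;
        GO
        SELECT COUNT(*) FROM dbo.MyTable;   -- placeholder for the actual query
        GO
        SET SHOWPLAN_TEXT OFF;
        GO
        -- On SQL Server 2005 and later, SET STATISTICS XML ON executes the
        -- query and returns the actual plan as XML, which is easier to share.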

  • I had created execution plans for the query against both databases and noticed a difference, which I thought would point me to the potential problem. It turns out that, when I was rebuilding indexes and updating statistics, I had missed one of the tables in one database. Once that table's indexes were defragmented and its stats updated, I got identical execution plans.

    I'm on my home PC at the moment, so I don't have the execution plans to post yet. Since the query analysis and the execution plan results were identical for both databases, I started looking at system tables to find a clue where the difference might be (a couple of checks along those lines are sketched at the end of this post). I was having a hard time finding any promising leads, so I thought I would post the problem.

    In my research I have read a few comments regarding the initial size of a data file when a database is created from scratch. The point is that if the data file is 1MB on creation (for example) and the database grows to 40GB, the data file ends up physically fragmented on disk because SQL Server is forced to keep grabbing additional chunks of disk space to autogrow the file. This all makes sense, but both databases were created by executing a simple CREATE DATABASE command and then a script to create the tables and indexes, so both databases got the default sizing and growth behavior on creation. I would also expect rebuilding the indexes and updating stats to smooth over that kind of problem.

    I should also mention that the version of SQL Server I'm running does not support partitioning, so nothing is different between these databases in that regard - at least nothing explicitly declared. There definitely seems to be a difference between these databases behind the scenes.
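
    Two checks worth comparing between the databases, sketched for SQL Server 2005 or later with a placeholder table name (on SQL Server 2000, DBCC SHOWCONTIG('dbo.MyTable') gives the fragmentation numbers):

        -- Index fragmentation for one of the tables in the query.
        SELECT i.name AS index_name,
               ps.avg_fragmentation_in_percent,
               ps.page_count
        FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'),
                                              NULL, NULL, 'LIMITED') AS ps
               JOIN sys.indexes AS i
                 ON i.object_id = ps.object_id
                AND i.index_id  = ps.index_id;

        -- When each statistic on the table was last updated.
        SELECT name AS stats_name,
               STATS_DATE(object_id, stats_id) AS last_updated
        FROM   sys.stats
        WHERE  object_id = OBJECT_ID('dbo.MyTable');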
