Performance Issues when upgrading from SQL 2000

  • I have recently run into a performance issue after upgrading from SQL 2000 to SQL 2005 (SP3). Queries and jobs that ran just fine before are now taking up to three times as long to execute. I have run optimization plans, rebuilt indexes, and recompiled stored procedures, but haven't been able to speed things up.

    However, the weirdest issue I came across was a simple query. The query touched 3 tables with inner joins and the largest table contains about 300k records. The query performed no aggregates, but did select DISTINCT TOP 5. The query would take forever to run. I thought the issue might be the DISTINCT or the TOP, so I took out both. The query came back in one second with all 300k+ records. I ran it several times to make sure it wasn't a fluke.

    I then put the DISTINCT TOP 5 back in and it still hung. I changed the 5 to 10 and it came back in one second. I kept changing the number of records to return, and once I got down to TOP 8 it hung again. Apparently, I can't ask for fewer than 9 records without it hanging.

    Does anyone have any suggestions for why the upgrade could have caused major performance issues? We have rebooted the server and defragmented the drives as well. The server is running Windows Server 2003 Standard and has two 2.2 GHz dual-core processors, 3 GB of RAM, and a RAID 5 disk array.

  • Shot in the dark... Have you rebuilt the statistics since the upgrade?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
        Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • I was also going to suggest doing a complete rebuild of all statistics, preferably with FULL SCAN.

    I know there are some areas where performance degrades on upgrading servers from 2000 to 2005 or 2008, but they're pretty exceptional. You've got what sounds like a systemic problem. My guess, statistics.
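
    In case it helps, here's a minimal sketch of a full-scan statistics update (dbo.MyTable is a placeholder, not a table from this thread; sp_MSforeachtable is an undocumented but commonly used helper):

    -- Force a full-scan rebuild of statistics on a single table.
    UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

    -- Or hit every user table in the current database.
    EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN';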

    If that doesn't work, can you post the query and the actual execution plan?

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • I have updated the statistics, but I don't know if I did FULL SCAN or not. Most likely I have not.

    Query:

    SELECT DISTINCT TOP 10
        s.sale_key,
        fc.freight_carrier_name,
        ISNULL(os.freight_charges, 0),
        ISNULL(os.weight, 0)
    FROM sales s
    INNER JOIN locations l ON l.location_key = s.delivery_location_key
    INNER JOIN builds b ON b.transaction_key = s.sale_key AND b.transaction_type = 'SAL'
    INNER JOIN build_details bd ON bd.build_key = b.build_key
    INNER JOIN outbound_shipments os ON os.outbound_shipment_key = bd.outbound_shipment_key
    INNER JOIN freight_carriers fc ON fc.freight_carrier_key = os.carrier
    WHERE l.country_code = 'US'
      AND l.state = 'NY'
      AND s.status != 'CAN'
      AND b.status != 'CAN'
      AND os.status != 'CAN'
    ORDER BY s.sale_key DESC

    Execution Plan:

    Wasn't sure how to post that, so here is a link to a screengrab: http://dl.dropbox.com/u/143275/distinct-top-5-execution-plan.png

    Indexes:

    On builds: nonclustered on transaction_key and transaction_type

    On build_details: nonclustered on outbound_shipment_key

    The largest table is build_details with 311k records. All the others are under 20k.

  • I've spoken with our network admin and we update the statistics every night with the full scan.

    He also mentioned that several of our databases were highly fragmented, but the Windows defragmenter can't do anything because there isn't enough contiguous space available.

    Off hours, we're going to move the database files off that drive, defragment the drive, and then move the database files back.

    In other jobs I was running, I was getting waits of PAGEIOLATCH or something like that, which I understand pops up more in 2005 than 2000.

  • Here's a video on how to post an execution plan to SSC.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • I would evaluate the database with DBCC CHECKDB and also shrink it with DBCC SHRINKFILE.
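
    A minimal sketch of the CHECKDB part (the database name is a placeholder):

    -- Integrity check; NO_INFOMSGS limits the output to actual problems.
    DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS;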

    Regards,

    Iulian

  • Iulian -207023 (2/1/2011)


    I would evaluate the database with DBCC CHECKDB

    ... couldn't hurt...

    and also shrink it with DBCC SHRINKFILE.

    Um, why would you do that?

    REINDEX, sure. Shrinkfile? That's not going to help here.

    Shrinkfile, if anything, will make it WORSE.
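
    For the REINDEX side, a minimal sketch (dbo.MyTable is a placeholder; ALTER INDEX is the 2005-era syntax, and the older DBCC DBREINDEX still works there):

    -- Rebuild every index on one table; repeat per table as needed.
    ALTER INDEX ALL ON dbo.MyTable REBUILD;

    -- The SQL 2000-style equivalent:
    -- DBCC DBREINDEX ('dbo.MyTable');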


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • ron.richardson (2/1/2011)


    I've spoken with our network admin and we update the statistics every night with the full scan.

    He also mentioned that several of our databases were highly fragmented, but the Windows defragmenter can't do anything because there isn't enough contiguous space available.

    Off hours, we're going to move the database files off that drive, defragment the drive, and then move the database files back.

    Sounds like a solid starting point for performance; physical fragmentation never helps. If you're on a SAN, though, it can usually use the write cache to cope during the defrag if you're tight on space; it depends on the model and how it's used.

    In other jobs I was running, I was getting waits of PAGEIOLATCH or something like that, which I understand pops up more in 2005 than 2000.

    That's mostly a misconception (tied to the loss of pinning tables in memory), almost a myth. PAGEIOLATCH means the engine is waiting on the drives to cough up data. That process really didn't change between the two versions, except in some areas I don't remember well enough to quote. In general, I wouldn't expect a significant difference here.
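
    If you want to see what your top waits actually are, a quick sketch against the sys.dm_os_wait_stats DMV (new in 2005; TOP 10 is just an arbitrary cut-off) would be:

    -- Top wait types by total wait time since the last restart
    -- (or since the counters were last cleared).
    SELECT TOP 10
        wait_type,
        waiting_tasks_count,
        wait_time_ms,
        signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;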

    The areas that did change were the optimization mechanics and how the statistics heuristics are used, amongst other things. It supposedly got better, but the optimizer started relying on a few things under the hood that weren't as important in 2000, so a few rare cases got out of whack.

    This is an interesting puzzle though, one of those "curious incident of the dog in the night-time" cases. Why would a query that's bound to produce FEWER rows cause more issues? There's a link in my signature that can help you post your .sqlplan files. It's the optimization/indexing one.


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • Thanks Craig,

    and also shrink it with DBCC SHRINKFILE.

    Um, why would you do that?

    REINDEX, sure. Shrinkfile? That's not going to help here.

    Shrinkfile, if anything, will make it WORSE.

    My feeling was that it is better to reindex a 10 MB shrunk file than a 1000 MB unshrunk one.

    But it looks like you are right, Craig; I found some reasons here why shrinking is not an option.

    Regards,

    Iulian

  • 1) yep - don't shrink

    2) make the database big enough to contain 12-18 MONTHS of expected data growth (including indexing sizes) - see the sketch after this list

    3) you MUST update ALL statistics with a FULL SCAN as part of a version upgrade. No ifs, ands or buts on that.
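
    A rough sketch of point 2 above (the database name, logical file name, and size are all placeholders, not values from this thread):

    -- Pre-grow the data file so normal growth doesn't depend on autogrow.
    ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_Data, SIZE = 20480MB);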

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Hi,

    Did you check the database compatibility level?

    If the DB is still on the SQL Server 2000 compatibility level, try changing it to 2005, and then I would recommend rebuilding all indexes, including the clustered ones.

    I had problems with some queries when I changed the compatibility level of a DB from 2000 to 2005, so if you don't have time to fix any query that starts misbehaving, try this on a copy of the original DB first.
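
    A minimal sketch of checking and changing the level on 2005 (the database name is a placeholder; sp_dbcmptlevel is the 2005-era procedure, superseded by ALTER DATABASE ... SET COMPATIBILITY_LEVEL in 2008):

    -- Check the current level (80 = SQL 2000, 90 = SQL 2005).
    SELECT name, compatibility_level
    FROM sys.databases
    WHERE name = 'MyDatabase';

    -- Switch it to the 2005 level.
    EXEC sp_dbcmptlevel @dbname = 'MyDatabase', @new_cmptlevel = 90;

    -- Then rebuild indexes per table (ALTER INDEX ALL ... REBUILD), clustered included.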

  • ron.richardson (2/1/2011)


    I have updated the statistics, but I don't know if I did FULL SCAN or not. Most likely I have not.

    Query:

    SELECT DISTINCT TOP 10
        s.sale_key,
        fc.freight_carrier_name,
        ISNULL(os.freight_charges, 0),
        ISNULL(os.weight, 0)
    FROM sales s
    INNER JOIN locations l ON l.location_key = s.delivery_location_key
    INNER JOIN builds b ON b.transaction_key = s.sale_key AND b.transaction_type = 'SAL'
    INNER JOIN build_details bd ON bd.build_key = b.build_key
    INNER JOIN outbound_shipments os ON os.outbound_shipment_key = bd.outbound_shipment_key
    INNER JOIN freight_carriers fc ON fc.freight_carrier_key = os.carrier
    WHERE l.country_code = 'US'
      AND l.state = 'NY'
      AND s.status != 'CAN'
      AND b.status != 'CAN'
      AND os.status != 'CAN'
    ORDER BY s.sale_key DESC

    Execution Plan:

    Wasn't sure how to post that, so here is a link to a screengrab: http://dl.dropbox.com/u/143275/distinct-top-5-execution-plan.png

    Indexes:

    On builds: nonclustered on transaction_key and transaction_type

    On build_details: nonclustered on outbound_shipment_key

    The largest table is build_details with 311k records. All the others are under 20k.

    Something doesn't seem to add up here. You said that build_details is your big table with 300K+ records; you also said that if you remove the TOP/DISTINCT clause the query quickly returns all 300K+ records. But your query has all sorts of restrictions in it: it only allows sales delivered to NY, US; it only allows builds with a transaction type of 'SAL' and a transaction key that matches the sale key; it excludes a status of 'CAN' across three different record types; and it only allows rows that have outbound shipments. Yet, with all of those restrictions, it still manages to join to every single record in the build_details table? That seems pretty unlikely. Perhaps you removed more than the TOP/DISTINCT clause from the query when you ran your test?
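
    One quick way to sanity-check that, as a sketch reusing the joins and filters from the posted query, is to count the rows the restricted join actually produces before DISTINCT and TOP are applied:

    SELECT COUNT(*)
    FROM sales s
    INNER JOIN locations l ON l.location_key = s.delivery_location_key
    INNER JOIN builds b ON b.transaction_key = s.sale_key AND b.transaction_type = 'SAL'
    INNER JOIN build_details bd ON bd.build_key = b.build_key
    INNER JOIN outbound_shipments os ON os.outbound_shipment_key = bd.outbound_shipment_key
    INNER JOIN freight_carriers fc ON fc.freight_carrier_key = os.carrier
    WHERE l.country_code = 'US'
      AND l.state = 'NY'
      AND s.status != 'CAN'
      AND b.status != 'CAN'
      AND os.status != 'CAN';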

    - Les
