October 12, 2010 at 2:21 pm
A few comments:
1) Post the execution plans from the 2 machines so they can be checked. That's not only to see if they could be improved: it's not unusual for different hardware to generate different execution plans, and seeing them may help us find what has changed (a quick way to capture them is sketched after this list).
2) A change of as little as 3-5% in the data profile can significantly change the execution plan, and hence performance: what was an index seek may become an index scan, or a join that worked efficiently before may no longer be effective.
3) Different hardware has a significant effect on performance, so what are the specs of your machine?
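For item 1, a minimal way to capture a plan is to wrap the problem query like this (the query itself is yours to fill in):

SET STATISTICS XML ON;
-- run the problem query here, once on each machine
SET STATISTICS XML OFF;

That returns the actual XML plan alongside the results, which you can post here.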
Other issues: do you have MAXDOP = 0? The changed data could be causing the optimizer to generate a parallel plan which is (in this case) less efficient than the old one.
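If you're not sure what the instance-wide setting is, you can check it with sp_configure (it's an advanced option, so it has to be made visible first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';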
You may want to restore an old (pre-problem) version of the database and see how the execution plans compare.
Cheers
Leo
Nothing in life is ever so complicated that with a little work it can't be made more complicated.
October 12, 2010 at 2:44 pm
Agreed - check your MAXDOP.
If all cores are pegged, then MAXDOP is set to 0 (or to as many cores as you have). This will either speed up the query, or saturate the cores and show CXPACKET waits. If you disable parallelism or force a lower MAXDOP on the query itself, you might end up with a better execution plan.
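For example, something along these lines - dbo.YourTable here is just a stand-in for the problem query:

-- Force the query to run serially to see if the parallel plan is the problem
SELECT *
FROM dbo.YourTable
OPTION (MAXDOP 1);

-- Check parallelism waits accumulated since the last restart
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'CXPACKET';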
October 13, 2010 at 6:43 am
I agree with checking the plans to see if that turns up anything.
When you can restart the service, see if that changes the performance.
Run perfmon on the machine both before and after the restart.
And see how much memory SQL is using, both right after the restart and after running the problem query (one way to check from inside SQL is sketched below).
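These counters are available on 2005:

-- How much memory SQL Server has taken vs. how much it wants
SELECT counter_name, cntr_value / 1024 AS memory_mb
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');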
I'd tend to look at the disk activity. Heavy disk IO and swapping can easily drive CPU to 100%.
Also check for fragmentation - both inside SQL Server (index fragmentation) and at the file-system level on the server.
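For the SQL side, something like this gives a rough picture on 2005 (current database only; LIMITED mode keeps the scan cheap):

SELECT OBJECT_NAME(object_id) AS table_name,
       index_id,
       avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED')
WHERE avg_fragmentation_in_percent > 30;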
When you detach, copy, and reattach, some things stay the same, but the new copy would likely land on contiguous disk space on the other machine, so I tend to disagree somewhat with this being an apples-to-apples comparison. Plus there could be any number of other differences in hardware, software, etc. between them. And your test sounds like just running the one query - I would not assume that is the only activity on the production server. Couldn't some of that other activity affect what's in cache?
SQL Server likes memory, and 2005 doesn't manage it the best; the 2008 versions (of both SQL Server and Windows) seem much better.
1 GB - isn't that the recommended minimum to run SQL 2005 on?
Are you doing anything to cap memory available to SQL?
If the OS and SQL both end up fighting for memory, it gets ugly.
I'd look into page file usage too.
A quick fix - one that won't be popular - would be to shut down the server and add a GB of RAM.
I'll start ducking - I can see the objects starting to fly towards me.
Greg E
October 13, 2010 at 10:45 am
I changed the memory SQL Server was allowed to use from the maximum (2GB) down to 100MB, and the query now takes around 175ms.
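In case it helps anyone else, the equivalent sp_configure change looks roughly like this (the value is in MB):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 100;
RECONFIGURE;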
Thank you everyone for your suggestions and comments.
October 13, 2010 at 12:25 pm
Just watch for when SQL may really need the memory.
You can easily think you've solved one problem, but then hit the wall somewhere else.
It does sound like paging may have been a problem, so keep an eye on it.
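One cheap counter to watch for that (this works on 2005; the LIKE covers named instances too):

SELECT cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy';

If that number stays low under load, SQL really does need more than the 100MB it's now capped at.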
Greg E