I have an issue that is completely baffling me, and I need a guru's guidance/help.
So, a little bit of an overview first.
We currently have a SQL Server 2005 SP3 box in place at the moment serving a number of databases. Its spec is around 24 GB of RAM and two 2.6 GHz quad-core CPUs, and its disks are in a RAID 5 configuration.
This server has had no SQL optimization; it's almost a default install.
Now we have a new server being built. It has SQL Server 2008 R2 SP2, two of the latest X-series processors (far better than those in the SQL 2005 box), and 48 GB of RAM. It has three disk sets: RAID 1 for the OS, and RAID 10 for both the log and data drives... MUCH better overall.
This new server is in its testing phase before going to production, but I'm in a situation where I can't put it into production because its performance seems dismal in comparison to the SQL 2005 box. I need some understanding as to why.
I can tell you from I/O testing that the new 2008 R2 server, from a hardware point of view, blitzes the old server.
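(If anyone wants to verify that from inside SQL Server rather than just disk benchmarks, the per-file stats DMV is one way; a rough sketch below, and it exists on both 2005 and 2008 R2. The numbers are cumulative since the last restart:)

    -- Average read/write latency per database file, from SQL Server's own counters.
    SELECT DB_NAME(vfs.database_id) AS database_name,
           vfs.file_id,
           vfs.num_of_reads,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           vfs.num_of_writes,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    ORDER BY avg_read_ms DESC;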
But the odd thing is that when hitting a website (served from the same web server) against the new box, each request takes a constant 3 seconds longer.
So, to confirm that the DB is 100% the cause, I ran a SQL trace on the new server and found a query with a significant duration. I then pulled that query and ran it on both 2005 and 2008.
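(For anyone wanting to reproduce the hunt without a trace, a plan-cache query along these lines surfaces the same long-duration statements; the TOP count and the ms conversion are just my choices for illustration:)

    -- Top cached statements by average elapsed time (total_elapsed_time is in microseconds).
    SELECT TOP (10)
           qs.total_elapsed_time / qs.execution_count / 1000 AS avg_duration_ms,
           qs.execution_count,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                            WHEN -1 THEN DATALENGTH(st.text)
                            ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY avg_duration_ms DESC;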
2005 does it a second faster than 2008. If I look at the execution plans I can see that on an insert to the temp table, 2005 shows a 0% cost, yet on my SQL 2008 box it's a 93% cost.
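(Worth noting: those plan cost percentages are only estimates, so when comparing the two boxes I'd also capture actuals with something like the below; the placeholder stands in for the offending query:)

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- <the slow query with its temp-table insert goes here>

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;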
So to me it looks like tempdb is caching this. I need to understand how I can get the same performance, if not better, out of the darn tempdb... I am dying for any help/guidance on this one.
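(In case it matters, this is the kind of tempdb change I'm considering trying on the new box; tempdev is the default logical name, but the second file's name, path, and the sizes below are made-up placeholders, not my actual config:)

    -- Pre-size the existing tempdb data file and add a second, equally sized one.
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'T:\TempDB\tempdev2.ndf',  -- hypothetical path
              SIZE = 4096MB, FILEGROWTH = 512MB);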