I guess I could summarise my question by asking whether a query plan is created based on the available resources.
Here is the long-winded version.
We have a database which we regularly restore to our development environment.
Recently a piece of code has become very sluggish in the development environment. The dev environment behaves well on the whole.
The database is identical in both environments: the SQL Server version is the same, and the indexes and statistics are maintained weekly.
The only hardware difference is that production has 4 CPUs and 32 GB RAM, whereas dev has 2 CPUs and 16 GB RAM.
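To double-check the "statistics are maintained weekly" claim on both servers, something like the following could be run against the table involved (`dbo.MyTable` is a placeholder for the real table; `sys.dm_db_stats_properties` needs a reasonably recent SQL Server build):

```sql
-- Sketch: when were the statistics on this table last updated, and how
-- many modifications have accumulated since? (dbo.MyTable is a placeholder.)
SELECT  s.name                                  AS stat_name,
        STATS_DATE(s.object_id, s.stats_id)     AS last_updated,
        sp.rows,
        sp.modification_counter
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   s.object_id = OBJECT_ID(N'dbo.MyTable');
```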
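For reference, the CPU and memory figures the instance itself reports can be confirmed on each server with a query along these lines (column names in `sys.dm_os_sys_info` vary slightly between SQL Server versions; `physical_memory_kb` is the 2012+ name):

```sql
-- Sketch: what hardware does each instance think it has?
SELECT  cpu_count,
        scheduler_count,
        physical_memory_kb / 1024 AS physical_memory_mb
FROM    sys.dm_os_sys_info;
```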
Now, if we were talking about a few seconds' difference I would put it down to the specs, but the query is instant on production and takes around 1 minute on dev.
Little else is happening on dev; there is no locking, blocking or other stress.
I looked at the actual execution plan for the same query on both servers and there is one stand-out difference. I can send screenshots/XML if it helps.
Basically, production uses a key lookup at 100% cost, whereas dev uses a hash match (inner join) at 53% cost plus a few Repartition Streams steps (query performance is not my strong point!).
On dev the execution plan tells me I should create an index to reduce the time by 89%, whereas production just spits it out in under a second.
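Since Repartition Streams operators only appear in parallel plans, I assume the parallelism-related instance settings are worth comparing between the two servers. A sketch of the comparison:

```sql
-- Sketch: instance-level settings that influence whether the optimizer
-- chooses a parallel plan; differences here could explain different plans.
SELECT  name, value_in_use
FROM    sys.configurations
WHERE   name IN (N'max degree of parallelism',
                 N'cost threshold for parallelism',
                 N'max server memory (MB)');
```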
Both tables have the same number of rows, the same indexes, keys everything!
Would this change of execution plan be based on the available CPU/RAM at the time the query ran? I'm at a complete loss to understand the difference in execution plan and performance across the two servers.
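One experiment I could try, if it would tell anyone anything: force the statement serial on dev and see whether the optimizer then falls back to the production-style key-lookup plan. A hypothetical sketch (the real query and column are placeholders here):

```sql
-- Sketch: disable parallelism for this one statement only, to test
-- whether the serial plan on dev matches the production plan.
SELECT  *
FROM    dbo.MyTable                -- placeholder for the real query
WHERE   SomeColumn = @SomeValue
OPTION  (MAXDOP 1);
```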
I've even had the VM guy check under the hood to make sure there is no disk/VM reasons.
The perplexing thing is that all the other queries/functions I've looked at in this DB/application behave almost identically, bar a few milliseconds.