Query slow after VM migration (same install)

  • I have a 'Process Cube' workload that has run at the same speed for several months. The server is virtual, and no best practices have been followed with regard to disk, mostly because it has been hitting the processing target time, so that part hasn't been getting attention. There are logical disks to separate the parts for administration, but underneath everything sits on the same EqualLogic SAS array, shared with a variety of other machines. I'd like to leave that out of scope, since we weren't trying to solve anything before the migration.

    Now, using Veeam, I've migrated the (exact same) VM offsite to a new host with new storage, and the same process is taking 50% longer. Looking at the metrics from Perfmon and from the storage interface, there doesn't seem to be disk pressure: latencies are a respectable <= 10 ms, and disk use drops very low for much of the operation. CPU and memory are never capped.

    I have a copy of the VM at both sites now -- one on the old ESXi 5.1 hosts and one on the 5.5 host I migrated to.

    The 5.5 host presents the same CPU family as before via the EVC mode setting. The VM itself was untouched, so nothing is different about the OS or application versions; only vSphere has changed, from 5.1 to 5.5.

    Does anything come to mind about moving between comparable hosts, or from vSphere 5.1 to 5.5, that would lead to increased CPU time for queries? When I profile on each host, the long-running processing query performs the same number of reads, but the 5.5 guest spends more CPU time to complete it (see the sketch at the end of this post).

    Using Windows Server 2008 R2 and SQL Server 2008 R2 Standard.

    I'm looking first for high-level ideas in case anyone has run into something similar. I can run whatever tools or monitors are needed to dig deeper if it comes to that.
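
    To put "same reads, more CPU" into numbers on each host, a rough sketch like the one below pulls average CPU time alongside logical reads for the heaviest cached statements. The instance name is a placeholder, it assumes Windows authentication, and it only tells you something while the plan cache still holds the processing queries.

      # Compare average CPU time vs. logical reads for the top cached statements.
      # 'SQLCUBE01' is a placeholder instance name; worker/elapsed times in
      # sys.dm_exec_query_stats are reported in microseconds.
      $connString = 'Server=SQLCUBE01;Integrated Security=SSPI'
      $query = '
          SELECT TOP (10)
                 qs.execution_count,
                 qs.total_worker_time   / qs.execution_count AS avg_cpu_microsec,
                 qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
                 qs.total_elapsed_time  / qs.execution_count AS avg_elapsed_microsec,
                 LEFT(st.text, 100) AS query_text
          FROM sys.dm_exec_query_stats AS qs
          CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
          ORDER BY qs.total_worker_time DESC;'

      $conn = New-Object System.Data.SqlClient.SqlConnection $connString
      $conn.Open()
      $cmd = $conn.CreateCommand()
      $cmd.CommandText = $query
      $results = New-Object System.Data.DataTable
      $results.Load($cmd.ExecuteReader())
      $conn.Close()

      $results | Format-Table -AutoSize

    Running it against the 5.1 and 5.5 copies after the same processing run gives a like-for-like view of whether the extra elapsed time really is CPU rather than reads.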

  • We have seen something similar, though not with SQL, caused by energy-saving settings on the host. No details at hand, but we had to change BIOS settings if I remember correctly.

  • Can't hurt to look. C-States, maybe?

  • As I understand it, there are Power Management and Performance options for both CPU and memory. The details depend on the BIOS and are not easily generalized (so our systems specialist assures me). In our case, tweaking these settings improved performance significantly.
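
    A couple of quick guest-side checks (standard Windows tools, run inside the VM) can hint at whether clock throttling is in play; a guest doesn't necessarily see the host's real P-states, so treat the numbers as a hint rather than proof.

      # Active Windows power plan ('Balanced' vs. 'High performance')
      powercfg /getactivescheme

      # Rough view of how far below nominal clock speed the CPUs are running
      Get-Counter -Counter '\Processor Information(_Total)\% of Maximum Frequency' `
                  -SampleInterval 5 -MaxSamples 12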

  • I was able to set the BIOS on this particular host to a "Max Performance" profile and get back my previous CPU times.

    This was a strange issue to troubleshoot since everything else in the loop behaved exactly as expected and there was no CPU pressure from a PerfMon perspective.

    The only oddity was that things took too long, which sounds like a "my Internet is slow" kind of complaint, but that was literally the symptom: things just spent too long on CPU.

    Measures of CPU 'READY' from the VMware side were also high before the change and are now in line; that was the only tipoff from a metrics perspective (see the PowerCLI sketch at the end of this post).

    Guidance online waffles on whether C1E and the other C-State settings should be enabled; even VMware's and Dell's recommendations differ. In our case, it was better with all of them disabled.

    Test, Test, Test, right?
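
    For anyone wanting to check the same things, a rough PowerCLI sketch along these lines pulls CPU Ready for the VM and the host's current power policy. It assumes VMware PowerCLI and access to vCenter; the vCenter, VM, and host names below are placeholders.

      # Real-time cpu.ready.summation is milliseconds of ready time per 20-second
      # sample; the aggregate instance sums all vCPUs, so divide by the vCPU
      # count for a per-vCPU percentage.
      Connect-VIServer -Server 'vcenter.example.local'

      $vm = Get-VM -Name 'CubeProcessingVM'

      Get-Stat -Entity $vm -Stat 'cpu.ready.summation' -Realtime -MaxSamples 90 |
          Where-Object { $_.Instance -eq '' } |
          Select-Object Timestamp,
              @{ Name = 'ReadyPct'; Expression = { [math]::Round($_.Value / (20 * 1000) * 100, 2) } }

      # Current host power policy ('static' corresponds to the High Performance profile)
      Get-VMHost -Name 'esx55.example.local' |
          Select-Object Name,
              @{ Name = 'PowerPolicy'; Expression = { $_.ExtensionData.Config.PowerSystemInfo.CurrentPolicy.ShortName } }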

  • Thanks for reporting back on the issue, and glad you got it fixed.

    Bouke

  • I had the same thing happen on a Dell server. The mainboard was replaced, and some of my jobs took twice as long to complete. The BIOS on the new mainboard was not set for max performance. :hehe:
