Stairway to SQL Server Virtualization Level 3 - Building the Ideal VMware-based SQL Server Virtual Machine

  • Hi David,

    Thanks for your article; it confirms our current setup for us, with one major exception.

    We have configured multiple vCPUs with just one core per socket (after we had a lot of latency issues on the SQL Servers).

    The theory being:

    If you configure two or more cores per socket, the guest can only get CPU time when that many cores become available to it at the same time. That means more scheduling effort for the host, especially if there are more guests doing the same.

    We have reconfigured all our guests (databases, build hosts, etc.) to have more vCPUs with just one core per socket (a scripted example of this change is sketched at the end of this post). All complaints from devs and users stopped.

    And yes, this all depends on the licensing model for your SQL Server, but it works for us 🙂

    BB, Arjen
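
    For anyone who wants to script this change, here is a minimal pyVmomi (Python vSphere SDK) sketch; the vCenter address, credentials and the VM name SQLVM01 are hypothetical placeholders, and the VM normally needs to be powered off unless CPU hot-add is enabled.

    ```python
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect to vCenter (hypothetical host and credentials - adjust for your environment).
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Find the SQL Server VM by name (hypothetical name).
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "SQLVM01")

    # Present 8 vCPUs as 8 sockets x 1 core, i.e. one core per socket.
    spec = vim.vm.ConfigSpec()
    spec.numCPUs = 8
    spec.numCoresPerSocket = 1
    vm.ReconfigVM_Task(spec=spec)

    Disconnect(si)
    ```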

  • I was hoping to see a discussion on the impact on the host server of adding too many vNICs, vDisks, and virtual disk controllers to VMs. Is this something you plan to cover?

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • In the 3rd article in the series you talk about using the EFI option (which you mention is discussed in article 2), but I can't find any mention of it in article 2.

    Did I miss that?

  • In some cases, this can work well. However, in others, where the host is pushed for CPU scheduling, or the workload is NUMA-aware and absolutely thrashing memory, configuring more cores per virtual socket can help with performance. With one core per socket, the host might spread the workload across multiple NUMA nodes, which can actually hurt performance because of all the cross-NUMA-boundary memory lookups.

    But it's all workload- and system-load dependent, so the fact that you have found what works for you is a good thing. Just keep your eye on performance there, and if you find that the workload grows quite large, experiment with the performance footprint by adjusting the vNUMA configuration (a sketch of aligning cores per socket with the host's NUMA nodes follows below).
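
    As a rough illustration of that opposite adjustment, here is a pyVmomi sketch that sizes cores per virtual socket to the physical NUMA node; the vCenter/VM names are hypothetical, and cores per socket must divide the vCPU count evenly (with the VM powered off).

    ```python
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "SQLVM01")

    # Work out how many physical cores each NUMA node on the current host has.
    host = vm.runtime.host
    cores_per_node = host.hardware.cpuInfo.numCpuCores // host.hardware.numaInfo.numNodes

    # Present virtual sockets that line up with a physical NUMA node,
    # but only if that divides the VM's vCPU count evenly.
    vcpus = vm.config.hardware.numCPU
    if vcpus % cores_per_node == 0:
        spec = vim.vm.ConfigSpec()
        spec.numCoresPerSocket = cores_per_node
        vm.ReconfigVM_Task(spec=spec)

    Disconnect(si)
    ```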

  • Those discussions are not necessarily unique to VMs. You can only add up to four vSCSI controllers per VM on both VMware and Hyper-V, and the only way to go beyond that in a VMware environment is to add AHCI (SATA) controllers, which are measurably slower than the vSCSI ones. The overhead from having too many is generally overshadowed by the performance penalty of NOT scaling these out when you really need the extra throughput (a sketch of adding another vSCSI controller follows at the end of this reply).

    For network adapters, it's the same whether physical or virtual. "Too many" is tough to quantify, and I've had to add quite a few for various security reasons. Usually people don't add a lot of network adapters to a VM or physical server unless they need them for access to different VLANs, so it's more of a necessity. Do you have a scenario where people have added far more vNetwork adapters than their use case needed?
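
    For reference, adding one of those extra vSCSI controllers can be scripted; here is a minimal pyVmomi sketch, again with hypothetical vCenter/VM names, that adds a second VMware Paravirtual controller on bus 1 (buses 0 through 3 are the four available slots).

    ```python
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "SQLVM01")

    # Build a second paravirtual SCSI controller on bus 1 (bus 0 usually holds the OS disk).
    ctrl = vim.vm.device.ParaVirtualSCSIController()
    ctrl.key = -101          # temporary negative key for a device being added
    ctrl.busNumber = 1
    ctrl.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    change.device = ctrl

    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
    Disconnect(si)
    ```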

  • Nope, you didn't miss it. It's a VMware-specific option that I have started enabling recently. I think in the long term it keeps the VMs more current with BIOS trends, and I suspect that future applications will leverage the UEFI firmware type for security-related tasks (a sketch of switching a VM to EFI follows below).
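
    For completeness, the EFI option can also be set programmatically; a minimal pyVmomi sketch (hypothetical vCenter/VM names) follows. Switch firmware only on a new VM before the guest OS is installed, since an OS installed under legacy BIOS generally won't boot after the change.

    ```python
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "SQLVM01")

    # Switch the VM firmware type from legacy BIOS ("bios") to EFI (VM must be powered off).
    spec = vim.vm.ConfigSpec()
    spec.firmware = "efi"
    vm.ReconfigVM_Task(spec=spec)

    Disconnect(si)
    ```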

  • Hello, when will Level 4 be released?

  • By using a virtual disk, you're sharing storage with other servers and controllers. It's best to use raw mapped (RDM) storage so you are not sharing reads and writes, especially for data and log disks (a sketch of attaching an RDM follows below). Depending on the SAN you're using, the VNX has some really good features to separate the load and the controller types. - william
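
    For anyone who wants to try william's suggestion, here is a minimal pyVmomi sketch of attaching a physical-compatibility RDM to an existing VM; the vCenter/VM names and the LUN device path are hypothetical placeholders, and the controller key and unit number are assumptions about the VM's existing layout.

    ```python
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "SQLVM01")

    # Back the new disk with a raw LUN in physical compatibility mode;
    # the disk's capacity comes from the LUN itself.
    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
    backing.compatibilityMode = vim.vm.device.VirtualDiskOption.CompatibilityMode.physicalMode
    backing.deviceName = "/vmfs/devices/disks/naa.600xxxxxxxxxxxxxxx"  # placeholder LUN path
    backing.fileName = ""    # let vSphere place the RDM mapping file with the VM

    disk = vim.vm.device.VirtualDisk()
    disk.key = -102             # temporary negative key for a device being added
    disk.backing = backing
    disk.controllerKey = 1000   # key of the first SCSI controller (assumption)
    disk.unitNumber = 1         # next free unit on that controller (assumption)

    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    change.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    change.device = disk

    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
    Disconnect(si)
    ```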

  • Good article; we are also a VMware environment, so it's applicable to us. I noticed the example selects a single storage location, yet spins eight drives out of it over three vSCSI controllers. Knowing that the storage and read/write capacity on the SAN is finite whether it is one drive or eight, I am curious to learn how performance is improved with the additional drives?

