Windows Server 2012 and Hyper-V

  • Comments posted to this topic are about the item Windows Server 2012 and Hyper-V

  • What I like best about Hyper-V 3.0, as well as VMware 5.0 and 5.1, is that guest-OS NUMA awareness is finally available.

    Before these versions, giving a virtualized SQL Server more than 8 cores simply didn't do much to improve performance. Now I can tell Hyper-V or VMware to expose 2, 4, or even 8 NUMA nodes in a guest OS with, say, 16 cores. 🙂

    Now is the time to virtualize!!!
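    As a rough sketch, here's how the guest NUMA topology can be shaped on Hyper-V 3.0 with PowerShell (the VM name "SQL01" and the core counts are just placeholders):

    ```powershell
    # Run in an elevated session on the Hyper-V host; "SQL01" is a hypothetical VM name
    Set-VMProcessor -VMName "SQL01" -Count 16
    # Cap each virtual NUMA node at 8 vCPUs, so the 16-vCPU guest sees 2 NUMA nodes
    Set-VMProcessor -VMName "SQL01" -MaximumCountPerNumaNode 8
    ```

    Inside the guest, SQL Server's sys.dm_os_nodes DMV should then reflect the additional NUMA nodes.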

  • ... were designed to make things look good, and there’s a good marketing presentation on the capabilities. I’m sure the actual implementation isn’t as easy or smooth as in the talks.

    What?? Are you saying that marketers are not truthful??? Oh. Oh. The enormity of this new paradigm. :w00t: 😛

    <><
    Livin' down on the cube farm. Left, left, then a right.

  • Yeah, for the last two years all of our new servers have been required to be virtual unless there's a 'business reason' for not doing it - and the only 'accepted business reason' I've heard is when a vendor says they won't support virtualization for an app we're already relying on.

    Physical doesn't make sense right now, but I'm concerned that once everything is virtual we will start to see new problems. A wrong setting is easy to spot, but a 'less than optimal' setting in a VM farm may take a while to show up, and even longer to diagnose and fix.

    Our experience right now is the 'glossy sales literature' type - everything's easy and wonderful.

  • We've had our SQL Servers virtualized for a while now, along with about 85% of our environment as a whole; a few servers are still physical. We really haven't seen much performance loss from the bulk of our virtualized database servers.

    We have learned, however, that in a virtual environment best practices are even more important. It's important to build the VM with performance in mind: 64 KB allocation unit sizes for the volumes holding SQL Server files, and so on. It's also important as a DBA to understand how your virtual platform works. Datastores in VMware, for example, can take your server(s) down.
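    To illustrate the allocation-unit point, a quick PowerShell sketch (the drive letter and label are placeholders for your SQL data volume, and formatting destroys any existing data):

    ```powershell
    # Format the SQL data volume with a 64 KB allocation unit size
    Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQLData"
    # Verify the cluster size afterwards
    fsutil fsinfo ntfsinfo E: | findstr /c:"Bytes Per Cluster"
    ```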

  • Brandon Leach (10/18/2012)


    We have learned, however, that in a virtual environment best practices are even more important. It's important to build the VM with performance in mind: 64 KB allocation unit sizes for the volumes holding SQL Server files, and so on. It's also important as a DBA to understand how your virtual platform works. Datastores in VMware, for example, can take your server(s) down.

    Brandon, could you elaborate on what you mean by "datastores in VMware ... can take your server(s) down"? Do you mean something more than the obvious point that corruption, errors, or deletion in the datastore will destroy the server (since the server essentially is the .vmdk file, etc.)?

    Rich

  • rmechaber (10/18/2012)


    Brandon Leach (10/18/2012)


    We have learned, however, that in a virtual environment best practices are even more important. It's important to build the VM with performance in mind: 64 KB allocation unit sizes for the volumes holding SQL Server files, and so on. It's also important as a DBA to understand how your virtual platform works. Datastores in VMware, for example, can take your server(s) down.

    Brandon, could you elaborate on what you mean by "datastores in VMware ... can take your server(s) down"? Do you mean something more than the obvious point that corruption, errors, or deletion in the datastore will destroy the server (since the server essentially is the .vmdk file, etc.)?

    Rich

    A datastore is a storage layer underneath the OS: OS-level volumes live inside the datastore, and a single datastore may hold volumes for one or more servers. We've had instances where a datastore filled up and caused a VM to shut down, even though the VM showed plenty of free disk space on its volumes.

    It also depends on whether the datastores are thin- or thick-provisioned. Thick provisioning allocates the space ahead of time, which can mitigate the issue above; thin provisioning grabs space as it's needed. Either way, I like to monitor the growth of the datastores my servers use.
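    For example, with VMware's PowerCLI (assuming you've already run Connect-VIServer against your vCenter), a quick free-space check might look like this:

    ```powershell
    # List datastores and flag any under 15% free space (threshold is just an example)
    Get-Datastore |
        Select-Object Name, CapacityGB, FreeSpaceGB,
            @{Name = 'FreePct'; Expression = { [math]::Round($_.FreeSpaceGB / $_.CapacityGB * 100, 1) }} |
        Where-Object { $_.FreePct -lt 15 }
    ```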

  • ...and my former company was one of those vendors. We'd grant our blessing to application servers being virtual, but we insisted on SQL Server being on a physical box. I don't know whether there had been extensive testing to validate this requirement, or whether someone simply had a bias. I suspect the latter...

  • Brandon Leach (10/18/2012)


    rmechaber (10/18/2012)


    Brandon Leach (10/18/2012)


    We have learned, however, that in a virtual environment best practices are even more important. It's important to build the VM with performance in mind: 64 KB allocation unit sizes for the volumes holding SQL Server files, and so on. It's also important as a DBA to understand how your virtual platform works. Datastores in VMware, for example, can take your server(s) down.

    Brandon, could you elaborate on what you mean by "datastores in VMware ... can take your server(s) down"? Do you mean something more than the obvious point that corruption, errors, or deletion in the datastore will destroy the server (since the server essentially is the .vmdk file, etc.)?

    Rich

    A datastore is a storage layer underneath the OS: OS-level volumes live inside the datastore, and a single datastore may hold volumes for one or more servers. We've had instances where a datastore filled up and caused a VM to shut down, even though the VM showed plenty of free disk space on its volumes.

    It also depends on whether the datastores are thin- or thick-provisioned. Thick provisioning allocates the space ahead of time, which can mitigate the issue above; thin provisioning grabs space as it's needed. Either way, I like to monitor the growth of the datastores my servers use.

    Thanks, Brandon. I did some checking with our sysadmin, and it looks like VMware added datastore alerts in version 5.0 to identify potential problems like this.

    Rich

  • rmechaber (10/19/2012)


    Brandon Leach (10/18/2012)


    rmechaber (10/18/2012)


    Brandon Leach (10/18/2012)


    We have learned, however, that in a virtual environment best practices are even more important. It's important to build the VM with performance in mind: 64 KB allocation unit sizes for the volumes holding SQL Server files, and so on. It's also important as a DBA to understand how your virtual platform works. Datastores in VMware, for example, can take your server(s) down.

    Brandon, could you elaborate on what you mean by "datastores in VMware ... can take your server(s) down"? Do you mean something more than the obvious point that corruption, errors, or deletion in the datastore will destroy the server (since the server essentially is the .vmdk file, etc.)?

    Rich

    A datastore is a storage layer underneath the OS: OS-level volumes live inside the datastore, and a single datastore may hold volumes for one or more servers. We've had instances where a datastore filled up and caused a VM to shut down, even though the VM showed plenty of free disk space on its volumes.

    It also depends on whether the datastores are thin- or thick-provisioned. Thick provisioning allocates the space ahead of time, which can mitigate the issue above; thin provisioning grabs space as it's needed. Either way, I like to monitor the growth of the datastores my servers use.

    Thanks, Brandon. I did some checking with our sysadmin, and it looks like VMware added datastore alerts in version 5.0 to identify potential problems like this.

    Rich

    Glad to help. It sounds like you also have an opportunity for some cross-training! I've found that I work much more closely with our sysadmins on our virtual database servers than I did when they were physical. If you have time, try to get a basic working knowledge of your virtual platform; it will go a long way toward keeping your virtual database servers available and performant.
