Sizing New Servers

  • Comments posted to this topic are about the item Sizing New Servers

  • I very often deal with high-end SQL Server systems: These are almost never virtualized, because virtualizing a machine that already utilises all the processors in the server brings only drawbacks, no gains. Furthermore, when you're dealing with FusionIO cards, the relationship between the cards, the slots they go into and the processors controlling them becomes much more important.

    With Enterprise Edition, the server is the cheap component - it's the licensing cost that really hits your client in the pocket. Processor choice is therefore very important: It pays to utilise high-frequency CPUs with fewer cores and greater amounts of L1, L2 and L3 cache to get the maximum bang for the buck.
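A rough sketch of the licensing arithmetic behind this point. SQL Server Enterprise Edition is licensed per physical core (with a minimum of four core licenses per socket); the per-core price and the two builds below are assumed placeholders, not quoted figures.

```python
# Why core count dominates the bill under per-core licensing.
# PRICE_PER_CORE is an assumed placeholder, not a quoted list price.
PRICE_PER_CORE = 7000  # USD, assumed Enterprise per-core price

def total_license_cost(sockets: int, cores_per_socket: int) -> int:
    """Per-core licensing with a minimum of 4 core licenses per socket."""
    licensed_cores = sockets * max(cores_per_socket, 4)
    return licensed_cores * PRICE_PER_CORE

# Two hypothetical builds with comparable aggregate throughput:
many_slow = total_license_cost(sockets=2, cores_per_socket=16)  # 32 cores
few_fast  = total_license_cost(sockets=2, cores_per_socket=8)   # 16 cores

print(f"32 slower cores: ${many_slow:,}")  # $224,000
print(f"16 faster cores: ${few_fast:,}")   # $112,000
```

At the assumed price, halving the core count while raising the clock saves six figures in licensing - far more than the hardware premium for faster parts.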

    The memory is very important too: Very often, stuffing the server with the maximum amount of memory is sub-optimal. There is a maximum memory bandwidth (documented in the server's specifications) reached when a given number of slots are filled - going beyond that point will not increase the memory bandwidth, and in some servers it necessitates a drop in the clock frequency of all the installed memory.
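An illustrative sketch of the DIMMs-per-channel (DPC) trade-off described above. The speed table is a made-up example of the pattern - real values come from the server vendor's memory population guidelines for the specific platform.

```python
# Assumed platform limits: adding a third DIMM per channel forces all
# memory on that channel to clock down. These numbers are illustrative.
SPEED_BY_DPC = {1: 2933, 2: 2933, 3: 2666}  # MT/s by DIMMs per channel

def channel_bandwidth_gbs(dpc: int, bus_width_bytes: int = 8) -> float:
    """Peak per-channel bandwidth: transfer rate (MT/s) x bus width (bytes)."""
    return SPEED_BY_DPC[dpc] * bus_width_bytes / 1000  # GB/s

for dpc in (1, 2, 3):
    print(f"{dpc} DPC: {channel_bandwidth_gbs(dpc):.1f} GB/s per channel")
```

Extra DIMMs on an already-populated channel add capacity but no bandwidth, and past the optimal population they can cost bandwidth - which is the sub-optimal outcome the post warns about.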

    Tuning HBAs is another example of the need to control the hardware: there is an optimal queue depth that the controllers should be set to, and it changes depending upon the SAN storage that sits behind them. This needs testing to squeeze out the optimal combination of low latency and high bandwidth.
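The trade-off being tuned here follows Little's law: achievable IOPS equals outstanding I/Os (queue depth) divided by latency. On a real SAN, latency itself climbs as queue depth grows, which is why the optimum has to be measured rather than computed. The latency model below is a fabricated illustration of that shape, not measured data.

```python
def iops(queue_depth: int, latency_ms: float) -> float:
    """Little's law: concurrency = throughput x response time."""
    return queue_depth / (latency_ms / 1000)

def modelled_latency_ms(queue_depth: int) -> float:
    # Assumed toy model: 0.5 ms base service time, climbing as the
    # array saturates under deeper queues. Purely illustrative.
    return 0.5 + 0.02 * queue_depth ** 1.5

for qd in (16, 32, 64, 128, 256):
    lat = modelled_latency_ms(qd)
    print(f"QD {qd:>3}: {lat:6.2f} ms latency, {iops(qd, lat):>8.0f} IOPS")
```

In this toy model, IOPS rises with queue depth at first, then falls as latency blows up - the same low-latency/high-bandwidth tension the post says must be tested against the actual storage.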
