November 9, 2017 at 5:56 am
Hi,
Another server consolidation question...
We're looking at consolidating OLTP SQL Servers onto new hardware running a hypervisor, and our main concern is CPU: per-core licensing is so expensive that we have to keep the number of cores to a minimum.
It's been suggested that we can simply capture the perfmon counter SQLServer:Databases Transactions/sec for our current SQL instances, find the max transactions per second across the instances, and then find hardware that can comfortably cope with that number.
This seems very simple to me. I'd be happy if that would act as a good basis for sizing the CPU (we're going to throw loads of memory at it!), but is it realistic, or should we be looking at other perfmon stats?
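For illustration, here's a minimal sketch of that suggestion as I understand it, using PowerShell's Get-Counter. The 15-second interval and 24-hour window are just my assumptions, and the counter path assumes a default instance (named instances use MSSQL$<name>:Databases instead):

# Sample Transactions/sec every 15 seconds for 24 hours (5760 x 15s samples),
# then take the peak value as the number the new hardware has to cope with.
$samples = Get-Counter -Counter "\SQLServer:Databases(_Total)\Transactions/sec" `
                       -SampleInterval 15 -MaxSamples 5760
($samples.CounterSamples | Measure-Object -Property CookedValue -Maximum).Maximum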
Thanks for the replies
Alex
November 9, 2017 at 8:37 am
I'd be a lot more interested in knowing what kind of CPU usage you're already experiencing, and perfmon stats would certainly be useful in that regard. If you consolidate workloads, you need to understand that it's NOT just additive: once consolidated, those workloads have to share cores with each other, and the question is how much parallel processing any given application, including SQL Server, makes use of. If you don't have some feel for that, you might run into workload mixes that don't play nicely together.

I'd also really want to know what each workload needs in terms of not only what it uses now, but how fast it will grow over the next five years. Slapping together two high-growth workloads that are more likely to grow in CPU than in RAM could be rather problematic. As expensive as core licensing is, you can't really afford to not have enough horsepower, and limping along causes other problems that can be much harder to solve in a CPU-constrained world. Hopefully that at least gives you some things to think about; it usually isn't all that simple.
Steve (aka sgmunson) 🙂 🙂 🙂
Rent Servers for Income (picks and shovels strategy)
November 9, 2017 at 8:50 am
I completely agree, but I'm not in control of the purse strings! If the price comes in too high there will be no upgrade, so I have to tread carefully.
Any suggestions for sizing the CPU usage? At the moment my plan is to capture:
"\Process(_Total)\% Processor Time",
"\Process(sqlservr)\% Processor Time",
"\Processor(_Total)\% Processor Time",
"\SQLServer:databases(_Total)\Tracked transactions/sec",
"\SQLServer:databases(_Total)\Transactions/sec",
"\SQLServer:databases(_Total)\Write Transactions/sec",
"\SQLServer:General Statistics\User Connections",
"\SQLServer:General Statistics\Transactions",
"\System\Processor Queue Length",
"\Process(_Total)\% Processor Time",
"\Process(sqlservr)\% Processor Time",
"\Processor(_Total)\% Processor Time",
"\SQLServer:databases(_Total)\Tracked transactions/sec",
"\SQLServer:databases(_Total)\Transactions/sec",
"\SQLServer:databases(_Total)\Write Transactions/sec",
"\SQLServer:General Statistics\User Connections",
"\SQLServer:General Statistics\Transactions",
"\System\Processor Queue Length"
over 24 hours for all the servers we're looking to consolidate, and then ingest the results into a SQL database for analysis, so I can see the max workloads at any given time of day.
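Roughly like this, as a sketch using Windows PowerShell's Get-Counter/Export-Counter. The server names and output path are placeholders, the \SQLServer: paths assume default instances, and passing all the servers to one Get-Counter call is so the captures run in parallel rather than one 24-hour run per server:

$servers  = @("SQL01", "SQL02", "SQL03")   # hypothetical server names
$counters = @(
    "\Process(_Total)\% Processor Time",
    "\Process(sqlservr)\% Processor Time",
    "\Processor(_Total)\% Processor Time",
    "\SQLServer:Databases(_Total)\Tracked transactions/sec",
    "\SQLServer:Databases(_Total)\Transactions/sec",
    "\SQLServer:Databases(_Total)\Write Transactions/sec",
    "\SQLServer:General Statistics\User Connections",
    "\SQLServer:General Statistics\Transactions",
    "\System\Processor Queue Length"
)
# 15-second samples for 24 hours; each CSV column comes out as \\server\counter,
# so one file can be bulk-loaded into SQL and still analysed per server.
Get-Counter -ComputerName $servers -Counter $counters `
            -SampleInterval 15 -MaxSamples 5760 |
    Export-Counter -Path "C:\PerfLogs\baseline.csv" -FileFormat CSV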
November 9, 2017 at 8:56 am
alex.palmer - Thursday, November 9, 2017 8:50 AM
24 hours won't necessarily capture the less frequent but potentially large things, like quarterly bulk loads or other similar infrequent, data-volume-heavy hitters. Keep that in mind. I'd be a lot more comfortable if I had that data for the last couple of months, so that I'd have at least a shot at seeing growth patterns.
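If you do leave that capture running over a couple of months, say with one CSV per day (that's an assumption on my part, building on the sketch above), a daily roll-up along these lines makes the growth pattern much easier to see. Tracking a 95th percentile alongside the max keeps one-off spikes from hiding the trend:

# For each day's CSV, find the Transactions/sec column (PDH names it
# \\SERVER\SQLServer:Databases(_Total)\Transactions/sec) and roll it up.
# With several servers in one file there's one such column per server;
# this sketch just takes the first.
Get-ChildItem "C:\PerfLogs\*.csv" | ForEach-Object {
    $rows = Import-Csv $_.FullName
    $col  = $rows[0].PSObject.Properties.Name |
            Where-Object { $_ -like "*)\Transactions/sec" } | Select-Object -First 1
    # PDH leaves blanks for missed samples; -as [double] turns those into $null.
    $vals = $rows | ForEach-Object { $_.$col -as [double] } |
            Where-Object { $_ -ne $null } | Sort-Object
    [pscustomobject]@{
        Day = $_.BaseName
        Max = $vals[-1]
        P95 = $vals[[math]::Floor($vals.Count * 0.95)]
    }
}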
Steve (aka sgmunson) 🙂 🙂 🙂
Rent Servers for Income (picks and shovels strategy)