Most of the figures you mention are helpful in working out a theoretical performance model, but your organisation uses SQL Server to deliver actual performance to the business.

My advice is to use SQL Profiler to identify the queries that matter most to your business and how long they take to run. You should end up with a list of about 30 queries. Get these queries into a .sql file so you can run them whenever you want and see how they perform: you are looking for minimum, maximum and average response times.
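Once the queries are in a .sql file, something like the sketch below can record the timing figures. Everything in it is illustrative: the dbo.QueryTimings table, the Sales.Orders query and the run count are placeholders for your own captured queries.

```sql
SET NOCOUNT ON;

-- Table to accumulate timings; schema is illustrative.
IF OBJECT_ID(N'dbo.QueryTimings') IS NULL
    CREATE TABLE dbo.QueryTimings (
        QueryName sysname   NOT NULL,
        RunAt     datetime2 NOT NULL DEFAULT SYSDATETIME(),
        ElapsedMs int       NOT NULL
    );

DECLARE @i int = 1, @start datetime2;

WHILE @i <= 5   -- a few runs per query smooths out caching effects
BEGIN
    SET @start = SYSDATETIME();

    -- Replace with one of your ~30 captured business queries.
    EXEC sys.sp_executesql N'SELECT COUNT(*) FROM Sales.Orders;';

    INSERT dbo.QueryTimings (QueryName, ElapsedMs)
    VALUES (N'Orders count', DATEDIFF(MILLISECOND, @start, SYSDATETIME()));

    SET @i += 1;
END;

-- Minimum, maximum and average response time per query.
SELECT QueryName,
       MIN(ElapsedMs) AS MinMs,
       MAX(ElapsedMs) AS MaxMs,
       AVG(ElapsedMs) AS AvgMs
FROM dbo.QueryTimings
GROUP BY QueryName;
```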

While you are doing this, capture the average and peak CPU usage recorded by Windows for at least 14 days. Don't worry too much about the virtual-to-physical core ratio; it is probably not relevant unless that ratio is planned to change after the number of cores is reduced.
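Perfmon (or whatever Windows monitoring your site uses) should do the full 14-day collection, but if you want a quick cross-check from inside SQL Server, the scheduler-monitor ring buffer holds roughly the last four hours of per-minute CPU samples. This is a commonly used query pattern, not something specific to your setup:

```sql
-- Spot check of recent CPU utilisation from the scheduler-monitor
-- ring buffer; it covers only the last few hours, so it complements
-- (rather than replaces) the 14-day Perfmon collection.
DECLARE @ms_ticks bigint = (SELECT ms_ticks FROM sys.dm_os_sys_info);

SELECT DATEADD(ms, -CAST(@ms_ticks - x.[timestamp] AS int), GETDATE()) AS SampleTime,
       x.rec.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS SqlCpuPct,
       100 - x.rec.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')   AS TotalCpuPct
FROM (
    SELECT [timestamp], CONVERT(xml, record) AS rec
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
      AND record LIKE N'%<SystemHealth>%'
) AS x
ORDER BY SampleTime DESC;
```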

You now have the figures you need to decide how many cores can be removed while staying within acceptable peak and average CPU usage, plus the performance figures you need for post-change testing. Only the core count should be changed; memory and everything else must remain the same, otherwise you will not know which factor affected performance.
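As an illustration with invented numbers: if the server has 16 cores and the 14 days of monitoring show a peak CPU of 40%, the peak workload is about 16 × 0.40 = 6.4 cores' worth of work. To keep post-change peaks below, say, 70%, you would need at least 6.4 / 0.70 ≈ 9.2 cores, so 10 cores (or 12, if the hardware or licensing comes in that size) is the smallest configuration worth proposing.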

Allow the systems people to change the core count, then test performance over the next few days using your stored queries. If the queries still perform much the same as before, reducing cores has not harmed the business. If there are problems, work with your management and the systems people to find an acceptable compromise.
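If you logged the timings into a table as in the earlier sketch, a simple before-and-after comparison might look like this; @ChangeDate is a placeholder for the date the cores were reduced:

```sql
-- Compare response times before and after the core-count change,
-- using the illustrative dbo.QueryTimings table from above.
DECLARE @ChangeDate datetime2 = '2024-06-01';  -- placeholder date

SELECT QueryName,
       AVG(CASE WHEN RunAt <  @ChangeDate THEN 1.0 * ElapsedMs END) AS AvgMsBefore,
       AVG(CASE WHEN RunAt >= @ChangeDate THEN 1.0 * ElapsedMs END) AS AvgMsAfter,
       MAX(CASE WHEN RunAt <  @ChangeDate THEN ElapsedMs END)       AS MaxMsBefore,
       MAX(CASE WHEN RunAt >= @ChangeDate THEN ElapsedMs END)       AS MaxMsAfter
FROM dbo.QueryTimings
GROUP BY QueryName
ORDER BY QueryName;
```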
