December 25, 2014 at 11:06 am
I'm going to consolidate/upgrade from 2005 and 2008 R2 to 2014.
I was given this guidance and I'm confused. I won't say anything, but ...
We are proposing a significant change, going from 4 VMs and 1 physical server to 2 VMs. Yes, we will have more resources, including increased memory capacity, faster-clocked CPUs, higher-throughput storage for improved I/O, etc., but as you know this does not guarantee improved performance.
I would like you to take each current server in use, outline its specs and associated resources, and build a monitored time study to determine how it performs over a several-day period based on its role.
(For example, the cyp_report server has 4 processors, 16 GB RAM, etc. Over a period of XX, the maximum processor utilization was XX, the sustained usage was XX, there were multiple cache reads exceeding XXX indicating I/O was the bottleneck, etc. The peak memory usage was driven by XXX. By doing XXXXX, we can reduce the memory spikes by XX%.)
Once we have this type of analysis, we can justify and deliver a better platform based on the requirements. We chastise developers for coding and figuring out requirements as they go, and it feels like we are doing exactly the same thing.
Once we have a complete analysis, we will build a spec box on the new platform with a subset of data and provide a proof of concept for the data team. I am confident this approach will enable us to stay aligned with them and reassure them we are making the move for improvements, not just for the sake of upgrading.
I don't understand.
Please critique. :w00t:
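If I'm reading it right, the baselining they describe could start with something like this (a rough sketch only; the dbo.PerfBaseline table, the counter list and the scheduling are my guesses, not anything they specified):

-- Sample a few key SQL Server counters into a table for the multi-day study.
-- OS-level CPU and disk figures still need to come from PerfMon.
IF OBJECT_ID('dbo.PerfBaseline') IS NULL
    CREATE TABLE dbo.PerfBaseline
    (
        sample_time  DATETIME2     NOT NULL DEFAULT SYSDATETIME(),
        object_name  NVARCHAR(128) NOT NULL,
        counter_name NVARCHAR(128) NOT NULL,
        cntr_value   BIGINT        NOT NULL
    );

INSERT INTO dbo.PerfBaseline (object_name, counter_name, cntr_value)
SELECT RTRIM(object_name), RTRIM(counter_name), cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Page life expectancy',
                       N'Page reads/sec',
                       N'Total Server Memory (KB)',
                       N'Target Server Memory (KB)');
-- Run this from a SQL Agent job (e.g. every 5 minutes); the per-second
-- counters are cumulative, so chart the deltas between samples.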
Happy Hanukkah, Merry Christmas and have a Happy New Year.
For better, quicker answers on T-SQL questions, click on the following...
http://www.sqlservercentral.com/articles/Best+Practices/61537/
For better answers on performance questions, click on the following...
http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
December 26, 2014 at 4:36 am
It was specified that a small dataset would be used for testing, rather than a machine with the full dataset.
I have known for years that that is a no-go.
Does anyone know of any documentation to support this?
December 26, 2014 at 5:28 am
Even the best capacity planners, the people who have specialised in capacity planning as their paid job for many years, reckon any estimate they give is at best accurate to ±25%. As part of the work you do, you need to be realistic about how accurate your estimates might be.
How much staff time do you think you and your employer should devote to an estimate that may be ±40% away from what is actually needed, assuming you think you can get to that level of confidence?
My approach has been to try getting management to have the right expectations. They need to be aware of likely accuracy, the impact if the figure is out by 40% (too much or too little kit purchased, etc), the risks involved and the mitigation plan.
For SQL consolidation, I try to get three sets of figures (CPU, Memory, IO) to produce an initial sizing estimate. I would then move workload to the new box, one server or even one DB at a time, and monitor performance. One great advantage of doing this in a virtual environment is you only need a small amount of swing kit to start with, and as you move workload off old SQL instances you can reallocate CPU and memory to the new instance.
To get the CPU figures I get the peak and average CPU usage on each of the old servers, over at least a 14-day period, and rationalise this to a GHz requirement, e.g. for a 4-CPU server running at 2.1 GHz with 75% peak and 37% average this gives 6.3 GHz peak and about 3.1 GHz average. These figures should be adjusted to account for virtual-to-real CPU loading if you are not running SQL at 1:1. You then add these figures for all the servers to be consolidated to get your new server GHz requirement. Finally, work out your target CPU loading % and calculate the number of cores your new server may need (again adjusting for virtual-to-real usage).
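To make the arithmetic concrete, a quick sketch using the example figures above (swap in your own counter data and adjust for any virtual-to-physical CPU ratio):

-- GHz rationalisation for one old server, using the example numbers above.
DECLARE @cpu_count INT          = 4;    -- logical CPUs on the old server
DECLARE @clock_ghz DECIMAL(5,2) = 2.1;  -- clock speed per CPU
DECLARE @peak_pct  DECIMAL(5,2) = 75;   -- peak CPU % over the 14-day window
DECLARE @avg_pct   DECIMAL(5,2) = 37;   -- average CPU % over the same window

SELECT @cpu_count * @clock_ghz * @peak_pct / 100 AS peak_ghz_required,  -- 6.3
       @cpu_count * @clock_ghz * @avg_pct  / 100 AS avg_ghz_required;   -- about 3.1

Repeat per server, sum the results, then divide by the new server's clock speed and target utilisation percentage to estimate the core count needed.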
Do the same for memory and IO. For memory, look at the actual and target buffer pool size, plus any memory used by other processes that will be moved to the new server. IO can be the hardest to deal with, as the allocation of storage to the new server may use a different hardware path in your storage subsystem to the old servers, but if anybody thinks a server consolidation can be done without heavy involvement of the storage administrator, they are missing a vital part of risk mitigation for the project.
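For the current buffer pool figure, the memory manager counters give a quick read, e.g. (the GB conversion is just for readability; the target for the new server is still a planning decision):

-- Current and target server memory for an existing instance, in GB.
SELECT RTRIM(counter_name)          AS counter_name,
       cntr_value / 1024.0 / 1024.0 AS size_gb     -- cntr_value is in KB
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN (N'Total Server Memory (KB)',
                       N'Target Server Memory (KB)');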
You also need to plan how to do the consolidation with minimum disruption. My preferred approach is to set up DNS vanity names (aka DNS aliases) to abstract the actual server environment from the applications, and then change all application connection strings to use the vanity names. This allows a DB to be moved to a new server while the relevant vanity name is repointed at the same time, so no actual change is made at the application level. It also allows for fast fallback (reset the vanity name) if your regression testing hits a show-stopper problem. Expect to spend more time, both in planning and implementation, on minimising disruption than on sizing and building the new servers.
This process should give you a relatively quick way to size your consolidated servers, but as I said above you only need a small amount of swing kit to start the actual consolidation. Make sure the new server has more than the expected CPU and memory for the application you plan to move, then do the move and check performance. Before you move the next application, reallocate CPU and memory no longer needed on the old server to the new server. Over time the old servers get decommissioned and your new server gets all their resources. If you find your capacity planning was 40% above actual needs then you return capacity to the pool for other uses. If your planning asked for less than is needed, you can pause the consolidation until more kit becomes available - this aspect is an important part of risk mitigation.
At the end of the day you have to choose the right approach for your organisation. Trying to test everything before you commit to a change may sound good, but only if the change you test and the change you make are identical. My advice is to accept that any sizing you calculate is unlikely to be better than ±40% of what is needed, and plan the project to cope with what actually happens rather than with what you expect to happen.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
December 26, 2014 at 7:30 am
Thank you Mr. Ed. 🙂
December 26, 2014 at 7:37 am
Mr. Ed,
I just got a license for Ignite.
I'm stumbling through it but not as fast as I would like.
How do I leverage it to measure the metrics that were recommended?
I noticed that Page Life Expectancy and Memory utilization are high.
It seems that the issue is with Bulk Inserts (data warehouse).
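This is roughly how I've been cross-checking what Ignite shows for PLE (just the standard counter DMV, nothing Ignite-specific):

-- Page Life Expectancy straight from the buffer manager counters (seconds).
SELECT RTRIM(object_name)  AS object_name,
       RTRIM(counter_name) AS counter_name,
       cntr_value          AS seconds
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = N'Page life expectancy';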
Thanks.
December 26, 2014 at 8:57 am
Welsh Corgi (12/26/2014)
I noticed that Page Life Expectancy and Memory utilization are high.
As one would generally like to see.
It seems that the issue is with Bulk Inserts (data warehouse).
What issue?
The piece you initially posted looks like a good plan: benchmark each server first, identify what resources it needs and what resource utilisation is normal, then analyse and determine whether or not the consolidation project is going to meet requirements. Overall, whoever wrote that has a good idea; maybe chat with him for details.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 26, 2014 at 9:04 am
Thanks Gail. 🙂
December 26, 2014 at 9:08 am
Once we have a complete analysis, we will build a spec box on the new platform with a subset of data and provide a proof of concept for the data team.
Does a subset of data do it? I thought that it would skew the results.
December 26, 2014 at 9:20 am
A full set would be preferable, but since that's several steps (and probably weeks of work) away, it can be discussed and debated while getting the needed baselines.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 26, 2014 at 9:36 am
We do not have a storage administrator, and that may be contributing to a lot of the problems we are experiencing. :unsure:
December 27, 2014 at 1:20 am
Using the Ignite monitoring tool, the waits are mostly Bulk Inserts, I believe from SSIS packages loading the data warehouse and staging databases.
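For what it's worth, this is roughly how I'm sanity-checking the wait picture outside of Ignite (the excluded wait types are just a common list of benign background waits, not anything Ignite uses):

-- Top waits since the wait statistics were last cleared or the service restarted.
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0 AS wait_time_sec,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'LAZYWRITER_SLEEP', N'SLEEP_TASK', N'WAITFOR',
                        N'BROKER_TASK_STOP', N'XE_TIMER_EVENT',
                        N'REQUEST_FOR_DEADLOCK_SEARCH', N'LOGMGR_QUEUE',
                        N'CHECKPOINT_QUEUE', N'SQLTRACE_BUFFER_FLUSH',
                        N'DIRTY_PAGE_POLL')
ORDER BY wait_time_ms DESC;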