
Capacity planning for new hardware


I get asked this question a lot: When getting a new application and database, what kind of hardware do I need?

The answer is easy: it depends.

It depends on

  • the load you will put on the SQL Server instance
  • the SLA and performance numbers you need
  • the uptime you need to maintain (RPO/RTO factor in here)

There are other factors, but essentially the server needs enough hardware to handle the workload within the time you need it handled.

It doesn’t depend on

  • the number of users
  • the number of databases
  • the size of the databases*

A slight asterisk on the last one. The size of the databases matters for the amount of storage you need to buy, but it doesn't necessarily affect the RAM or CPU you need. The number of users and databases can contribute to load, but those numbers in a vacuum don't determine the hardware. I have a test machine with dozens of databases, and it generates no load. Why not? No workload, or not much of one.

The same thing applies to users. If the users do a lot of work, making changes and querying large data sets, the database can be heavily loaded. However, I once had a database with 5,000 clients. It was updated every hour by agent software on desktops across our company, but each update was a singleton update to a specific row, and the whole thing was comfortably hosted on a 2-CPU, 2 GB RAM Standard Edition SQL Server 2000 instance.

How do I plan?


This is a tough question in many ways. I have searched all over, asked questions, and even written a SQL University week on it (overview, disk, other). There are two main scenarios here, a brand new system or an upgrade of an existing one, but both get similar treatment.

You have to test your workload against hardware and see what happens. The workload can be a simulated one, or, if you have an existing instance being upgraded, a capture of the real one.

There are numerous replay tools or testing tools, but the bottom line is you have to simulate the way you use SQL Server on actual hardware. You can do some extrapolation, but don’t expect it to be a linear change.
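To make that concrete, here is a minimal sketch, assuming SQL Server 2008 or later: an Extended Events session that records completed batches and RPC calls to a file, so you have real statements to drive a replay or load test on the candidate hardware. The session name and file path are placeholders, and a real capture would need more filtering and sizing than this.

-- Minimal sketch of a workload capture (names and paths are placeholders).
-- Records completed ad hoc batches and stored procedure calls to a file.
CREATE EVENT SESSION workload_capture ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.sql_text, sqlserver.database_name, sqlserver.username)),
ADD EVENT sqlserver.rpc_completed
    (ACTION (sqlserver.sql_text, sqlserver.database_name, sqlserver.username))
ADD TARGET package0.event_file
    (SET filename = N'C:\Temp\workload_capture.xel', max_file_size = (256))
WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, MAX_DISPATCH_LATENCY = 5 SECONDS);
GO

ALTER EVENT SESSION workload_capture ON SERVER STATE = START;
-- ...let it run through a representative busy period, then stop it:
-- ALTER EVENT SESSION workload_capture ON SERVER STATE = STOP;

Capturing a busy, representative window matters more than the tool; the same idea works with a server-side trace on older versions, or if you are feeding a trace-based replay tool.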

For example, if I have a dual-core CPU of a given type, 1 GB of RAM, and a two-drive RAID 1 array, I can't assume I'll get twice the performance from a quad-core CPU of the same type with 2 GB of RAM and two RAID 1 arrays. It almost becomes an art to examine the memory usage, the IOPS you are generating, and the percentage of CPU in use. As you scale up, those three values may change and shift relative to each other. More RAM can reduce IOPS and CPU, especially for reads, but not necessarily. If you have a 1 TB table you're constantly summarizing, you might not see much better performance going from 1 GB to 2 GB of RAM.
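As one rough way to look at those three values on an existing instance, here is a sketch using a few DMVs available in SQL Server 2005 and later. Treat the output as a baseline to compare before and after a hardware change, not as a sizing formula.

-- 1. I/O: cumulative reads/writes and stall times per database file since startup.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;

-- 2. Memory: page life expectancy as a crude indicator of buffer pool pressure.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy';

-- 3. CPU: recent SQL Server vs. idle utilization from the scheduler monitor ring buffer.
SELECT TOP (30)
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS idle_pct
FROM (SELECT [timestamp], CONVERT(xml, record) AS record
      FROM sys.dm_os_ring_buffers
      WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
        AND record LIKE N'%<SystemHealth>%') AS rb
ORDER BY rb.[timestamp] DESC;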

Ultimately it’s a bit of a guess for most of us. Fortunately, most of our workloads can be handled by modest hardware, since most databases are relatively small. Test, make some guesses, and go a little big, especially on RAM and CPU. In my experience, those are the hardest to add later. People seem to expect disks to fill up and more space to be purchased, but it’s often harder to get CPU or memory upgrades approved, especially if they require newer motherboards.

I wish I had a better answer, and the numerous articles out there don’t seem to me to do a much better job, but read as much as you can and try to learn how others view their systems. Then make your own guesses about what is best for your system.

