Server Standards - Part 1, Hardware
I wrote a little while back about Coding Standards. There were two parts to that series, Naming Conventions - Part 1 and Formatting - Part 2. As I was searching for other areas where I've implemented standards, it occurred to me that the server itself is a place where I have implemented standards and seen great benefits for those who follow behind me.
Don't let me mislead you into thinking I've come up with all of these myself; some are ideas that made my life easier, adopted when I started working in a particular environment. These articles are more of a consolidation of the items that are nice to standardize on servers. This article deals with hardware; the next part will deal with the server configuration itself.
I know, I know, most of you are on a budget, and when you buy hardware, you buy the best deal you can get. I understand, and I appreciate it, since I've been in the same situation before, but here are a few things to keep in mind while minding your budget.
First, buy quality hardware for your servers. A clone from the local computer shop on the corner is fine for desktops, but not for servers. More people depend on servers, and it's not worth any amount of downtime for parts that are not certified. The certified parts may not be any better, but at least someone else's neck is on the line. Over the years, I've worked with lots of hardware, from 8088s to the latest XEON and P4 processors, along with all types of miscellaneous hardware. One thing I've seen is that parts from the name-brand vendors fail less often than those from clones. I don't have scientific numbers, and I'm sure there are cases where clones have worked fine, but the name brands are "lights out", forgotten in the back room, pretty much everywhere I've been.
You also don't want to be in the position of explaining that the production system is down because of a memory error from the $29.95 stick of RAM you installed yesterday. The big boys do spend a little more to burn things in, and it's worth it (to me at least) to pay them for it. I'm happy to scrimp and save on desktops, but never on servers.
Secondly, buy consistently from the same brands, whether that's a major server vendor (HP, IBM, etc.) or even a local shop. Having consistent hardware makes all the difference in the world when you get into a bind. Having your QA/development systems and your production systems running with similar parts can be a real lifesaver. Not that the scale would be the same, but let me share an experience with you.
I worked at a small company: tight budget, me and a couple of others being jacks-of-all-trades, etc. However, when we were purchasing systems to run our database servers, we decided to buy Dell servers. Now, I'm not endorsing Dell over the other major brands, but pointing out that we decided to buy a single-CPU Dell server with the Dell RAID card and an Intel NIC. About six months later, when we were ready to buy the production server, we again bought a Dell server with the same RAID card and the same NIC. At the time there were actually slightly newer versions of the various cards, but I decided to be consistent and buy the same hardware wherever possible.
Why? Well, early one morning I got a call from the boss: the early arrivals in the office couldn't access the db server. As usual, I was dragging and just heading out the door. Needless to say, I hurried to work and found the RAID array dead. Shortly thereafter we determined the NIC was dead, or at least had a dead port. I quickly downed the development server, moved its NIC to the production server, and was back up much more quickly than if I'd had to find another card, drivers, etc. This was in the NT 4.0 days, when most drivers still came on disk. I know that these days the OS is much better about finding new drivers and dealing with change, but the quicker you can get back up, with the fewest changes, the more stable your environment will be and the happier your clients and management (and hopefully your wallet at Christmas time) will be.
The other big reason for being consistent with hardware is that many of today's devices have software that goes with them: SSL accelerators, load balancers, multi-port NICs, etc. It's hard enough to learn to configure a device when you have time to install it, let alone in a crisis. The fewer programs you have to learn and the fewer tricks you have to remember, the better off you will be. Is another 10% in NIC throughput worth a few hours of elevated blood pressure when you have an issue? It might be, but in all my environments, I take stability and simplicity over the last little bit of performance.
The last thing I can say about hardware is this: buy what you need for tomorrow, not just for today. Just as you'd spend a little extra time designing a slightly better solution in anticipation of future changes or needs, do the same for hardware. Argue, plead, cajole, whatever it takes to get a little extra to insulate yourself against future growth. Incremental increases in hardware don't cost much these days; going from 1GB to 2GB of RAM is a small price to pay for some more headroom on the server.
I'm sure everyone wants to know exactly what they should buy, and I'd love to have a hardware sizing diagram, but it's really a crapshoot. I've seen servers die under 30 users that scream along with 1,000. There are so many factors that it's hard to know how to plan. I don't want to digress into a "how do I size my server?" discussion here, but do some testing, then allow for error and buy a little more.
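The "test, then allow for error" approach boils down to simple arithmetic. Here's a minimal sketch; the load figures and the 30% margin are illustrative assumptions on my part, not numbers from any particular environment:

```python
import math

def servers_needed(peak_load, capacity_per_server, headroom=0.30):
    """Server count after padding the measured peak load by `headroom`."""
    # Pad the measured peak to allow for measurement error and growth,
    # then round up to whole servers.
    padded = peak_load * (1 + headroom)
    return math.ceil(padded / capacity_per_server)

# Suppose testing shows a peak of 850 requests/sec and one box handles 400:
print(servers_needed(850, 400))   # 850 * 1.3 = 1105 -> 3 servers
```

The exact margin matters less than having one at all; the point is to buy against the padded number, not the measured one.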
But keep buying according to your standards.
There are lots of other reasons to standardize on hardware: lower support costs (internally and from the vendor), price breaks, a better relationship with your vendor, etc. I'm sure some of you have more; post them below in the comments section.
One last thing to keep in mind with hardware is this. You want to spend some time making a good design for your system and application, but a good architect/DBA/programmer will cost you US$70,000+ a year. That works out to well over a thousand dollars a week on average, closer to two thousand once you add benefits and overhead. That kind of money goes a long way in a server, so optimize and design well, but then throw some hardware at the problem.
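The back-of-the-envelope version of that trade-off looks like this. The US$70,000 salary is from the figures above; the overhead multiplier and the upgrade price are illustrative assumptions, not quotes:

```python
# Weekly cost of a skilled person vs. a one-time hardware purchase.
annual_salary = 70_000
overhead = 1.4                      # assumed loaded-cost multiplier (benefits, taxes)
weekly_cost = annual_salary * overhead / 52

ram_upgrade = 500                   # assumed price of a 1GB -> 2GB server upgrade
days_of_labor = ram_upgrade / (weekly_cost / 5)

print(f"Loaded weekly cost: ${weekly_cost:,.0f}")           # ~$1,885
print(f"Upgrade equals {days_of_labor:.1f} days of labor")  # ~1.3 days
```

If the upgrade saves more than a day or two of tuning time, it has already paid for itself.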
As always, I welcome comments, and watch for the next part in this series, which covers server configuration.
©dkRanch.net January 2003