Do We Need a Raised Floor?

  • As DBAs, we often don't need to worry about the data center / server room / computer closet, but it's still something we may have influence over, or where we can lend the sys admins a hand. At many of the smaller companies I've worked for, I've been the guy with the most experience, and I've been used as a sounding board by the network guys.

    I recently attended a webinar sponsored by American Power Conversion (APC) on whether we still need to build raised floor data centers. Their focus was obviously on their own products, but it covered the problems inherent in heat buildup in newer infrastructures, particularly those using blade servers. It seems from their studies that so much heat is being produced that the traditional raised floor environment can't deliver enough cooling to each individual rack, resulting in hot spots that can drastically shorten equipment life or even cause problems while the machines are running.

    I was slightly skeptical of this, but while leaving lunch with some friends I ran into a guy who works at the Viawest data center where SQLServerCentral.com is hosted. He confirmed that heat is becoming an issue and that it's one of the reasons all the wiring at that data center is run in overhead trays. It's still something they're concerned about and working on.

    APC, of course, has a solution: racks that contain their own cooling, arranged in a particular pair of rows with a "hot zone" between them that captures hot air and returns it to the room at ambient temperature. I never got an answer on how the cost of replacing your current racks compares against the supposed $800/rack cost of a raised floor, so I can't comment on whether their solution has merit.

    Still, I know it's a real problem, and one we had to address while I worked at JD Edwards. Our inventory system for managing racks had to be modified to track power and heat parameters for the servers, because those resources were becoming scarce and allocation decisions needed to be made (a rough sketch of that kind of check follows below). It's surprising that a huge data center like ours would run into those issues, but with more than 1,000 servers, a dozen or so AS/400s, and multiple SANs, I guess we were pushing the envelope of the design. After all, when the room was built, there probably wasn't a server under 4U.

    Now we're getting more than one server per U in blade configurations.
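
    To make that allocation problem concrete, here's a minimal sketch (in Python) of the kind of power-and-heat check such an inventory system might perform. The class names, budgets, and wattage figures are all hypothetical, not the actual JD Edwards code:

        # Minimal sketch of a rack allocation check that treats power and
        # cooling as scarce resources. All names and limits are hypothetical.
        from dataclasses import dataclass, field

        WATTS_TO_BTU_HR = 3.412  # 1 watt of draw ~ 3.412 BTU/hr of heat


        @dataclass
        class Server:
            name: str
            power_draw_w: int


        @dataclass
        class Rack:
            name: str
            power_budget_w: int   # usable power for the rack, in watts
            cooling_btu_hr: int   # cooling capacity available, in BTU/hr
            servers: list = field(default_factory=list)

            def power_used_w(self) -> int:
                return sum(s.power_draw_w for s in self.servers)

            def can_accept(self, server: Server) -> bool:
                """True only if both power and cooling headroom remain."""
                new_draw = self.power_used_w() + server.power_draw_w
                new_heat = new_draw * WATTS_TO_BTU_HR
                return (new_draw <= self.power_budget_w
                        and new_heat <= self.cooling_btu_hr)


        if __name__ == "__main__":
            rack = Rack("R12", power_budget_w=5000, cooling_btu_hr=15000)
            blade = Server("blade", power_draw_w=350)
            count = 0
            while rack.can_accept(blade):
                rack.servers.append(blade)
                count += 1
            # With dense blades, cooling runs out long before the rack
            # units do.
            print(f"{rack.name} can hold {count} of these servers")

    With numbers like these, cooling becomes the binding constraint well before the power budget or the physical rack space does, which is exactly the kind of thing we had to start tracking.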

    Steve Jones

  • I'm a big fan (pardon the pun) of proper environmental control in server rooms. It's hard, though, to find proof that heat or temperature actually causes damage without the evidence being a selling tool for cooling equipment.

    Where I gather my information to conclude it's actually an issue is from the tech specs on hard disks, which list operating temperature ranges, in some cases as narrow as 5-45 degrees C. I've then looked at review sites that measured the average and peak operating temps of some of those drives.

    You'd be surprised how many of them edge up to 43+ degrees with little work.

    Not to mention that I've started putting thermistors on all my deployments where the air control is negligible, and over the last few years I've seen some interesting effects. A general sluggishness shows up on machines where any one part reaches 39 degrees.

    I was wondering if anyone has seen such events, or has their own stories of power quality or heat dissipation related incidents? (It's not from a clock-slowing feature on any of the mobos; I turned those off for a period just to confirm the sluggish feel still transpired.)
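
    For anyone who wants to collect the same kind of numbers, below is a rough logger that shells out to smartctl (from smartmontools). The device path and threshold are placeholders, and it assumes the drive reports SMART attribute 194 (Temperature_Celsius) or 190 (Airflow_Temperature_Cel); raw-value layouts vary by vendor, so treat it as a sketch rather than a finished tool:

        # Rough drive temperature logger built on smartctl (smartmontools).
        # Assumes attribute 194 (Temperature_Celsius) is reported; some
        # drives use 190 (Airflow_Temperature_Cel) instead.
        import re
        import subprocess
        import time

        DEVICE = "/dev/sda"   # placeholder; point at the drive you care about
        INTERVAL_SECS = 60
        WARN_AT_C = 39        # where I start noticing the sluggishness


        def read_temp_c(device):
            out = subprocess.run(["smartctl", "-A", device],
                                 capture_output=True, text=True).stdout
            for line in out.splitlines():
                if re.match(r"\s*19[04]\s+\w*Temperature", line):
                    # RAW_VALUE is the tenth column of the attribute table
                    return int(line.split()[9])
            return None


        if __name__ == "__main__":
            while True:
                temp = read_temp_c(DEVICE)
                stamp = time.strftime("%Y-%m-%d %H:%M:%S")
                if temp is None:
                    print(f"{stamp} {DEVICE}: no temperature attribute found")
                elif temp >= WARN_AT_C:
                    print(f"{stamp} {DEVICE}: {temp} C  <-- above threshold")
                else:
                    print(f"{stamp} {DEVICE}: {temp} C")
                time.sleep(INTERVAL_SECS)

    Run it as root (smartctl needs raw device access) and redirect the output to a file so you can line the readings up against the sluggish periods.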
