"fixed server" or Blade

I am thinking about long-term strategy for a server farm. My question, from experience: why did you choose a modular blade server over a non-modular server, or vice versa?

At the moment the thinking is that a blade lets you grow the server with respect to CPU, memory and I/O connectivity as required (within limits, of course). A non-modular system is more limiting than a blade and therefore has a shorter life span.

  • Blades are a neat way to shove a load of servers together in one manageable place. However, they have limitations and shared topologies.

IMHO, if your SQL Server is mission critical, get a decent box for it; don't put it on a blade.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

Fixed servers have limitations too. The blades, from what I understand, can be "grown" to a greater extent as SQL Server demands increase, so they are possibly the longer-term solution. That is the key part of the question. I am looking for real-life experience to affirm or disprove this idea.

    Any takers?

  • AnzioBake (1/16/2013)


    Fixed servers have limitation too.

    Such as?

    Don't tell me that a machine such as an HP DL585 has disadvantages over a "pizza box" blade server!

AnzioBake (1/16/2013)


The blades, from what I understand, can be "grown" to a greater extent as SQL Server demands increase

They allow a finite amount of hardware (HBAs, NICs, etc.), which IMHO is not scalable.

    AnzioBake (1/16/2013)


    I am looking for real life experience to affirm or disprove this idea.

    Any takers?

I have real-life experience, and I'm not a blade fan, which I think is what you're looking for!


Okay, so maybe my understanding is incorrect.

As examples:

    Fixed server: 4 sockets, 8 memory slots, dual NIC, dual IO connectors

    Blade: Four server slots, each with above

When the server becomes hardware-bound, for the DL585: purchase a complete new server, move the DBs over, reconfigure the apps, etc.

For the blade: add a new server board and configure it to incorporate into the current server.

    What am I misunderstanding?

  • AnzioBake (1/17/2013)


Okay, so maybe my understanding is incorrect.

As examples:

    Fixed server: 4 sockets, 8 memory slots, dual NIC, dual IO connectors

    Blade: Four server slots, each with above

Invalid comparison: a blade server is a single unit, not four combined!

For a start, compare the CPU types supported by the two: the DL585 supports a much higher-spec CPU than the blade does.

The number of expansion slots available in your blade, as opposed to a DL585 for example, will be vastly different. The DL585 G7 has around 11 PCI-E slots and can handle up to 1 TB of memory and 8 TB of internal SAS storage; try fitting that lot into a blade!!

Now take the blade, and base this on the higher-spec BL680c rather than the inferior BL460c:

Expansion slots = 3, max memory = 128 GB, 2 internal drive bays.

    Hmm, what's more scalable here?

    AnzioBake (1/17/2013)


When the server becomes hardware-bound, for the DL585: purchase a complete new server, move the DBs over, reconfigure the apps, etc.

For the blade: add a new server board and configure it to incorporate into the current server.

    What am I misunderstanding?

Add a new board?? You neglect to mention the part where you reinstall the OS, since the hardware totally changes when you change the mainboard. If your DL585 only has 2 CPUs and 64 GB RAM and you add extra, you don't need to reinstall the OS.


Here's my understanding of blade systems (bear in mind, I have no direct experience with blades):

Each individual blade in the system is a "stand-alone" server which only shares power and connectivity with the other blades in an enclosure. So if you start with a single blade with, let's say, 2x CPU / 32 GB RAM / 2 TB HD, adding a second blade does not give you the equivalent of 4x CPU / 64 GB RAM / 4 TB HD, but instead gives you two servers.

If you have an application which can be load-balanced, and the know-how to set such a thing up, you'd get some benefits, but the complexity goes up.

    The goal of blades is lots of individual machines cranking on data (Web farms, rendering farms, SETI@Home nodes, etc) where each is a stand-alone system that can function without the rest.

    I believe there may be VERY specialized systems that you can "hot-add" processors or RAM to, but they're horrifically expensive.

    TL;DR:

I agree with Barry: get a beefier single server which can be upgraded later if need be (add another CPU, more RAM, more HDs).

    Jason
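Jason's point, that a second blade yields two independent servers rather than one bigger one, can be modelled in a few lines. The per-node figures are the hypothetical 2x CPU / 32 GB / 2 TB example from his post:

```python
# Scale-out model: a blade enclosure grows the node count, not the size of
# any single node. Per-node figures are the hypothetical ones from the post.
def add_blade(enclosure, cpus=2, ram_gb=32, disk_tb=2):
    """Adding a blade appends another independent server to the enclosure."""
    enclosure.append({"cpus": cpus, "ram_gb": ram_gb, "disk_tb": disk_tb})
    return enclosure

enclosure = add_blade([])          # first blade
enclosure = add_blade(enclosure)   # second blade

total_ram = sum(n["ram_gb"] for n in enclosure)         # RAM across the farm
largest_node_ram = max(n["ram_gb"] for n in enclosure)  # RAM any one OS sees

print(len(enclosure), total_ram, largest_node_ram)
```

A single SQL Server instance runs on one node, so only `largest_node_ram` matters to it; the farm-wide total only helps workloads that can be split across nodes.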

  • jasona.work (1/21/2013)


    Each individual blade in the system, is a "stand-alone" server which only shares power and connectivity with the other blades in an enclosure.

Correct; they share power, cooling and networking.

    jasona.work (1/21/2013)


So if you start with a single blade with, let's say, 2x CPU / 32 GB RAM / 2 TB HD, adding a second blade does not give you the equivalent of 4x CPU / 64 GB RAM / 4 TB HD, but instead gives you two servers.

Correct again; most blades only have space for two internal HDDs and rely on external storage. The problem is that you only have space for maybe one HBA, so you're limited on SAN connectivity too.

    jasona.work (1/21/2013)


    The goal of blades is lots of individual machines cranking on data (Web farms, rendering farms, SETI@Home nodes, etc) where each is a stand-alone system that can function without the rest.

Exactly; they're designed to save on space and power and make a group of servers a more manageable unit. However, they're not ideal for all situations.

    jasona.work (1/21/2013)


    I believe there may be VERY specialized systems that you can "hot-add" processors or RAM to, but they're horrifically expensive.

Windows Server 2008 provides hot-add capability for memory (and, as of 2008 R2, for CPUs), provided the underlying hardware supports it.


From what I have read, the Cisco UCS system, as an example, does exactly that:

Start with one blade, add a new blade, configure it, and you have the extra processing power on the same server. That is why I raised this question.

I also believe that it depends on one's environment. At a previous employer, the principal database servers resided on blade servers attached to a high-performance SAN. The mirror servers were stand-alone rack servers with DASD. This setup provided more than enough horsepower and availability for the LOB (line-of-business) applications. HR, Finance and SIS (not to be confused with SSIS) each had their own servers, and the 32 GB of RAM on the blade servers was more than enough to support the applications then and for the foreseeable future.
