Server Hardware Sizing

  • Hi,

    I am trying to find out if there are general server hardware sizing guidelines available. My requirements are:

    SQL Server 2005 for a data warehouse

    Size is going to be around 750 GB - 1 TB

    20 - 30 users simultaneously logged in and querying the data

    I can do the database design, but need guidelines and help around the actual hardware requirements (RAID level and controllers, NIC configuration, CPU, memory, clustering, SAN, etc.). I've tried to look around but could not find anything worthwhile.

    Thanks in advance.

  • General requirements - Go big!

    Seriously, it's hard to guess at hardware until you know what the load is. Is this a brand new system? Any prototyping to get an idea of reads/writes/transactions/sizes of queries?

    RAID - I've used RAID 5 for years and been successful, but there are good reasons not to use anything other than RAID 1 or 10. http://www.baarf.com
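    To put rough numbers on the RAID 5 vs. 1/10 trade-off, here's a back-of-envelope sketch (the drive count, size, and per-disk IOPS figures are assumptions for illustration, not measurements):

```python
# Rough RAID trade-off calculator: usable capacity vs. random-write cost.
# Write penalty: RAID 10 = 2 physical writes per logical write; RAID 5 = 4
# (read data, read parity, write data, write parity).

def raid_usable_gb(level, n_disks, disk_gb):
    """Usable capacity in GB for a single RAID group."""
    if level == 10:
        return n_disks // 2 * disk_gb   # half the disks are mirrors
    if level == 5:
        return (n_disks - 1) * disk_gb  # one disk's worth lost to parity
    raise ValueError("unsupported RAID level")

def raid_write_iops(level, n_disks, iops_per_disk):
    """Approximate random-write IOPS the group can sustain."""
    penalty = {10: 2, 5: 4}[level]
    return n_disks * iops_per_disk // penalty

# Example: 12 x 300GB 15k drives, assuming ~180 random IOPS per spindle
for level in (5, 10):
    print(level, raid_usable_gb(level, 12, 300), raid_write_iops(level, 12, 180))
```

    RAID 5 gives you more usable space, but RAID 10 sustains roughly twice the random writes, which is what matters for log and tempdb traffic.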

    NICs - Can't imagine needing more than 1 these days. They're so fast. I guess a spare would be handy, but I've never really worried about it.

    CPU - I'd go quad core these days; it doesn't make sense to go less. But are you 32- or 64-bit? Do you have SSIS packages, linked servers, etc.? 64-bit can be flaky in some places. However, 64-bit usually benefits more from on-chip cache than from raw clock speed. It's hard to tell on this load if 2 or 4 CPUs would work, or if you'd need 8. Depending on budget, I might go a 4-way box with 2 CPUs for now, or an 8-way with 4 CPUs, giving me room to expand.

    RAM - more is better. If you can get 8 or 16GB, get it, even with 32-bit.

    Clustering - this is about downtime. Do you need it? It's expensive.

    Make sure your version of SQL Server supports what you pick. Most higher-end configurations require Enterprise Edition, so be sure that your company is buying that.

  • We use HP hardware where I work.

    ProLiant DL380s are quad core and go to 32GB of RAM. They just released a DL580 that will take up to 256GB of RAM.

    For storage, HP has the MSA70, which takes 146GB SAS drives, and the MSA60, which takes 750GB SATA and 300GB SAS drives, with up to 12 drives per enclosure.

    Last year I set up a server with RAID 0+1 and it was pretty quick, but people complained about the cost. Now we are just staying with RAID 5 and haven't really noticed any performance difference. The new hard drives are very fast, and we have so many that the sheer number of spindles is probably making up for any degradation from writing the parity bit.

    And unless you absolutely have to install 32-bit, I would install the 64-bit versions of Windows and SQL Server over 32-bit. With 64-bit you can migrate your licenses to faster hardware as long as there is driver support; with 32-bit you are really limiting yourself.

    I would ask for the following:

    DL580 with 32GB of RAM (make sure you get the densest RAM chips available so you won't have to junk them later when you upgrade)

    2 quad-core CPUs

    P400 RAID controller will be included

    get the backplane so it can take all 16 internal hard drives

    DVD-RW

    P800 RAID controller

    MSA60

    12 300GB SAS hard drives

    16 146GB SAS hard drives

    Use the internal hard drives for the OS, paging file, tempdb, and logs.

    Use the external drives for the database and index storage.

    With the new SAS storage you can daisy-chain something like 8 MSAs per RAID controller. I think it comes out to around 100TB of storage max.
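    As a sanity check on that figure, here's the raw arithmetic (enclosure and drive counts are taken from the post above; the exact chaining limit is an assumption):

```python
# Sanity-check the "~100TB per controller" claim: MSA60 enclosures
# daisy-chained off one RAID controller, using the quoted chain length.
enclosures = 8
drives_per_enclosure = 12
drive_tb = 0.75   # the 750GB SATA drives mentioned earlier

raw_tb = enclosures * drives_per_enclosure * drive_tb
print(raw_tb)     # raw capacity before any RAID overhead
```

    With 750GB drives that works out to 72TB raw, before RAID overhead; larger future drives would push it toward the 100TB figure.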

    It takes up to 4 quad-core CPUs, but I'm not sure if Windows 2003 Enterprise will let you use all of them.

  • You do want to go 64-bit; forget 32-bit. As for the rest, I agree with Steve: it's just difficult to know. I will disagree about NICs, though - there should never be one of anything (single point of failure). You'd want at least two NICs, probably teamed.

    I've just spec'd a DW server: x64, 4 x quad core (although I'd prefer dual-core AMD), 64GB RAM, 42 15k disks + 10 internal SAS. Note that most SAS disks only have a 10k spin speed, so 15k SCSIs will give you slightly better throughput and response.

    I only have an estimated 100GB of data at this time.

    However, it's really difficult to work out - we were doing a data load test this afternoon and it seems there's maybe a bit of a scaling issue: after 3 hours we'd managed about 10% of the load, run a 4-way quad core at 90% CPU solid, and done the most IO I've ever clocked on a query - 4.5 billion logical reads.
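    For perspective, SQL Server counts logical reads in 8KB pages, so that figure works out to roughly:

```python
# SQL Server logical reads are counted in 8KB pages, so 4.5 billion
# logical reads means the buffer pool served roughly this much data:
logical_reads = 4_500_000_000
page_kb = 8

total_tb = logical_reads * page_kb / 1024 ** 3   # KB -> TB
print(round(total_tb, 1))                        # TB served from cache
```

    That's about 33.5TB of page traffic on a 100GB database, which is why the CPUs were pegged - the same pages were being scanned over and over.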

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Hardware sizing is as much art as science, but you need to ask some serious questions about the expected performance of the server before picking a particular one. You mentioned a TB-size warehouse - does that include 3-5 years of data growth, or are you going to be keeping a fixed amount of data in the warehouse? 20-30 concurrent users, or 20-30 occasional users? What tools are going to be used to access the data? Backups? Analysis Services? User expectations as to query performance? Loading/ETL vs. query? It may be that you need a loading/updatable instance and a reporting instance. The list is long.

    As for the differences between a DL380 and a DL580 (or 585): if scalability over time is an issue, I'd take a serious look at buying a 580/585 instead of a DL380. You don't need to load it up (e.g. put only 2 procs in it for now), but it's a lot easier to pop a couple of additional procs and more RAM into a machine that already has the extra slots than it is to migrate to a bigger machine (unless you're going to VM).

  • colin Leversuch-Roberts (2/25/2008)


    you do want to go 64bit, forget x32. As for the rest I agree with Steve it's just difficult to know. I will disagree about NIC's there should never be one of anything - single point of failure, you'd want at least two nic's probably teamed.

    I've just spec'd a DW server - x64 4 x quad core ( although I'd prefer dual core AMD ) 64gb ram 42 15k disks + 10 internal sas. Note that most SAS disks only have a 10k spin speed so 15k scsi's will give you slightly better throughput and response.

    I only have an estimated 100Gb of data at this time.

    However it's really difficult to work it out - we were doing a data load test this afternoon - seems there's maybe a bit of a scaling issue - after 3 hours we'd managed about 10% load , run a 4 way quad core at 90% cpu solid and done the most io I've ever clocked on a query, 4.5 billion logical reads.

    Why AMD? Their current lineup is years old, and Intel finally has some good products on the market.

    I would also stay away from the old-style parallel SCSI disks, even if they are 15k. SAS disks have dedicated bandwidth per hard drive; SCSI bandwidth is shared across the bus. The drive might spin faster, but the back-end bus is a piece of junk, and that is where your delay will be.

    HP has 300GB 15k SAS drives now
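    A rough comparison of the per-drive bandwidth that post is getting at (nominal figures: Ultra320's 320MB/s is shared by every device on the channel, while 3Gb/s SAS gives each drive its own ~300MB/s link; the device count per channel is an assumption):

```python
# Nominal per-drive bandwidth: parallel SCSI shares one bus across all
# devices on the channel; SAS is point-to-point per drive.
u320_bus_mb_s = 320    # Ultra320 SCSI: shared by the whole channel
drives_on_bus = 14     # typical device count on one channel (assumption)
sas_lane_mb_s = 300    # 3Gb/s SAS after encoding overhead, per drive

scsi_per_drive_mb_s = u320_bus_mb_s / drives_on_bus
print(round(scsi_per_drive_mb_s, 1), sas_lane_mb_s)
```

    With the bus fully populated, each SCSI drive gets only a small slice of the channel, which is the "shared backend" complaint above.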

    I love x64, but I swear we've had more weird problems with it than we ever did with Windows 2000: blue screens from driver problems, and MS even released several hotfixes to fix storage problems with Windows 2003 SP2.

  • Seldom mentioned, but keep your seek times to a minimum by NOT grossly oversizing your hard drives. It keeps your costs down as well. If you can feel comfortable with a 36GB drive for a given set of files, then don't get a larger one.
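    A toy model of why under-filling a drive helps (the square-root relationship is an illustrative assumption, not a drive spec - real seek profiles vary by model):

```python
import math

# Toy model: average seek distance grows with the fraction of the platter
# in use; assume seek time scales roughly with the square root of distance.
def relative_seek(used_fraction):
    """Seek time relative to a completely full drive (used_fraction=1.0)."""
    return math.sqrt(used_fraction)

print(relative_seek(0.25))   # filling only 25% of a drive halves seeks here
```

    The point stands regardless of the exact curve: data confined to a small portion of the platter means shorter head movements.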

    Tom Garth
    Vertical Solutions

    "There are three kinds of men. The one that learns by reading. The few who learn by observation. The rest of them have to pee on the electric fence for themselves." -- Will Rogers
  • Thank you all for all the replies and suggestions.

    I am not a hardware person, and on SQL Server forums hardware is generally not discussed, from what I could find.

    As for some of the questions raised by Old Hand, which are very valid:

    The 1 TB size will include about 3 years' worth of data, and is for a data mart, not the warehouse - that is going to be planned a couple of years down the road.

    There will be a max of 20 concurrent users. Also worth mentioning: the data load needs to happen at 15-30 minute intervals throughout the day.

    Business Objects is going to be the front-end toolset (WebI and Crystal), with Data Integrator as the ETL tool.

    This is going to be used mostly for canned reports to begin with, with analysis coming later. The user expectations in regard to query performance are not that high at this point.

    Backups and the DR scenario are something I have never dealt with on this scale, and I would appreciate some input.

    Although the hardware (the architecture, the choices, and the vendors) changes very often, there ought to be some general guidelines to start sizing the server, just as there are for a database configuration, especially if you are starting off with a new requirement.
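    In that spirit, here's the kind of back-of-envelope starting point the thread converges on - every coefficient here is an assumption to be validated with a prototype, not an established guideline:

```python
# Hypothetical first-cut sizing heuristic for a small data mart.
# All ratios (RAM per TB, users per core, disk headroom) are assumptions.

def size_dw_server(data_tb, concurrent_users, growth_pct_per_year, years):
    """Return a rough hardware starting point for a SQL Server data mart."""
    future_tb = data_tb * (1 + growth_pct_per_year / 100) ** years
    ram_gb = max(16, int(future_tb * 16))   # ~16GB RAM per TB of warehouse
    cores = max(4, concurrent_users // 4)   # ~1 core per 4 active users
    disk_tb = future_tb * 2.5               # data + indexes + tempdb + headroom
    return {"ram_gb": ram_gb, "cores": cores, "disk_tb": round(disk_tb, 1)}

# OP's numbers: 1TB already including 3 years of growth, 20 concurrent users
print(size_dw_server(data_tb=1.0, concurrent_users=20,
                     growth_pct_per_year=0, years=3))
```

    Treat the output as a conversation starter with the vendor, then adjust once a prototype gives real read/write and query-size numbers, as suggested earlier in the thread.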

    Thanks again for your input.

    Yup, the SAS drives are coming along now. As I've already posted elsewhere, don't muddle bandwidth with IO performance - it takes a lot of SQL Server IO to swamp any bus, but it's very easy to become IO bound.

    The scaling of SQL Server boxes is a difficult one; consider the difference between parallel and serial processing in your database. A quad core (which isn't very good, as it shares cache) generally has a lower clock speed, so you'll probably lose out on raw processor speed vs. parallelism. I personally think we've got carried away with cores and we're applying far too many to our workload just because we can. I had a discussion about this the other day and suggested we might want to go dual x quad to reduce license costs - after all, we're actually considering an 8-way box here (in last year's terms, ish) - however we'll go quad x quad. I'd prefer 4 x dual to get faster clock speed.

    Not going to get into the AMD vs. Intel argument - just consider that x64 is based on AMD technology, not Intel's. There are some interesting things about the hypervisor for W2008 too which favour AMD.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/
