Help on building a new Server

  • I am in the process of building a new server to run our SQL Server databases (we have many TBs of data). I am looking for some help deciding what I should request for hard drives. My thought was to have 5 logical drives:

    Drive 1: System drive

    Drive 2: Database drive for DB's with a lot of read/writes

    Drive 3: Database drive for other DB's and TempDB

    Drive 4: Log files

    Drive 5: Backups and other storage

    Would this be a good setup? What type of disk (SSD or other) and RAID should I use for each drive? Keep in mind I'm trying to get the most bang for the buck. I don't have an unlimited budget, and I want to spend a lot on RAM.

  • Are your disks on a SAN or physically attached to the machine?

    SAN disks in a good SAN should be set up in RAID by the SAN controller, and that should be transparent to you.  The other fun part with SANs is that you generally don't need to ask for separate drives: the SAN controller won't usually give you one physical drive when you ask for a chunk of disk, and even if it does, it can move that data around later based on disk use.

    If you have physically attached disks, you want to separate the high I/O operations onto their own disks where possible.  What I mean is 1 dedicated disk per high I/O database.

    As for the type of disk and RAID, that depends again on your disk I/O, capacity requirements, and budget.  If your disk I/O is not currently a bottleneck, then increasing the disk I/O (such as by moving from 10,000 RPM HDDs to SSDs) may not be the most bang for your buck, and RAID-ing your disks for performance may not be that beneficial.  On the other hand, if your bottleneck is disk I/O, then moving up to SSDs and RAID-ing for performance may help.

    Without knowing a bit more about where your bottlenecks are, it is hard to know what would be a "good setup".  It may be that partitioning your data across multiple disks is the best bet, or it may be better to have everything on one disk.
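
    If you want to check whether disk I/O is actually a bottleneck on the current box, a rough sketch like this against the file stats DMV shows cumulative I/O stalls per database file since the last restart (consistently high average stalls point at the disk):

      -- Rough sketch: average I/O stall per database file since the last
      -- SQL Server restart.  Consistently high averages suggest a disk bottleneck.
      SELECT DB_NAME(vfs.database_id)                             AS database_name,
             mf.physical_name,
             vfs.num_of_reads,
             vfs.num_of_writes,
             vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
             vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
      FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
      JOIN sys.master_files AS mf
        ON mf.database_id = vfs.database_id
       AND mf.file_id     = vfs.file_id
      ORDER BY avg_read_stall_ms DESC;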

    Depending on how much page file (swap) space you need, you may want a second disk for that.  But be cautious about the C drive page file; some things will go wonky on you if you disable the page file on the C drive...

    My only concrete recommendation would be to put your backups on a different computer altogether.  You don't want that PC to have some weird failure that causes all of its drives to die on you.  I had that happen on a personal PC: the power supply was too small and eventually caused one drive to spark against the case, which fried all the drives on the controller.  At work, we had a 3rd party tool handling our filesystem (it wasn't NTFS; it allowed a single disk to share read/write access across multiple physical servers, which we used for failover), and the tool did a bad write on one of the 3 servers and corrupted the entire disk.

    So my recommendation is to have the backups on a different PC (offsite if possible) so they are more secure.  If the one PC holding both the databases and the backups explodes, you have no backups.

    The above is all just my opinion on what you should do.
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • The disks would be local, physical disks.  We have a lot of very large databases, and then there are archives for those databases.  At any given time we may be copying millions of rows from one database to another (or from one table to another inside the same database).
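
    For those big copies, something batched (a rough sketch; the table and column names here are made up) keeps the transaction log from taking one giant hit:

      -- Hypothetical sketch: copy rows in 50,000-row batches so the log can
      -- clear between batches instead of growing in one huge transaction.
      DECLARE @batch INT = 50000;

      WHILE 1 = 1
      BEGIN
          INSERT INTO ArchiveDB.dbo.OrdersArchive (OrderID, OrderDate, Amount)
          SELECT TOP (@batch) o.OrderID, o.OrderDate, o.Amount
          FROM   LiveDB.dbo.Orders AS o
          WHERE  NOT EXISTS (SELECT 1
                             FROM ArchiveDB.dbo.OrdersArchive AS a
                             WHERE a.OrderID = o.OrderID);

          IF @@ROWCOUNT < @batch BREAK;   -- last (partial) batch done
      END;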

  • I think the setup is likely OK, but you ought to look at the IO load on the existing DBs and run HammerDB or something similar to be sure you're getting better performance on the new system.

    I like local backups on a separate drive, but make sure you have a process to copy them off. This gives you quick local restores for "oops" issues, plus a copy elsewhere for real DR.
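
    If it helps, the local-then-copy pattern can be as simple as something like this (the database name and path are placeholders), with a scheduled job pushing the backup folder to the other machine afterwards:

      -- Hypothetical sketch: back up locally for fast "oops" restores; a separate
      -- scheduled copy job then pushes the file to another machine for real DR.
      BACKUP DATABASE BigDB
      TO DISK = N'E:\Backups\BigDB_Full.bak'
      WITH COMPRESSION, CHECKSUM, STATS = 10;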

  • With them being local, physical disks, I think that setup should be good.

    As for your next question about the type of disk (SSD or other) and RAID, that really depends on your business requirements.  If current disk IO is not a concern, I would go with the same type of disk you are using currently and the same RAID setup (if any).  If current disk IO is causing a bottleneck and performance issues, I would look at getting faster disks and/or implementing RAID 0 or RAID 10.  Note that adding RAID will require more disks.

    The other thing is you mention having archive databases.  I would assume those need less performance, so you may want a mix of disk types: SSD for your log files and heavy-hitting database files, and 7200 RPM disks for the archive.  Based on the size of the databases, I would expect you want fast-ish disk for your backups, but there is not much point having the backup drive faster than the database drive.  You may get decent performance with a few 7200 RPM disks in RAID 10 for your backups.

    I would recommend RAID 10 over RAID 0 for performance (even though RAID 0 is a bit faster) because you get some fault tolerance.  RAID 0 has no fault tolerance, so if any drive dies, all the data is toast.  With RAID 10, if 1 disk fails, the array keeps running until you replace the disk.  I am not sure what your knowledge of RAID is, but the Wikipedia articles on RAID and nested RAID explain things quite well.

    As Steve suggests, running HammerDB will give you a good benchmark on overall performance improvements.  The only problem is that you first need the new system in place so you can run the tool on both the old one and the new one.  If it doesn't perform as well as you hoped, you may need to throw more money at it to get a faster CPU, more RAM, or faster disks.

    Therefore, I would look at your bottlenecks with SQL wait stats and Windows PerfMon first.  There is no point throwing a lot of money at the disk only to find out the CPU was spiking to near 100% constantly, or that SQL was using all the memory and paging to disk, or the opposite problem of the CPU sitting idle most of the day with RAM use only reaching 10-15%.  High CPU/memory use is bad because you take performance hits; idle CPU and RAM are bad because you spent money on something that didn't need improving.  Under memory pressure you will end up getting things paged to disk, and no matter how fast the disk is, it will always be substantially slower than RAM, so paging to disk will be very bad.

    Finding the bottleneck would be my first step, but finding out that disk was the bottleneck and putting all your eggs in that basket may not give the performance improvement you are looking for either.  For example, if you have an 8 core 2GHz processor, a bunch of really slow 5400 RPM disks, and 8 GB of DDR3 RAM, upgrading JUST the disk to an SSD will get you better read/write performance, but you are likely going to hit a bottleneck on both CPU and RAM as well.
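
    As a starting point on the wait-stats side, a rough sketch like this against sys.dm_os_wait_stats shows your top waits since the last restart (with a few of the usual harmless/idle wait types filtered out):

      -- Rough sketch: top waits since the last restart, ignoring some common
      -- benign/idle wait types.  The biggest waits hint at the real bottleneck.
      SELECT TOP (10)
             wait_type,
             wait_time_ms / 1000.0        AS wait_time_s,
             signal_wait_time_ms / 1000.0 AS signal_wait_s,
             waiting_tasks_count
      FROM sys.dm_os_wait_stats
      WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                              N'XE_TIMER_EVENT', N'CHECKPOINT_QUEUE',
                              N'REQUEST_FOR_DEADLOCK_SEARCH', N'WAITFOR',
                              N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP', N'BROKER_TASK_STOP')
      ORDER BY wait_time_ms DESC;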

    The above is all just my opinion on what you should do.
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • mikejs11 wrote:

    Drive 1: System drive

    Drive 2: Database drive for DB's with a lot of read/writes

    Drive 3: Database drive for other DB's and TempDB

    Drive 4: Log files

    Drive 5: Backups and other storage

    I would put TempDB on a separate but fast drive.  Since TempDB is rebuilt every time SQL Server restarts, it may even be worth using a less fault tolerant but fast option for it, like a RAM disk or a RAID 0 array.
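
    If you do give TempDB its own drive, moving it is just a metadata change plus a service restart; something like this (the logical file names below are the defaults, and T:\ is a placeholder path):

      -- Point TempDB at the new drive; takes effect after the next service restart.
      -- 'tempdev'/'templog' are the default logical names; T:\TempDB is a placeholder.
      ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'T:\TempDB\tempdb.mdf');
      ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'T:\TempDB\templog.ldf');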

    Don't forget to format all disks that will store SQL Server files with a 64KB cluster (allocation unit) size.  You can check an existing volume with "fsutil fsinfo ntfsinfo <drive>:" and look at "Bytes Per Cluster".
