Disk sizes for SQL

I have 4 terabytes of data and need to spread it among the following disks -

Is there a formula to determine how much space should be allocated to each?



    C: (RAID1) Boot & OS

    D: (RAID1) Pagefile


    E: (RAID10) SQL System DBs

    F: (RAID10) SQL Databases

    G: (RAID10) SQL Logs

    H: (RAID10) SQL Backups

According to the layout you show, it will all go on F:, as that is designated as SQL Databases, which is what holds the data.

What is the division of the 4 TB? Is it all database? Are the backups part of the 4 TB? Or is 4 TB simply the total space you have available, which you want to divide up among the volumes?

And was there a particular reason that all the SAN volumes were set up as RAID 10?

  • 4 TB for everything including backups.

Our total data is about 150 GB to date, but I want to make sure I have enough space everywhere.

I read some articles on RAID, and although some of these volumes could be RAID 5 or RAID 1, if I can have RAID 10, why not?


  • Raid 10 is good, and don't let anyone tell you otherwise!

It might be beneficial to consider an additional LUN for data, in case you ever wish to use multiple filegroups.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]

  • Our typical format is:

Backups Drive = 2 × Data Drive

    Logs Drive = ½ × Data Drive
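As an illustrative sketch only (the 4 TB pool size comes from this thread; the function and its rounding are my own), that rule of thumb can be turned into a quick allocation calculation:

```python
# Hypothetical sizing sketch based on the rule of thumb above:
#   Backups = 2 x Data, Logs = 0.5 x Data.
# Splitting one pool among Data, Logs, and Backups means
# Data + 0.5*Data + 2*Data = 3.5*Data, so Data = pool / 3.5.

def allocate(pool_gb: float) -> dict:
    data = pool_gb / 3.5
    return {
        "data_gb": round(data, 1),
        "logs_gb": round(data * 0.5, 1),
        "backups_gb": round(data * 2, 1),
    }

print(allocate(4096))  # a 4 TB pool
```

For a 4 TB pool that works out to roughly 1.17 TB data, 0.59 TB logs, and 2.34 TB backups.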

I don't argue that RAID 10 is great... I wish I could do that. I just wondered at the logic of it, as I usually get the "it's not cost effective" argument. If someone came up with an argument to dispute that, I would be interested in it.

And our strategy is to keep 7 days of backups on the backups drives. Generally, people do not restore yesterday's backup file but one from a couple of days back, especially once weekends are taken into consideration, and getting backup files back from tape is a hassle and takes too long.

Another thing to think about is how much slack space should be accounted for at any given time. Most of the time, people think about how much space is needed for the data files and how much is needed for the backups, but no one considers the space needed to keep the suspect database, restore a file from tape, and then restore it as the new database. When it comes time to actually restore a backup while keeping the suspect DB and your current backups, you'll need free space on the backups drive, where the file is dumped back from tape, and on the data drive to restore it into, or you'll be shuffling files to make it happen.
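That worst-case free-space check can be sketched roughly as follows (the 150 GB figures are example numbers borrowed from earlier in the thread; the function and its simple "backup file ≈ database size" assumption are mine):

```python
# During a restore-from-tape you may simultaneously need:
#   - the suspect database still on the data drive,
#   - room on the data drive for the restored copy,
#   - the backup file pulled back from tape on the backups drive,
#   - plus the 7 days of backups already retained there.

def restore_slack_needed(db_gb: float, daily_backup_gb: float,
                         days_kept: int = 7) -> dict:
    # retained backups plus the extra file dumped back from tape
    backups_drive = days_kept * daily_backup_gb + daily_backup_gb
    # suspect DB plus the restored copy, side by side
    data_drive = 2 * db_gb
    return {"backups_drive_gb": backups_drive, "data_drive_gb": data_drive}

print(restore_slack_needed(db_gb=150, daily_backup_gb=150))
```

With a 150 GB database, that's 1200 GB of headroom on the backups drive and 300 GB on the data drive at the moment of restore.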

First off, I don't know the size or details of the drives, so I can't say what is truly best. I also included Reporting Services databases in case you are using that service.

C: and D: as RAID 1 looks right; however, I would do it like so:


    C: (RAID1) Boot & OS ?? Pagefile

    D: (RAID1) Applications

?: Pagefile -- consider just using C:, or if you want to splurge, buy a solid-state drive (RAM drive).


E: (RAID1) SQL System DBs and their logs (Master, Model, Msdb, ??ReportServer) -- Not many transactions and few changes. Maybe RAID 5, but these should have the fewest transactions of all and very few lookups.

F: (RAID5) SQL System DBs (TempDB, ReportServerTempDB) -- These are scratchpads, more or less, and are rebuilt at startup. Most likely best as RAID 10, but this doesn't need to be a very big array.

F: (RAID5) SQL Databases/File Groups (low-transaction DBs/filegroups, static-table DBs/filegroups, ??ReportServer) -- Things with few changes, where you're not as concerned with performance; this also wastes the fewest resources while still giving solid DR. Not RAID 1 though, as these may see a high number of reads.

G: (RAID10) SQL Databases/File Groups (high-transaction DBs/filegroups)

H: (RAID5) SQL Logs -- These are written sequentially, so you really don't have the high random read/write scenario RAID 10 is aimed at.

?: (RAID5) SQL Logs -- Again, sequentially written; depending on how many DBs you have, you might want to split them onto several smaller arrays.

I: (RAID5) SQL Backups -- Maybe RAID 10, but that seems awfully expensive for file storage without a lot of transactions. You might even consider using a local RAID 5 array instead of the SAN, in case the SAN has to be replaced and you get cut off from your file backups.

RAID 5 has effectively a 75% overhead for writes; for pure reads it is OK. If you need performance, don't use RAID 5, simple as that. I cringe every time I see anyone suggest using RAID 5, especially for transaction logs - just check out the Microsoft recommendations. I've worked with performance database servers since SQL 6.0 and have, whenever possible, always used RAID 10. You must forget this space issue; it's spindles and I/Os which count - read the Microsoft SQL 2000 tuning book for how to do all the sums.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
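To make the write-overhead point above concrete, here is a back-of-the-envelope sketch using the classic textbook write-penalty factors (the spindle count and per-disk IOPS figures are made-up illustration numbers, not from this thread):

```python
# Classic per-write physical I/O multipliers:
#   RAID 0 = 1; RAID 1/10 = 2 (each write hits both mirrors);
#   RAID 5 = 4 (read data, read parity, write data, write parity),
#   i.e. 3 of every 4 I/Os are overhead -- the "75%" figure above.
WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid10": 2, "raid5": 4}

def usable_write_iops(spindles: int, iops_per_disk: int, level: str) -> float:
    """Approximate write IOPS the array sustains at 100% writes."""
    return spindles * iops_per_disk / WRITE_PENALTY[level]

# The same 8 spindles of 150 IOPS each:
print(usable_write_iops(8, 150, "raid10"))  # 600.0
print(usable_write_iops(8, 150, "raid5"))   # 300.0
```

Same hardware, half the usable write throughput on RAID 5 - which is why spindles and I/Os, not raw space, are what count for a write-heavy workload.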

I'm new to the SQL stuff and come from an Exchange background. I got some great performance improvements from taking the public folder databases and putting them on separate servers.


I think you should look at having separate volumes for each user DB and, if you can, look into separate DB servers for each DB if you are going to have a lot of reads and writes. For example, if replication is running and people are trying to read from another DB on the same server, you may get a lot of complaints.

I will concede I wasn't thinking straight on logs; I would split those again too: RAID 5 for low-transaction, RAID 10 for high-transaction.

I do know there are performance gains with RAID 10, but I also feel it is ridiculous to go overboard unnecessarily. RAID 5 for low-transaction items provides plenty of performance, especially if most of the time the data just sits there or is only being read. RAID 5 conserves resources, so you can put more of them to better use elsewhere.

But high-transaction workloads, with lots of writes relative to reads and a significant amount of time in use, really should be RAID 10. Also, say you have 10 GB drives and need 40 GB for the low-transaction stuff (leaving a window for growth, call it 70 GB): in RAID 10 that takes an array of 7 × 2 drives (140 GB of raw space, "wasting" 70), while in RAID 5 you use 7 + 1 drives (80 GB, with only one drive's worth lost to parity). The other 6 drives can then be used as a RAID 10 providing 30 GB of high-transaction space. But I would suggest keeping at least a couple of drives around collecting dust, or sitting in a test server, so you have them at a moment's notice if something does fail.

I worked in a group previously that felt that if hardware was lying around it needed to be in production, so each time a drive in our RAID 5 failed I was on edge (and pretty miffed), because it could take up to a week to get a tech out with a replacement drive. So please consider not committing everything you have just because you can.
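The usable-capacity arithmetic in the 10 GB drive example above can be sketched like this (the drive counts and sizes are the ones used in that example; the helper functions are my own):

```python
# Usable capacity for the 10 GB drive example above.
DRIVE_GB = 10

def raid10_usable(drives: int) -> int:
    # Mirrored pairs: half the raw capacity is usable.
    return drives // 2 * DRIVE_GB

def raid5_usable(drives: int) -> int:
    # One drive's worth of capacity is consumed by parity.
    return (drives - 1) * DRIVE_GB

# 70 GB of low-transaction space:
print(raid10_usable(14))  # 70 -> needs 14 drives (140 GB raw)
print(raid5_usable(8))    # 70 -> needs only 8 drives (80 GB raw)
# The 6 drives saved can form a RAID 10 for the high-transaction data:
print(raid10_usable(6))   # 30
```

Capacity is only half the picture, of course - as noted earlier in the thread, the write penalty still favors RAID 10 for anything write-heavy.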
