Search for wicked fast direct attach storage

  • After working with and managing many different SANs (I'm not going to name them, but you can guess; they're among the top 5), I am sick and tired of dealing with networking changes and having an army of people lined up to add a shelf. I was very happy with FIO direct-attached storage, but scaling was a challenge, so we went to a SAN, and now I'm back in the market for another solution. Here are a few things I'm looking for:

    i) Should be SSD

    ii) Should be direct-attached with at least 4x 12 Gb/s connectivity; if NVMe, then better.

    iii) Should not involve any network gear (I know point ii already implies this; it just shows how much I do not want a SAN 🙂 )

    iv) Should scale to a reasonable capacity, i.e. around 1 PB, without sacrificing performance.

    I do understand that some of these things also depend on the server; I have decently sized physical servers (196 cores, 3 TB RAM, etc.). If any of you have implemented a solution similar to what I'm looking for, please let me know.

  • Have you looked at Pure Storage? They do stuff that honestly seems like magic.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Hmm, this somehow doesn't make sense to me; maybe you can clarify a bit:

    i) Should it be SSD or NVMe? Because in ii) you talk about NVMe.

    ii) A 4x 12 Gb/s fabric alone contradicts your statement in ii) and invalidates iii) completely, because a fabric always involves network gear.

    Secondly, for DAS (direct-attached storage) the highest interconnect speed you can get is SAS 12 Gb/s, whether you use SAS spindles, SATA HDDs, or NVMe storage; this corresponds to the SFF-8644 interface. OCuLink, which is used for NVMe U.2 drives (SFF-8639), would be faster, but I have not yet seen any DAS enclosure with such an external interconnect.

    So if you have a 2U chassis like a Supermicro CSE-216 or NetApp DS2246 with two SFF-8644 interfaces, usually one is for redundancy/expansion and the other provides the bandwidth. That means if you want to saturate 4x 12 Gb/s interconnects, you'll need at least four chassis (4-8 U), and you will need to be able to provide PCIe 3.1 host connectivity with x16 lanes from one port, or two ports if you're going for redundancy.

    Let's put it this way: I highly doubt you'll be able to provide 2x PCIe 3.1 x16 ports from your nice little server. Aside from that, I can get 50+ Gb/s host connectivity with InfiniBand, RDMA, and the like quite easily.
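
    As a rough back-of-the-envelope comparison of those numbers, here is a short Python sketch. The encoding overheads and "usable" figures are my own approximations (not figures from this thread), and real throughput also depends on HBAs, drive counts, and protocol overhead:

    # Rough usable-throughput comparison (illustrative only).
    # SAS-3 uses 8b/10b encoding; PCIe 3.x and InfiniBand FDR/EDR use 64b/66b-style encodings.
    GBPS = 1e9 / 8  # bytes per second per 1 Gb/s (decimal units)

    sas3_lane    = 12 * GBPS * (8 / 10)       # ~1.2 GB/s usable per SAS-3 lane
    sff8644_port = 4 * sas3_lane              # one external x4 port ~= 4.8 GB/s
    four_ports   = 4 * sff8644_port           # four x4 ports ~= 19.2 GB/s

    pcie31_x16 = 16 * 8 * GBPS * (128 / 130)  # 8 GT/s per lane, 128b/130b ~= 15.8 GB/s

    ib_fdr = 56 * GBPS * (64 / 66)            # InfiniBand FDR 4x ~= 6.8 GB/s
    ib_edr = 100 * GBPS * (64 / 66)           # InfiniBand EDR 4x ~= 12.1 GB/s

    for name, val in [("SAS-3 lane", sas3_lane),
                      ("SFF-8644 x4 port", sff8644_port),
                      ("4x SFF-8644 ports", four_ports),
                      ("PCIe 3.1 x16", pcie31_x16),
                      ("InfiniBand FDR 4x", ib_fdr),
                      ("InfiniBand EDR 4x", ib_edr)]:
        print(f"{name:18s} ~ {val / 1e9:5.1f} GB/s")

    The point being: four fully loaded x4 SAS ports (~19 GB/s) exceed what a single PCIe 3.1 x16 slot (~15.8 GB/s) can feed, which is why the host side needs more than one x16 port to actually saturate them.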

    You could go DAC (direct-attach cable) with InfiniBand, but at those cable prices you could just add an IB switch in between and serve more than one host with your fabric again. Unlike the DAS approach above based on SFF-8644, you would need some sort of computer inside your InfiniBand host, and you would likely expand its storage via DAS SFF-8644 storage boxes again. Currently, without going for any high-density storage boxes, that way you get around 360 TB of NVMe storage per 2U chassis (24x 15 TB). High-density configurations can have up to 90 drives in 4U, which I assume would be enough to reach your 1 PB target.
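
    To put the capacity side in numbers, a small sketch (raw capacity only, before any RAID or replication overhead; the drive size and bay counts are the examples from above, in decimal units):

    # Raw-capacity math for the ~1 PB target (illustrative; no redundancy overhead).
    import math

    target_tb = 1000   # 1 PB expressed in TB (decimal)
    drive_tb  = 15     # 15 TB drives, as in the example above

    layouts = {
        "2U, 24-bay chassis":              24,
        "4U, 90-bay high-density chassis": 90,
    }

    for name, bays in layouts.items():
        per_chassis_tb = bays * drive_tb
        chassis_needed = math.ceil(target_tb / per_chassis_tb)
        print(f"{name}: {per_chassis_tb} TB raw per chassis, "
              f"{chassis_needed} needed for ~1 PB raw")

    So a single 90-bay 4U box already covers 1 PB raw, while the 24-bay 2U route takes three chassis, and that is before accounting for any redundancy.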

  • You say it should scale to 1 PB, so this sounds like hardware that's pretty important to the company. And it sounds like you have something to do with data that's also pretty important to the company. You say...

    I am sick and tired of dealing with networking changes and having an army of people lined up to add a shelf

    That implies that either the company has a problem with realizing what's important or the infrastructure team does.  In either case, THAT is what needs to be fixed.  Everyone has to remember that they're actually working for the same team.

    For hardware, you don't just need one SAN... you need TWO so that if one catches fire, you can remove power from it and the other one should already be online and handling the load.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Adding to what Jeff said: preferably, the second SAN would be in a different DC; a fire won't conveniently stop just before it reaches the secondary SAN sitting right above the one burning down. 😉

    You could go vSAN or S2D and just have half of your hosts in a different DC, but that would require an amazing interconnect, or you could replicate your vSAN to a secondary DC asynchronously. Aside from that, you need more IOPS and bandwidth when you replicate storage in order to meet or exceed your requirements, so a 100k IOPS SSD will most likely deliver fewer IOPS when used in a vSAN configuration.
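
    As a very rough illustration of that write-amplification effect, here is a sketch with purely assumed numbers (the mirror factor and read/write mix are placeholders, not measured vSAN or S2D figures, and network latency is ignored entirely):

    # Illustrative only: front-end IOPS when every write is mirrored/replicated.
    raw_iops     = 100_000   # the "100k IOPS SSD" from the post (backend capability)
    read_share   = 0.7       # assumed 70/30 read/write mix
    write_copies = 2         # assumed 2-way mirror: each logical write costs 2 backend writes

    def effective_iops(raw, reads, copies):
        # Solve reads*x + (1 - reads)*x*copies = raw for the front-end rate x.
        return raw / (reads + (1 - reads) * copies)

    front_end = effective_iops(raw_iops, read_share, write_copies)
    print(f"~{front_end:,.0f} front-end IOPS from a 100k IOPS device "
          "(2-way mirror, 70% reads)")

    With those assumptions the device delivers roughly 77k front-end IOPS instead of 100k; heavier write mixes or three-way mirrors push it down further.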

  • I used and personally managed Pure Storage a few years ago, and I have to say their performance is pretty good. However, we walked away back then because, at least at the time, it didn't deduplicate inline and would only dedupe after the data had landed, and compression significantly reduced performance. I will say that if I had to go with a SAN, Pure would probably be my first choice.

  • Well, there is plenty of software-defined storage out there; as mentioned before, vSAN and Storage Spaces Direct (S2D) are some of the options, amongst others like StarWind. You don't need to go for a SAN again, but SDS brings its own implications, just as a SAN does.

  • curious_sqldba wrote:

    ..if NVMe, then better.

    I haven't seen any SAN with NVMe yet. If someone knows of a model with NVMe, please share it.

  • Netapp A800

    Dell PowerMax 2000 / 8000

    Dell also offers HCI-ready building blocks and Azure Stack HCI

    HPE Nimble

    Supermicro has a few options for SDS
