Hmm, this doesn't quite make sense to me; maybe you can clarify a bit:
i) Should it be SSD or NVMe? Because in ii) you talk about NVMe.
ii) A 4x12 Gb/s fabric alone contradicts your statement in i) and invalidates iii) completely, because a fabric always involves network gear.
Secondly, for DAS (Direct Attached Storage) the highest interconnect speed you can get is SAS at 12 Gb/s per lane, whether you use SAS spindles, SATA HDDs or NVMe storage; that corresponds to the SFF-8644 (Mini-SAS HD) interface, which carries 4 lanes per port. OCuLink, which is used for NVMe U.2 drives (SFF-8639), would be faster, but I have not yet seen a DAS box with such an external interconnect.
So if you have a 2U chassis like a Supermicro CSE-216 or NetApp DS2246 with 2 SFF-8644 interfaces, usually one is for redundancy / expansion and the other provides the bandwidth. That means if you want to top out four 4x12 Gb/s interconnects, you'll need at least 4 chassis (4-8U of rack space), and you'll need to provide PCIe 3.1 host connectivity with x16 lanes from one port, 2 ports if you're going for redundancy.
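To sanity-check that bandwidth claim, here's a rough back-of-the-envelope sketch. The encoding overheads (8b/10b for SAS-3, 128b/130b for PCIe 3.x) are standard line-rate figures, but treat the result as approximate; protocol overhead will shave off more in practice:

```python
# SAS-3: 12 Gb/s raw per lane, 8b/10b encoding -> 9.6 Gb/s usable
SAS3_LANE_GBPS = 12 * (8 / 10)
# One SFF-8644 (Mini-SAS HD) port carries 4 lanes
SFF_8644_PORT_GBPS = 4 * SAS3_LANE_GBPS        # 38.4 Gb/s per port

# PCIe 3.x: 8 GT/s raw per lane, 128b/130b encoding -> ~7.88 Gb/s usable
PCIE3_LANE_GBPS = 8 * (128 / 130)
PCIE3_X16_GBPS = 16 * PCIE3_LANE_GBPS          # ~126 Gb/s for an x16 slot

# Four fully-driven SFF-8644 ports (one per chassis) vs. one x16 slot
four_ports_gbps = 4 * SFF_8644_PORT_GBPS       # 153.6 Gb/s
print(f"SFF-8644 port: {SFF_8644_PORT_GBPS:.1f} Gb/s")
print(f"PCIe 3 x16:    {PCIE3_X16_GBPS:.1f} Gb/s")
print(f"4 ports total: {four_ports_gbps:.1f} Gb/s")
```

So four saturated SFF-8644 ports already slightly exceed what a single PCIe 3 x16 slot can move, which is why you need a full x16 HBA port just to keep up with 4 chassis.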
Let's put it this way: I highly doubt you'll be able to provide 2x PCIe 3.1 x16 ports from your nice little server. Aside from that, I can get 50+ Gb/s host connectivity with InfiniBand, RDMA and so on quite easily.
You could go DAC (direct attach copper) with InfiniBand, but at those cable prices you could just add an IB switch in between and serve more than one host with your fabric again. Unlike the DAS approach above based on SFF-8644, you would need some sort of computer acting as your InfiniBand host, and you would likely expand its storage via SFF-8644 DAS boxes again. Currently, without going for any high-density storage boxes, that way you get around 360 TB of NVMe storage per 2U chassis (24x 15 TB). High-density configurations can have up to 90 drives in 4U, which I assume would be enough to reach your 1 PB target.
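The capacity figures work out as follows (decimal terabytes, and the 15 TB drive size is just the example from above; actual usable capacity after formatting and redundancy will be lower):

```python
import math

DRIVE_TB = 15                 # example NVMe U.2 drive size from above
BAYS_2U = 24                  # typical 2U 2.5" chassis
BAYS_4U_HD = 90               # high-density 4U top-loader

per_2u_tb = BAYS_2U * DRIVE_TB                 # 360 TB per 2U chassis
target_tb = 1000                               # 1 PB target (decimal)
drives_needed = math.ceil(target_tb / DRIVE_TB)  # raw drives for 1 PB

print(f"Per 2U chassis: {per_2u_tb} TB")
print(f"Drives for 1 PB: {drives_needed} (of {BAYS_4U_HD} bays in one 4U box)")
```

So 67 drives cover 1 PB raw, which indeed fits comfortably in a single 90-bay 4U high-density chassis, or in three 24-bay 2U chassis.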