The reason for my post is to get others' opinions, thoughts, and any facts regarding the placement of SQL Server data and tlog files in a SAN infrastructure and monolithic Storage environment. I've searched and read a lot of different articles and technical docs on SANs and Storage (HP, Dell, IBM, Hitachi) but haven't found anything definitive that answers the questions I have - though, in fairness, it may be my fault for not fully understanding what I'm reading.
*caveat: I am not an expert in the workings of a SAN infrastructure and Storage management systems, but I have a reasonably good grounding in roughly how they work. I will apologise now if I have named any part of the SAN or Storage incorrectly, and I am happy for anyone to correct any mistakes if it helps with the learning and understanding of the answer(s) to my post. Thx 😀
The setup consisted of migrating 4 stand-alone virtual Servers (4GB Mem, dual CPU), each with a single SQL Instance, onto a powerful physical Clustered Server (2-node, Act/Pas - 128GB Mem, 64 Core) with multiple SQL 2008 R2 Instances. At the time, all of the Instances had light workloads and each had between 8 and 10 DBs, with the largest DB no bigger than 2GB. Additional DBs were likely to be added to the Instances over time, and it was expected that, as the web presence was developed further, the current DBs would grow both in size and in use; this had to be catered for in the design and implementation.
The new physical Server was connected to a high-end SAN fabric with tiered Storage - SSDs, 15K SCSI HDDs, 7K SATA HDDs. The Storage layer was presented as one large monolithic virtual array, from which the SAN Admins carved out storage-based LUNs to present to the Servers, on which we would create Windows volumes. Capacity was good - less than half of the storage array was being used, though this was being ramped up as they finished the migration from the old SAN fabric and storage.
So, technically, even if we separated the SQL data files from the tlog files onto their own separate volumes, at the storage level these files would still be spread across the same physical disks within the storage array. It's worth noting here that DB backups were sent off to a completely separate Storage Array.
Within the Server, we made use of Mounted Volumes (MV). Each Instance was given its own 1GB container MV, and within that we presented a further 50GB MV which housed the data and tlog files. As and when needed, based on collected performance and capacity metrics, we could present further MVs to separate out specific data and/or tlog files.
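To make the mounted-volume layout concrete, this is roughly how a DB would be created under an Instance's container MV. The paths, names, and sizes below are illustrative only, not our actual ones:

```sql
-- Hypothetical layout: the 1GB container MV is mounted at D:\SQL\INST1,
-- and the 50GB data/tlog MV is mounted beneath it at D:\SQL\INST1\Data01.
-- A later MV (e.g. D:\SQL\INST1\Log01) could take the tlog files if the
-- metrics justified separating them.
CREATE DATABASE SalesDB
ON PRIMARY
    ( NAME = SalesDB_data,
      FILENAME = 'D:\SQL\INST1\Data01\SalesDB.mdf',
      SIZE = 512MB, FILEGROWTH = 256MB )
LOG ON
    ( NAME = SalesDB_log,
      FILENAME = 'D:\SQL\INST1\Data01\SalesDB_log.ldf',
      SIZE = 256MB, FILEGROWTH = 128MB );
```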
What I wanted to understand, given that these files all share the same underlying storage, was the following - and I'm not looking at this from the durability perspective, just performance:
(1) How important, performance-wise, is it to separate out these files?
(2) Is there a benefit in having the data and tlog files on separate Windows volumes (ie, storage-based LUNs), given that all the data is spread across the whole of the storage array anyway?
(3) Will having separate volumes (ie, storage-based LUNs) mean that there are separate paths for the data to travel to and from the storage array through the SAN fabric? Will this help with performance, and if so, how?
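For context, this is the kind of performance metric collection I had in mind to judge whether separation is worthwhile - a standard per-file latency query against the `sys.dm_io_virtual_file_stats` DMV (available in SQL 2008 R2). The query itself is just a sketch of the approach, not the exact one we used:

```sql
-- Average read/write latency (ms) per database file since instance start.
-- The io_stall_* counters are cumulative, so for an interval view you
-- would sample twice and take the difference.
SELECT DB_NAME(vfs.database_id) AS db_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END   AS avg_read_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_write_ms DESC;
```

My thinking was that if the data and tlog files show healthy latencies while sharing a volume, the case for separating them onto their own LUNs is weaker.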
If anyone has any links that they wouldn't mind sharing, to help me understand this, that would be fantastic. 🙂
I hope this makes sense and I look forward to your thoughts and comments.
Thanks in Advance