SQLServerCentral is supported by Red Gate Software Ltd.

Placement of SQL Server Data and TLog files in a SAN Infrastructure and Monolith Storage Environment
Posted Wednesday, April 2, 2014 3:35 AM


SSC Veteran


Group: General Forum Members
Last Login: Tuesday, July 15, 2014 5:09 AM
Points: 215, Visits: 1,356
Hi Everyone,

The reason for my post is to get others' opinions, thoughts, and any facts regarding the placement of SQL Server data and tlog files in a SAN infrastructure and monolithic storage environment. I've searched and read a lot of different articles and technical docs on SANs and storage (HP, Dell, IBM, Hitachi) but haven't found anything definitive that answers the questions I have - though, in fairness, that may be my fault for not fully understanding what I'm reading.

*caveat: I am not an expert in the workings of a SAN infrastructure and storage management systems, but I have a reasonably good grounding in roughly how they work. I will apologise now if I have named any part of the SAN and storage incorrectly, and am happy for anyone to correct any mistakes if it helps with the learning and understanding of the answer(s) to my post. Thx

The setup consisted of the migration of 4 stand-alone virtual Servers (4GB Mem, dual CPU) with single SQL Instances onto a powerful physical Clustered Server (2-node, Act/Pas - 128GB Mem, 64 Core) with multiple SQL 2008 R2 Instances. At the time, all of the Instances had light workloads and each had between 8 - 10 DBs, with the largest DB no bigger than 2GB. Additional DBs were likely to be added to the Instances over time, and it was expected that, as the web presence was developed further, the current DBs would grow both in size and in use, and this had to be catered for in the design and implementation.

The new physical Server was connected to a high-end SAN fabric with Tiered Storage - SSDs, 15K SCSI HDDs, 7K SATA HDDs. The Storage layer was presented as one large monolithic virtual array, from which the SAN Admins carved out storage-based LUNs to present to the Servers, with which we would create Windows volumes. Capacity was good - less than half of the storage array was being used, though this was being ramped up as they finished the migration from the old SAN fabric and storage.

So, technically, even if we separated the SQL data files from the tlog file onto their own separate volumes, at the storage-level these files would be spread across the same physical discs within the storage array - it's worth noting here that DB backups were sent off to a completely separate Storage Array.

Within the Server, we made use of Mounted Volumes (MV). Each Instance was given its own 1GB Container MV, and within that we presented a further 50GB MV which housed the data and tlog files. As-and-when needed, based on collected performance and capacity metrics, we could present further MVs to separate out specific data and/or tlog files.

What I wanted to understand, given that these files all share the same underlying storage, is the following - I'm not looking at it from the durability perspective, just performance...

(1) how important, performance-wise, is it to separate out these files?
(2) is there a benefit in having the data and tlog files on separate Windows volumes (ie, storage-based LUNs), given that all the data is spread across the whole of the storage array anyway?
(3) will having separate volumes (ie, storage-based LUNs) mean that there will be separate paths for the data to travel to and from the storage array through the SAN fabric, will this help with performance, and if yes, how?

If anyone has any links that they wouldn't mind sharing, to help me understand this, that would be fantastic.

I hope this makes sense and I look forward to your thoughts and comments.

Thanks in Advance
Post #1557381
Posted Wednesday, April 2, 2014 3:47 AM
SSCrazy

As far as I know, if you have different sets of LUNs assigned to different drives then it will be useful; otherwise, even though you have different drives, the I/O will be the same because they come from the same set of LUNs.

HTH


---------------------------------------------------
"There are only 10 types of people in the world:
Those who understand binary, and those who don't."
Post #1557387
Posted Wednesday, April 2, 2014 5:18 AM


SSC Veteran

free_mascot (4/2/2014)
As far as I know, if you have different sets of LUNs assigned to different drives then it will be useful; otherwise, even though you have different drives, the I/O will be the same because they come from the same set of LUNs.

HTH


Hi free_mascot, thx for reply...

If I understand you right, you're suggesting that the different LUNs are assigned to different physical drives (as in the physical storage discs - maybe as a RAID array). If this is the case, then it is immaterial: as per my post, the drives are not separately carved out at the physical storage-level. All physical drives are mapped (in groups, through Meta LUNs) and presented as 'one massive' virtual storage array. LUNs are then created from this virtual storage array pool and presented to any Server that needs Volumes, based on the size of volume required and not on spindle-count, with the ratio of 'one' Windows Volume mapped to 'one' LUN.


I hope this makes the setup I'm describing clearer.
Post #1557405
Posted Thursday, April 3, 2014 7:14 AM
Ten Centuries

Hi,

You have Tiered Storage - what process decides which storage is used? Does the SAN automatically move frequently-accessed data to SSDs, or do you have 3 classes of LUN (one for each storage tier)?

You would need to talk to your SAN guru to find out how each Physical Node of the new Cluster is connected to the SAN; if you have a true Switched Fabric, then you need to look at the distribution of the fibre channel switches.

For the rest, I agree with you. The individual physical disks, or spindles, are so far removed from the drive partition visible in the OS that fine-tuning the setup seems fruitless. I separate mdf/ldf/tempdb from the C: drive, to minimise the potential for consuming all space on the system drive, but apart from that I have so little influence over the performance of the SAN that further tweaking is not worth it.
Post #1557960
Posted Monday, April 14, 2014 3:14 AM


SSC Veteran

Andy sql (4/3/2014)
Hi,

You have Tiered Storage - what process decides which storage is used? Does the SAN automatically move frequently-accessed data to SSDs, or do you have 3 classes of LUN (one for each storage tier)?

You would need to talk to your SAN guru to find out how each Physical Node of the new Cluster is connected to the SAN; if you have a true Switched Fabric, then you need to look at the distribution of the fibre channel switches...


Hi Andy,

Thanks for your response and sorry it's taken so long for me to reply - I've been on holiday with no access to technology...it was magic.

With regards to the Tiered Storage, there's an algorithm within the SAN management system that runs every night and promotes/demotes blocks based on usage patterns, plus some 'pinning' that the SAN Admins do for particular blocks they manually tune. Whilst it is good in most cases, there are times when blocks get promoted/demoted too late/early, which can affect performance. But in the grand scheme of things, this is not so important in our case.
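As a rough illustration of the kind of nightly promote/demote pass described above - the real algorithm is vendor-specific, and the tier names, block IDs, and thresholds here are purely hypothetical - a toy version might look like this:

```python
# Toy sketch of a nightly tier promotion/demotion pass (illustrative only).
# Tier names, thresholds, and block IDs are hypothetical, not any vendor's API.

def retier(blocks, promote_threshold=1000, demote_threshold=10, pinned=frozenset()):
    """blocks: dict block_id -> {"tier": "7K"|"15K"|"SSD", "accesses": int}.
    Returns a new dict with hot blocks promoted one tier and cold blocks
    demoted one tier, skipping any block the admins have manually pinned."""
    order = ["7K", "15K", "SSD"]  # slowest to fastest tier
    result = {}
    for block_id, info in blocks.items():
        tier = info["tier"]
        if block_id not in pinned:
            idx = order.index(tier)
            if info["accesses"] >= promote_threshold and idx < len(order) - 1:
                tier = order[idx + 1]  # hot block: promote one tier up
            elif info["accesses"] <= demote_threshold and idx > 0:
                tier = order[idx - 1]  # cold block: demote one tier down
        result[block_id] = {"tier": tier, "accesses": 0}  # reset nightly counter
    return result
```

The promoted-too-late/too-early behaviour mentioned above falls out naturally from a scheme like this: the counters only get looked at once a night, so a block that suddenly becomes hot mid-day stays on slow storage until the next pass.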

As to talking to the SAN guru, this is where the problem lies...the SAN Admins just administer and are not really into the whys-and-wherefores of performance within the SAN environment. This is why I came here to try and reach out to some gurus, in the hope of getting opinions to help my understanding of how we should be looking to set up these environments, which are so different to the DAS setups and the 'old' SAN environments - and politics mean I cannot bring an outside SAN specialist in to help with this.

One thing I was looking at was how the paths to each of the Volumes (ie, the LUNs, as we map one LUN to each Windows Volume), from the HBAs through the SAN switch and into the storage area, can affect things. So, for instance, is it better, performance-wise, to have, say, mdf/ndf files on separate LUNs to the ldf files, because they route through different paths to the storage? I'm trying to understand whether these sorts of things matter, and it's difficult to test as I don't have sole access to a SAN environment to experiment with.

Hopefully, some SAN Guru will come across this thread and point me in the right direction.
Post #1561386
Posted Monday, April 14, 2014 3:52 PM
Ten Centuries

As your SAN LUNs span the same physical drives, there won't be much benefit in separating the files out. But there's nothing wrong with separating them, as it makes administration easier.

To begin with, you should have done I/O stress testing before moving to the new cluster, just to make sure it can handle the workload.

Also check the Storage Cache, the HBA Queue Depth, and whether SAN Zoning has been enabled. If the SAN storage LUNs are shared with other servers with different I/O patterns, it will lead to poor performance. You should have dedicated SAN storage attached to your SQL Server for good performance.

And the path the data travels to the storage won't be much different.

--
SQLBuddy

Post #1561683
Posted Tuesday, April 15, 2014 4:11 AM


SSC Veteran

Hi SQLBuddy,

Thanks for your input. Unfortunately, I don't think it has been of help. Please see my comments, below...

sqlbuddy123 (4/14/2014)

...
To begin with, you should have done I/O stress testing before moving to the new cluster, just to make sure it can handle the workload.
...

This is a large SAN environment that doesn't just service one Cluster, and it was already 'live' and running production workloads when I got involved. Capacity of the SAN and storage, both storage-wise and performance-wise, was good, running at 1/2 capacity as I mentioned in the OP, and all of the Instances had light workloads. An attempt to 'stress test' was made by another DBA, but it was evident that this was next to useless because the SAN Admins would not let him 'stress' the system - for obvious reasons...it was already in production. The opportunity to carry out solid testing had been missed at the beginning of the deployment of the SAN environment, and so all we had to go on was the performance metrics we could gather, and ensuring we captured and analysed baselines both before and after deployment of each addition to the environment to understand the effect.


sqlbuddy123 (4/14/2014)

...
Also check the Storage Cache, the HBA Queue Depth, and whether SAN Zoning has been enabled.
...

Info on these areas is out there, and the setup was configured based on that info. It's not so easy to find info on the questions I put in the OP.


sqlbuddy123 (4/14/2014)

...
If the SAN storage LUNs are shared with other servers with different I/O patterns, it will lead to poor performance.
...

As stated in my OP, LUNs were matched one-to-one with Windows Volumes. There was no sharing, and again this info is out there, so no problems here.


sqlbuddy123 (4/14/2014)

...
You should have dedicated SAN storage attached to your SQL Server for good performance.
...

Nope...as I mentioned in the OP, things have moved forward from the days of 'dedicated' SAN storage to a design where the Storage layer is made up of many disc LUNs pulled together as MetaLUNs and presented as one large monolithic Virtual Array, from which the SAN Admins carve out storage-based LUNs to present to the Servers, such as our SQL Cluster.


sqlbuddy123 (4/14/2014)

...
And the path the data travels to the storage won't be much different.
...

This is the bit I am interested in, but it hasn't helped me much in understanding the impact. Sorry.
Post #1561799
Posted Tuesday, April 15, 2014 4:57 AM
Ten Centuries

Difficult to say whether, performance-wise, it would be better to split the mdf/ndf files from the ldf files. It would depend on how the database is used (eg. mainly reads or mainly writes?). But the discussion here is more about SAN performance, and knowing where the bottlenecks are.

In the absence of a SAN guru, an alternative approach, also suggested by SQLBuddy, would be to run a stress test tool to get an idea of the real-world situation. Depending on whether you like your SAN Admin, you may want to check with them first!

The simplest, command-line tool I know is SQLIO from Microsoft:
http://www.microsoft.com/en-us/download/details.aspx?id=20163

Brent Ozar has a nice, concise intro to SQLIO:
http://www.brentozar.com/archive/2008/09/finding-your-san-bottlenecks-with-sqlio/

To more accurately test real-world MSSQL activity, try SQLIOSim (which replaced SQLIOStress)
http://support.microsoft.com/kb/231619

Or the slightly nicer GUI tool CrystalDiskMark
http://crystalmark.info/download/index-e.html#CrystalDiskMark

SQLIO should at least tell you if your LUNs have all been created equally, and also whether you can expect better performance by splitting the database files across multiple LUNs. With tiered storage, tests could get more difficult to interpret.
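If even the dedicated tools are off-limits, a very rough, tool-free comparison is to time synchronous writes of a scratch file on each candidate volume and compare the figures. This is only a sketch - the mount points in the example are placeholders, and it is no substitute for SQLIO's controlled random/sequential workloads:

```python
import os
import time

def write_throughput_mb_s(path, total_mb=64, block_kb=64):
    """Write total_mb of data to a scratch file in block_kb chunks, fsync,
    and return a rough MB/s figure for the volume holding `path`."""
    block = os.urandom(block_kb * 1024)
    n_blocks = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # force the data out to the volume, not just the cache
    elapsed = time.perf_counter() - start
    os.remove(path)            # clean up the scratch file
    return total_mb / elapsed

# Example (hypothetical LUN-backed mount points - compare the two numbers):
# print(write_throughput_mb_s(r"E:\MSSQL\scratch.tmp"))
# print(write_throughput_mb_s(r"F:\MSSQL\scratch.tmp"))
```

Markedly different numbers for two volumes would suggest the LUNs were not created equally; similar numbers would support the view that they all land on the same spindles anyway.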
Post #1561807
Posted Tuesday, April 15, 2014 6:03 AM


SSC Veteran

Hi Andy sql,

Andy sql (4/15/2014)

...
In the absence of a SAN guru, an alternative approach, also suggested by SQLBuddy, would be to run a stress test tool to get an idea of the real-world situation. Depending on whether you like your SAN Admin, you may want to check with them first!
...



I like my job, and my reputation as a good DBA, too much to risk bringing the company's SAN environment to its knees and costing it a lot of money ...it's a financial-based company and an outage would be in the 100's of thousands, and possibly millions, of £££££'s. So, sorry, but there is no way I would do such a thing. And for anyone else reading this - by all means try it at home, but please don't do this on a production system without preparing your CV!!!

As for SQLIO/SQLIOSim/IOmeter/etc., I'm fully aware of these tools and what to use them for...in fact, SQLIO was the tool another DBA used to try and 'stress' the SAN environment - which of course the SAN Admins (there are 3 of them and they're big and scary) would not allow. Because I don't have a SAN environment available to carry out my own tests, I was hopeful that a SAN guru would be able to help, and possibly point me to good-quality information to help answer my questions, especially Q.3.

Thanks anyway.

Post #1561828
Posted Tuesday, April 15, 2014 6:20 AM
Ten Centuries

There is no way that a single SQLIO exe, running on a single server, should be able to bring down your SAN!

If you know that you are not allowed to run a stress-test tool, then an alternative is to simply copy a large MDF file between partitions and measure how long it takes. Not exactly scientific, but it is a real-world test.
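A minimal sketch of that timed-copy check (the paths are placeholders - point src at a copy of a large file, never a live MDF):

```python
import os
import shutil
import time

def timed_copy(src, dst):
    """Copy src to dst and return (seconds, MB/s) - a crude real-world
    measure of the read path on one volume and the write path on another."""
    size_mb = os.path.getsize(src) / (1024 * 1024)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return elapsed, (size_mb / elapsed) if elapsed > 0 else float("inf")

# Example (hypothetical paths on two different LUN-backed volumes):
# secs, mbps = timed_copy(r"E:\copytest\big.bak", r"F:\copytest\big.bak")
```

Note that a plain file copy goes through the OS file cache, so repeating the copy in both directions and with files larger than RAM gives a fairer picture.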
Post #1561834