I saw a demo of this the other day from a Dell rep. He claimed that Compellent's system architect said that performance tuning these tiered SANs is "as easy as clicking checkboxes and then going to drink beer".
I had one question for the Dell rep, and he did not have an answer for me since he was not a SQL Server guy. I asked him how tools that use heuristics to automatically discover bottlenecks, like the PALTools
on Codeplex, could still work in this environment. Obviously, the PALTools could still show you non-disk bottlenecks (CPU, memory), but I am trying to wrap my mind around how they would report on disk bottlenecks.
I am still a "Junior DBA" (really, I am a software developer who was asked to be the performance guru on our SQL Servers), so I apologize if this question is naive or stupid.
The geek in me finds this tiered data storage thing fascinating.
I've not used the PALTools. For the most part, I use the tools provided by Microsoft (Perfmon, Performance Dashboard) and the various UIs provided by Compellent to evaluate system and SAN performance, so I can't offer you a comparison.
The Compellent UIs offer an almost overwhelming array of ways to view and evaluate SAN performance and their associated heuristics. You don't look at operations on a per-spindle basis. Instead, you look at operations in other terms: how many IOPS (I/O operations per second) your SAN can perform relative to how many it is actually executing, and how much data throughput is occurring relative to the SAN's maximum throughput in bytes per second, both on the front end and the back end. That tells you how the SAN is being used and where the stress points are.
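To make the "used vs. capacity" view above concrete, here is a minimal sketch. All of the numbers are made up for illustration; the real figures come from the Compellent UI and Perfmon counters, not from this code.

```python
# Minimal sketch of the "current vs. maximum" view described above.
# The numbers below are purely illustrative, not Compellent measurements.

def utilization(current: float, maximum: float) -> float:
    """Fraction of capacity in use -- works the same for IOPS or bytes/sec."""
    return current / maximum

# Hypothetical readings: 4,200 IOPS against a ~5,760 IOPS ceiling,
# and 180 MB/s against a 400 MB/s front-end throughput limit.
iops_pct = utilization(4200, 5760)
throughput_pct = utilization(180e6, 400e6)
print(f"IOPS: {iops_pct:.0%}, throughput: {throughput_pct:.0%}")
```

The point is simply that you tune against the ratio of what the SAN is doing to what it can do, on both axes, rather than watching individual spindles.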
What determines the maximum number of IOPS for your SAN? Primarily the number and rotational speed of the disks in each tier. For instance, if Tier 1 storage is 15K rpm Fibre Channel drives and you have 32 of them, that will be your limiting factor, since all data is written across all 32 spindles. With 64 spindles in Tier 1, the number of IOPS your SAN can handle doubles; with 16 spindles, it is cut in half (both relative to the 32-spindle maximum).
The same logic applies to Tier 2 or Tier 3 storage (usually 7,000+ rpm SATA or SAS drives), except that those disks rotate more slowly than 15K rpm disks, so the maximum number of IOPS is lower.
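As a back-of-the-envelope version of the spindle math above: the tier's IOPS ceiling scales linearly with spindle count because writes are striped across every spindle in the tier. The per-spindle figures below are common rules of thumb (roughly 180 IOPS for a 15K rpm drive, roughly 80 for a 7.2K rpm drive), not Compellent specifications; your drives may differ.

```python
# Rough per-tier IOPS ceilings -- illustrative rule-of-thumb figures only.
PER_SPINDLE_IOPS = {
    "15k_fc": 180,   # e.g. Tier 1: 15K rpm Fibre Channel
    "7k_sata": 80,   # e.g. Tier 3: 7.2K rpm SATA
}

def tier_max_iops(drive_type: str, spindles: int) -> int:
    """Aggregate IOPS ceiling for a tier: per-spindle rate times spindle
    count, since data is striped across all spindles in the tier."""
    return PER_SPINDLE_IOPS[drive_type] * spindles

print(tier_max_iops("15k_fc", 32))   # 5760
print(tier_max_iops("15k_fc", 64))   # 11520 -- double the spindles, double the ceiling
print(tier_max_iops("15k_fc", 16))   # 2880  -- half the spindles, half the ceiling
print(tier_max_iops("7k_sata", 32))  # 2560  -- same spindle count, slower disks
```

This is why adding spindles to a tier, not just faster disks, is one of the main levers for raising a tier's IOPS capacity.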
Additionally, the front end of the SAN is limited by the maximum number of packets your iSCSI connections can handle, and the back end is limited by the speed of its internal communications.
In our production environment, these technical aspects are rarely a concern because everything is so fast. The most important consideration is setting up Data Progression properly, so that rarely used data that was originally stored on Tier 1 is slowly migrated to Tier 3. This keeps your most frequently accessed data on your highest-speed tier and your least frequently accessed data on your lowest-speed tier.
And, of course, you must have efficient queries and appropriate indexes to support efficient use of your SAN. THIS is still the most important aspect of the system tuning I do. If your queries are causing clustered index scans of large tables, that may appear to be a disk bottleneck when in fact it is a database design problem, not a storage problem.
I can't say that tuning a Compellent SAN is as easy as drinking beer (that's pretty easy and comes naturally to me) but it's not rocket science either. It's pretty straightforward once you understand the concepts upon which the Compellent SAN is designed and operates.