We're currently undergoing an exercise to re-design the storage structure of our production DB. This system has been in place for 5+ years without any adjustments, and I'm looking for some feedback on what I've come up with so far. Thanks in advance for taking the time to read and/or respond!

Summary
We are wondering 2 things:
• Recommended number of files per filegroup
• Does our design make sense, or is it completely crazy and going to cause untold amounts of headache?
Design calls for a total of 16 files across 4 filegroups (spread over 2 pools of disks – SSD/SAS) as outlined below.
With that said, let the giant walls of text begin!

A quick overview
Windows Server 2008 Enterprise
SQL Server 2008 SP2 CU5 Enterprise
Dual socket, 8 cores each (X7560); 32 logical processors with hyperthreading
512 GB RAM
2 Node Active/Passive Cluster
IBM V7000, 8 x 300 GB SSDs in RAID 5
tempdb on SAS disks
Read/Write ratio roughly 50/50
On average 2,000 batches per second
DB size is between 1.6-1.8 TB (depending on retention rates for log tables)
DB layout: 4-5 filegroups with 1-2 files in each, all contained on a single volume (taking up the full RAID 5 SSD array mentioned above)

Our current issues
• Filegroups are a mess
• We are constantly battling to keep the DB small enough to fit on the SSD disks alone.
• We cannot leverage SAN FlashCopy due to the restriction of a 64 MB/s maximum copy speed per LUN/volume (we only have one!)
We’re also looking to move up to half or more of the DB ‘down’ to SAS, since it’s log tables that don’t need insanely fast read I/O.
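For context, a table can be "moved" down a tier by rebuilding its clustered index (which is the table data) onto a SAS-backed filegroup. A minimal sketch, where the filegroup name Tier2_Default, the table dbo.AuditLog, and its key columns are all assumptions:

```sql
-- Hypothetical example: rebuild a log table's clustered index onto a SAS filegroup.
-- Table, column, index, and filegroup names are made up for illustration.
CREATE CLUSTERED INDEX CIX_AuditLog
ON dbo.AuditLog (LogDate, LogID)
WITH (DROP_EXISTING = ON, ONLINE = ON)  -- ONLINE rebuilds require Enterprise Edition
ON [Tier2_Default];
```

With Enterprise Edition, the ONLINE = ON option keeps the table readable and writable during the move.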
OK, so now to the Visio picture below... it shows the Pool, Volume, LUN, and Filegroup layout, and with any luck gives a good sense of what we’re curious about.
So basically, we know we have 2 tiers/levels of storage: SSD and SAS. We also know that we want the base tables/clustered indexes in one filegroup and the non-clustered indexes in a second.
That gives us 2 storage tiers x 2 filegroups each, for a total of 4 filegroups (excluding PRIMARY).
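To illustrate the split, here is a rough sketch (table, column, and index names are invented; only the Tier1_Index filegroup name comes from the design) of how an object's data lands on the default filegroup while its non-clustered index is placed explicitly:

```sql
-- The clustered PK (i.e., the table data itself) lands on the default filegroup.
CREATE TABLE dbo.Orders (
    OrderID  int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
    Customer int NOT NULL,
    Placed   datetime2 NOT NULL
);

-- The non-clustered index is placed explicitly on the index filegroup.
CREATE NONCLUSTERED INDEX IX_Orders_Customer
ON dbo.Orders (Customer)
ON [Tier1_Index];
```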
We set the SSD tier as default, and name the filegroup as such.
** One note here: even though both the Tier1_Default and Tier1_Index filegroups sit on the same set of physical disks, we feel there is benefit in keeping the indexes separate. If we ever want to move them to a different set of disks later, we’re at least in a position to do so.
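In DDL terms the layout would look something like this (the database name MyDB is a placeholder, and the Tier2 filegroup names are assumptions extrapolated from the Tier1 naming):

```sql
ALTER DATABASE MyDB ADD FILEGROUP [Tier1_Default]; -- SSD: base tables / clustered indexes
ALTER DATABASE MyDB ADD FILEGROUP [Tier1_Index];   -- SSD: non-clustered indexes
ALTER DATABASE MyDB ADD FILEGROUP [Tier2_Default]; -- SAS: log-table data
ALTER DATABASE MyDB ADD FILEGROUP [Tier2_Index];   -- SAS: log-table indexes

-- New objects go to the SSD tier unless an ON clause says otherwise.
ALTER DATABASE MyDB MODIFY FILEGROUP [Tier1_Default] DEFAULT;
```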
So now that we have our 4 user-data filegroups, we need to decide how many files per filegroup.
Is there a recommended number of files per filegroup (some mathematical formula based on the number of cores, etc.)?
We understand that typically multiple files are used to spread out the load across different disks but in our case all these disks are the same ‘set’.
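As a starting point when untangling the current layout, the existing file-to-filegroup mapping can be pulled from the catalog views (both views exist in SQL Server 2008):

```sql
SELECT fg.name            AS filegroup_name,
       df.name            AS logical_file,
       df.physical_name   AS physical_file,
       df.size * 8 / 1024 AS size_mb     -- size is reported in 8 KB pages
FROM sys.filegroups fg
JOIN sys.database_files df
  ON df.data_space_id = fg.data_space_id
ORDER BY fg.name, df.name;
```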
However, we have an alternative reason as well (the 3rd issue above). We are using IBM V7000/Storwize SAN units and are looking to take advantage of FlashCopy technology. The problem is that it replicates on a per-LUN/volume basis, and the maximum throughput is a mere 64 MB/second.
So to combat this, we are looking to spread our data across a ‘high’ number of LUNs in order to take full advantage of our storage infrastructure.
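A sketch of what that might look like for one filegroup: four files, each on its own LUN/volume, so the 64 MB/s per-LUN FlashCopy ceiling applies to each file separately rather than once for the whole DB. Drive letters, paths, sizes, and the database name are all assumptions:

```sql
ALTER DATABASE MyDB ADD FILE
    (NAME = N'Tier1_Default_01', FILENAME = N'E:\Data\Tier1_Default_01.ndf', SIZE = 100GB),
    (NAME = N'Tier1_Default_02', FILENAME = N'F:\Data\Tier1_Default_02.ndf', SIZE = 100GB),
    (NAME = N'Tier1_Default_03', FILENAME = N'G:\Data\Tier1_Default_03.ndf', SIZE = 100GB),
    (NAME = N'Tier1_Default_04', FILENAME = N'H:\Data\Tier1_Default_04.ndf', SIZE = 100GB)
TO FILEGROUP [Tier1_Default];
-- SQL Server proportionally fills the files in a filegroup, so writes
-- (and therefore copy traffic) spread across the underlying LUNs.
```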