SAN Performance Question - Utilizing LUNs

  • That makes sense. Thanks for the explanation. I'll stay away from the recommendation our SAN guy is making about utilizing LUNs. Better safe than sorry.

    Dave

  • EMC says that for performance they recommend scaling out instead of using dedicated disks

  • You wouldn't happen to have an EMC link that mentions this?

  • Well put, Matt - there is some sort of mystique about SANs, especially put about by certain vendors, over how the performance of disk arrays is suddenly different on a SAN. I've been working with SAN storage since it first started - at least 6 or 7 years anyway.

    In a nutshell, shared LUNs will bring you grief - it takes time to educate users and (some) vendors to this way of thinking. I figure there's confusion between zoning and sharing - but that's another matter.

    Top point for EMC SANs, as their engineers/consultants seem to miss this: use Diskpar from Microsoft to align the partitions - according to my sources the performance effect is a 25% - 30% difference (i.e. better). The other thing is to configure buffers/queue depth on your HBAs and switches - there are no hard and fast rules on this, but it needs to be higher than a file server setting. It goes without saying you should use redundant networks with load-balanced HBAs - so ideally you want 2 pairs of HBAs in your server, split between buses and going through two separate switches to your storage.

    You might want to check out Hitachi Data Systems for white papers; although it's their storage, they have some nice docs (sorry, no links currently to hand). They're also the most switched-on company and most approachable (my personal experience). I've had some very informative discussions with some of their guys (UK) and I'm grateful for their time explaining stuff to me. SNIA courses are good as they give you an overview - best week I ever spent on a course, and as I'm a contractor I have to pay for my own training and I don't get paid when I do a training course!!
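For readers curious about the alignment arithmetic the Diskpar recommendation above is getting at, here's a minimal sketch. The 64 KB stripe unit and the old 63-sector Windows partition offset are illustrative assumptions, not anyone's documented defaults:

```python
# Sketch of the partition-alignment arithmetic behind tools like Diskpar.
# A partition whose starting offset is not a multiple of the array's stripe
# unit forces some I/Os to straddle two stripe units (two disk operations
# instead of one), which is where the reported 25-30% hit comes from.

def is_aligned(partition_offset_bytes: int, stripe_unit_bytes: int) -> bool:
    """True if the partition start falls on a stripe-unit boundary."""
    return partition_offset_bytes % stripe_unit_bytes == 0

# Classic problem case: old Windows default of 63 sectors * 512 bytes
legacy_offset = 63 * 512          # 32,256 bytes
stripe_unit = 64 * 1024           # 64 KB stripe unit (an assumed, common value)

print(is_aligned(legacy_offset, stripe_unit))   # False - misaligned
print(is_aligned(1024 * 1024, stripe_unit))     # True - a 1 MB offset aligns
```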

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • DBADave (2/29/2008)


    You wouldn't happen to have an EMC link that mentions this?

    You can search for the white paper on their site; I forget where it is.

    Or PM me with your email and I'll send it to you. I'm pretty sure I have a copy on my laptop.

  • They sent me some white papers and I'm going to have a call with an EMC SQL rep tomorrow.

    Thanks

  • This is a great thread.

    Everyone has seen the pain point of a SAN or system admin designing storage by size instead of by IO, disk latency, bytes transferred, and workload.

    We have tray one, LUN 1, disks 1-4, RAID 1+0, 8 ms response time; SQL transaction logs.

    Someone created LUN 2 on the same disks as RAID 5. They wanted to move a heavily used table, with a lot of page splits and millions of records inserted a day, onto LUN 2. What would happen to LUN 1 performance? As long as the two don't compete for the same disk time, nothing. But if they both pound the disks at the same time, LUN 1, which held the transaction log and saw sequential writes, now sees random writes.

    You need to know disk utilization, and whether you can load-balance your processes across the timeline so that they don't have to access the disks at the same time.

    It's the art of performance tuning. That's what they forget to tell you.

  • Are you saying you have a RAID 10 and RAID 5 running at the same time on the same set of disks? Is that even possible?

  • DBADave (3/7/2008)


    Are you saying you have a RAID 10 and RAID 5 running at the same time on the same set of disks? Is that even possible?

    In the same LUN - yes, it's possible, when the LUN is a "virtual" LUN made up of several slices. It's possible to have the slices be different RAID levels.

    Notice I didn't say "smart". It's incredibly stupid IMO. Nothing like having your SQL Server operate like a car with water in its gas line.....

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • You say virtual LUN, so does this mean the RAID is a software RAID and not hardware? I always thought a set of disks could only have one hardware RAID, but I can understand how it would be possible to have multiple software-based RAIDs on the same set of disks.

    Dave

    No - a virtual LUN is an aggregation of several "physical LUNs". A physical LUN is (more or less) tied to a single physical RAID group.

    It's the same thought as a locally-attached "expanded volume". As in - if you have a volume directly attached, and it starts running out of disk space, you can attach an entirely new disk set, create a new RAID group, and "append" it to the existing volume. Your OS volume is then composed (in the background) of multiple physical RAID groups. Each RAID group has its own RAID level, but nothing requires that both RAID groups be of the same RAID level.

    Like I mentioned before: a LUN is composed of one or more slices of space from one or more RAID groups. Because the various slices might come from different groups of disks, one slice might be RAID 10 and another might be RAID 5.

    Hardware vs. software RAID only has to do with the controller - each disk set can still only have one RAID level. What CONTROLS the group might be hardware or software.
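The slice idea above can be sketched as a toy data model. The names, sizes, and disk counts here are made up for illustration - this is not any vendor's API:

```python
# Toy model of "a virtual LUN is made of slices from several RAID groups".
# Each RAID group has exactly one RAID level; the virtual LUN that aggregates
# slices from different groups can therefore span different RAID levels.

from dataclasses import dataclass

@dataclass
class RaidGroup:
    name: str
    raid_level: str          # one level per physical disk group
    disks: list

@dataclass
class Slice:
    group: RaidGroup         # which RAID group this slice was carved from
    size_gb: int

rg1 = RaidGroup("RG1", "RAID 1+0", ["d1", "d2", "d3", "d4"])
rg2 = RaidGroup("RG2", "RAID 5",   ["d5", "d6", "d7", "d8", "d9"])

# The virtual LUN's pieces sit on different RAID levels - possible, if unwise.
virtual_lun = [Slice(rg1, 100), Slice(rg2, 200)]

print(sorted({s.group.raid_level for s in virtual_lun}))  # ['RAID 1+0', 'RAID 5']
print(sum(s.size_gb for s in virtual_lun))                # 300
```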

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • You can create some very complicated LUNs on a SAN. I might have 2 disks from tray 1,

    5 disks from tray 3, and

    3 disks from tray 7

    to make a RAID 1+0, and those disks are also used by other LUNs. That's why it's sometimes hard to figure out where the performance problems are.

    Here's a better example.

    I have a 4-drive RAID 1+0. I create 4 LUNs of 36 GB apiece, have 4 DBs, and put each transaction log on its own LUN.

    I have four databases that batch-process data:

    db 1 from 1:00 am to 2:00am.

    db 2 from 3:00 am to 4:00am.

    db 3 from 5:00 am to 6:00 am

    db 4 from 8:00 am to 11:00 am

    Everything has an 8 ms response time.

    After that they do nothing; users may run some reports throughout the day.

    The DBs can have their transaction logs on the same set of disks - no contention.

    If I have this:

    db 1 from 1:00 am to 6:00am.

    db 2 from 3:00 am to 4:30am.

    db 3 from 3:30 am to 6:00 am.

    Here you may start seeing anywhere from an 8 ms response time to a 100 ms response time.

    You can see the access pattern is heavy from 3:30 am to 4:30 am. All 3 DBs are using their corresponding LUNs, which are mapped to the same disks, at the same time.

    It all depends on all the factors.
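The busy-window reasoning above can be sketched in a few lines. Times are minutes since midnight, taken from the two schedules in the post; the latency figures are the poster's observations, not something computed here:

```python
# Sketch of the scheduling point above: LUNs carved from the same disks only
# contend while their busy windows overlap in time.

def overlap(a, b):
    """Minutes during which two (start, end) busy windows overlap."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

# First schedule: staggered windows -> the disks are never shared in time.
staggered = {"db1": (60, 120), "db2": (180, 240), "db3": (300, 360), "db4": (480, 660)}
# Second schedule: db1 runs 1:00-6:00 and the other windows pile on top of it.
piled_up = {"db1": (60, 360), "db2": (180, 270), "db3": (210, 360)}

def contention(schedule):
    """Map each overlapping pair of DBs to their shared busy minutes."""
    names = sorted(schedule)
    return {(x, y): overlap(schedule[x], schedule[y])
            for i, x in enumerate(names) for y in names[i + 1:]
            if overlap(schedule[x], schedule[y]) > 0}

print(contention(staggered))   # {} - no pair ever touches the disks together
print(contention(piled_up))    # every pair overlaps, worst from 3:30 to 4:30
```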

  • It's the same thought as a locally-attached "expanded volume". As in - if you have a volume directly attached, and it starts running out of disk space, you can attach an entirely new disk set, create a new RAID group, and "append" it to the existing volume.

    I want to make sure I am on the same page, or at least in the same book :). In your last reply you referred to attaching an entirely new disk set. I understand the multiple-RAID configuration in that case, because you have the original physical disks associated with one RAID and the new physical disks associated with another, yet all part of the same volume. What I thought was said earlier is that you can have 6 disks in a set, with all 6 being defined by LUN #1 as a RAID 10, and LUN #2 then created off of the same set of disks but defined as RAID 0. Therefore you'd have two RAID configurations on the same physical disks. Is this what you are saying is possible? If so, I can understand why it could create a large amount of overhead on some systems, based purely on the spindle movement trying to service two RAIDs across the same physical disks. The seek time would not be good.

    Thanks again, Dave

  • DBADave (3/7/2008)


    I want to make sure I am on the same page, or at least in the same book :). In your last reply you referred to attaching an entirely new disk set. I understand the multiple-RAID configuration in that case, because you have the original physical disks associated with one RAID and the new physical disks associated with another, yet all part of the same volume. What I thought was said earlier is that you can have 6 disks in a set, with all 6 being defined by LUN #1 as a RAID 10, and LUN #2 then created off of the same set of disks but defined as RAID 0. Therefore you'd have two RAID configurations on the same physical disks. Is this what you are saying is possible? If so, I can understand why it could create a large amount of overhead on some systems, based purely on the spindle movement trying to service two RAIDs across the same physical disks. The seek time would not be good.

    Thanks again, Dave

    No - what I'm saying is that "Virtual LUN #1" is made up of slices (meaning a portion, but not all, of the disk space), some of which are from RAID group #1 (which is RAID 10) and some of which are from RAID group #2 (which is RAID 5).

    Remember - a common school of thought in SAN config is that LUNs don't usually start out being "given" all of the disk space in a given RAID group, so you get slices of those RAID groups.

    A RAID group (i.e. a group of physical disks) can only have one RAID level at any given time. If you want to reorganize a RAID 1+0 into RAID 5, the RAID group gets deleted and you start over.

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • Now I got it. Thanks.

Viewing 15 posts - 16 through 30 (of 31 total)