Changing Locations of DataFile

  • No work as I have it documented myself already 😉

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Perry Whittle (10/8/2012)


    No work as I have it documented myself already 😉

    Fairy Snuff;-)

  • Thanks Paul 😉

    Perry, if you have documented it, please attach it na ;-)

    ************************************
    Every Dog has a Tail !!!!! :-D

  • Perry, where are you???? 🙁

    Say na, what will happen if we do implement this???

    What is the use of doing this???????

    I want to send an email to my seniors so that they can give me a go-ahead for the same :w00t:

    ************************************
    Every Dog has a Tail !!!!! :-D

  • Hi

    The links above will do you just fine; post back if you're still stuck.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • runal_jagtap (10/11/2012)


    Perry, where are you???? 🙁

    Say na, what will happen if we do implement this???

    What is the use of doing this???????

    I want to send an email to my seniors so that they can give me a go-ahead for the same :w00t:

    Hi Runal,

    Do you currently have all your files on 1 physical disk on the server?

    The best setup for SQL is to have multiple disk drives. If this is a virtual server, then each virtual disk needs to be on a different volume on your back-end storage, if possible on different filer heads as well.

    We have our servers set up in the following way (they are virtual, so I know you may not have enough physical disks on your server to accomplish this):

    C: Drive - Operating System - (Filer01 - Volume 1)

    D: Drive - SQL Installation - (Filer02 - Volume 2)

    F: Drive - Page File - (Filer01 - Volume 3)

    G: Drive - System Database Data Files - (Filer01 - SQLVolume 1)

    H: Drive - System Database Log Files - (Filer02 - SQLVolume 2)

    I: Drive - User Database Data Files - (Filer02 - SQLVolume 3)

    J: Drive - User Database Log Files - (Filer01 - SQL Volume 4)

    K: Drive - Temp Database Data Files - (Filer01 - SQL Volume 5)

    L: Drive - Temp Database Log Files - (Filer02 - SQL Volume 6)

    This gives us the best possible disk I/O performance for the infrastructure we have....

    I am in the enviable position of also being a Storage and Virtual Infrastructure Administrator at our company, although my primary role is DBA.:-D

    The above drives must be separate disks, not partitions of a single physical disk, to get the best possible performance.

    The reason for splitting out the files like this is to give you the best possible disk I/O performance.

    You can simplify this if you don't have enough disks, but try to keep the tempdb files away from the user database files to give better I/O performance, if possible.

    If you only have 1 physical disk on your server, then moving the files will not really give you any performance gain; you need to be moving them onto separate disks.

    I hope the above is good enough to answer questions your manager may ask as to why you want to move these files.

    😉

    Paul
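    For reference, the file move itself is done with ALTER DATABASE ... MODIFY FILE. The database, logical file, and path names below are hypothetical placeholders; treat this as a sketch, not a script for your exact environment:

```sql
-- Sketch: relocate a user database's log file (names/paths are placeholders).
-- 1. Take the database offline.
ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- 2. Point SQL Server at the new location (the physical file must be
--    copied to J:\SQLLogs\ manually while the database is offline).
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, FILENAME = N'J:\SQLLogs\MyDB_log.ldf');

-- 3. Bring it back online.
ALTER DATABASE MyDB SET ONLINE;

-- tempdb is a special case: run MODIFY FILE only, then restart the
-- instance - tempdb is recreated at the new path on startup.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = N'K:\SQLTemp\tempdb.mdf');
```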

  • Hi Paul,

    thanks for providing the answer 🙂

    Currently we have 5 servers...

    For example..

    One server has 20 databases of almost 20GB each.. right now the tempdb files & user data files & log files are on the same server.....

    Same goes for the other servers...

    Now if I ask the team, they will provide me additional servers....:-D

    Should I ask for it & implement what you have written? 🙂

    ************************************
    Every Dog has a Tail !!!!! :-D
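    Before asking the team for anything, it may help to show them where everything currently sits. A quick sketch against the standard catalog view (nothing assumed here beyond SQL Server 2005 or later):

```sql
-- List every data/log file on the instance with its size and physical path
SELECT DB_NAME(database_id) AS database_name,
       type_desc,                  -- ROWS (data) or LOG
       name AS logical_name,
       physical_name,
       size * 8 / 1024 AS size_mb  -- size is stored in 8 KB pages
FROM sys.master_files
ORDER BY database_id, type_desc;
```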

  • Paul Clark-418949 (10/11/2012)


    C: Drive - Operating System - (Filer01 - Volume 1)

    D: Drive - SQL Installation - (Filer02 - Volume 2)

    F: Drive - Page File - (Filer01 - Volume 3)

    G: Drive - System Database Data Files - (Filer01 - SQLVolume 1)

    H: Drive - System Database Log Files - (Filer02 - SQLVolume 2)

    I: Drive - User Database Data Files - (Filer02 - SQLVolume 3)

    J: Drive - User Database Log Files - (Filer01 - SQL Volume 4)

    K: Drive - Temp Database Data Files - (Filer01 - SQL Volume 5)

    L: Drive - Temp Database Log Files - (Filer02 - SQL Volume 6)

    Problem is, all these volumes are carved out of one huge aggregate on each filer, are they not?

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • runal_jagtap (10/11/2012)


    Hi Paul,

    thanks for providing the answer 🙂

    Currently we have 5 servers...

    For example..

    One server has 20 databases of almost 20GB each.. right now the tempdb files & user data files & log files are on the same server.....

    Same goes for the other servers...

    Now if I ask the team, they will provide me additional servers....:-D

    Should I ask for it & implement what you have written? 🙂

    What works for one won't necessarily work for another. Are you using NetApp Filer storage technology or some other SAN device??

    You need to perform your own tests on your own systems and provide the spec that works best

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Perry Whittle (10/11/2012)


    Paul Clark-418949 (10/11/2012)


    C: Drive - Operating System - (Filer01 - Volume 1)

    D: Drive - SQL Installation - (Filer02 - Volume 2)

    F: Drive - Page File - (Filer01 - Volume 3)

    G: Drive - System Database Data Files - (Filer01 - SQLVolume 1)

    H: Drive - System Database Log Files - (Filer02 - SQLVolume 2)

    I: Drive - User Database Data Files - (Filer02 - SQLVolume 3)

    J: Drive - User Database Log Files - (Filer01 - SQL Volume 4)

    K: Drive - Temp Database Data Files - (Filer01 - SQL Volume 5)

    L: Drive - Temp Database Log Files - (Filer02 - SQL Volume 6)

    Problem is, all these volumes are carved out of one huge aggregate on each filer, are they not?

    I didn't want to confuse the issue too much, but no, there are multiple aggregates that all the VMFS volumes are spread over. When building a virtual SQL Server, I try to make sure the virtual disks are spread across volumes on separate aggregates, across separate filer heads.

    Our setup is, as you can imagine, quite a lot more complicated than the above, as our total server farm is almost 99% virtualised on ESXi5 with clustered NetApp back-end storage.

    I was trying to simplify things a bit, showing a single server with physical disks (or separate RAID groups), which will give you better performance.

    Obviously the absolute best disk I/O performance would come from a separate physical disk (or RAID group) for each physical SQL file, but you are going to have to make a trade-off somewhere.

  • Perry Whittle (10/11/2012)


    runal_jagtap (10/11/2012)


    Hi Paul,

    thanks for providing the answer 🙂

    Currently we have 5 servers...

    For example..

    One server has 20 databases of almost 20GB each.. right now the tempdb files & user data files & log files are on the same server.....

    Same goes for the other servers...

    Now if I ask the team, they will provide me additional servers....:-D

    Should I ask for it & implement what you have written? 🙂

    What works for one won't necessarily work for another. Are you using NetApp Filer storage technology or some other SAN device??

    You need to perform your own tests on your own systems and provide the spec that works best

    Hi Runal,

    I agree with Perry, what I have given you above is a very simplified version of best practice on our SQL Servers. But it doesn't give an overall view of our total infrastructure.

    In simple terms it's easier to talk about physical servers, where good practice would be to have a server with multiple RAID groups. If you substitute the volumes in the description of one of my servers for separate RAID groups, you will have a server with good disk I/O performance, which means disk I/O is less likely to be a bottleneck on the server.

    Perry's question about what storage appliance you are running is also very pertinent. From what you have asked, I'm assuming, maybe wrongly, that your servers are virtual?

    1.) If so, what virtualisation technology are you using? VMware, Microsoft Hyper-V?

    2.) What storage appliance are you using? NetApp or some other vendor?

    3.) How is your storage appliance connected to your virtual environment? iSCSI, NFS....

    4.) How is your storage carved up? i.e. how many aggregates, how many volumes on each aggregate, how many virtual machines on each volume?

    5.) Is your storage appliance committed to the virtual environment, or is it also used for shares etc.?

    These are just some of the questions you need to ask. I think it may be wise to get a good handle on exactly what you are trying to achieve.

    I think you need to ask yourself why do you want to move the files?

    Is it because you have seen a bottleneck in performance caused by high Disk IO?

    If this is the case, then just moving the files to somewhere else on the same RAID group won't really help...

    Here is a link to a Best Practice Article on TechNet that may give you a better understanding of disk configurations for SQL Server.

    You also need to answer the 5 questions above, and do a bit of research on the various aspects of your infrastructure.
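    To back the "is disk I/O actually the bottleneck?" question with numbers, one common check is the virtual file stats DMV. A sketch (figures are cumulative since the last instance restart, so interpret them with that in mind):

```sql
-- Per-file I/O stall times; consistently high average stalls on a file
-- suggest the disk underneath it is a bottleneck.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_read_stall_ms DESC;
```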

  • Paul Clark-418949 (10/12/2012)


    Perry Whittle (10/11/2012)


    Paul Clark-418949 (10/11/2012)


    C: Drive - Operating System - (Filer01 - Volume 1)

    D: Drive - SQL Installation - (Filer02 - Volume 2)

    F: Drive - Page File - (Filer01 - Volume 3)

    G: Drive - System Database Data Files - (Filer01 - SQLVolume 1)

    H: Drive - System Database Log Files - (Filer02 - SQLVolume 2)

    I: Drive - User Database Data Files - (Filer02 - SQLVolume 3)

    J: Drive - User Database Log Files - (Filer01 - SQL Volume 4)

    K: Drive - Temp Database Data Files - (Filer01 - SQL Volume 5)

    L: Drive - Temp Database Log Files - (Filer02 - SQL Volume 6)

    Problem is, all these volumes are carved out of one huge aggregate on each filer, are they not?

    I didn't want to confuse the issue too much, but no, there are multiple aggregates that all the VMFS volumes are spread over. When building a virtual SQL Server, I try to make sure the virtual disks are spread across volumes on separate aggregates, across separate filer heads.

    Our setup is, as you can imagine, quite a lot more complicated than the above, as our total server farm is almost 99% virtualised on ESXi5 with clustered NetApp back-end storage.

    I was trying to simplify things a bit, showing a single server with physical disks (or separate RAID groups), which will give you better performance.

    Obviously the absolute best disk I/O performance would come from a separate physical disk (or RAID group) for each physical SQL file, but you are going to have to make a trade-off somewhere.

    Ah, you've deviated away from the NetApp recommendation then 😉

    The last time I worked with a NetApp solution, I seem to remember they recommended you create one aggregate and allow NetApp's WAFL to handle the storage access.

    How do you present your LUNs to the VMs? Are they via iSCSI or FC?

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Perry Whittle (10/12/2012)


    Paul Clark-418949 (10/12/2012)


    Perry Whittle (10/11/2012)


    Paul Clark-418949 (10/11/2012)


    C: Drive - Operating System - (Filer01 - Volume 1)

    D: Drive - SQL Installation - (Filer02 - Volume 2)

    F: Drive - Page File - (Filer01 - Volume 3)

    G: Drive - System Database Data Files - (Filer01 - SQLVolume 1)

    H: Drive - System Database Log Files - (Filer02 - SQLVolume 2)

    I: Drive - User Database Data Files - (Filer02 - SQLVolume 3)

    J: Drive - User Database Log Files - (Filer01 - SQL Volume 4)

    K: Drive - Temp Database Data Files - (Filer01 - SQL Volume 5)

    L: Drive - Temp Database Log Files - (Filer02 - SQL Volume 6)

    Problem is, all these volumes are carved out of one huge aggregate on each filer, are they not?

    I didn't want to confuse the issue too much, but no, there are multiple aggregates that all the VMFS volumes are spread over. When building a virtual SQL Server, I try to make sure the virtual disks are spread across volumes on separate aggregates, across separate filer heads.

    Our setup is, as you can imagine, quite a lot more complicated than the above, as our total server farm is almost 99% virtualised on ESXi5 with clustered NetApp back-end storage.

    I was trying to simplify things a bit, showing a single server with physical disks (or separate RAID groups), which will give you better performance.

    Obviously the absolute best disk I/O performance would come from a separate physical disk (or RAID group) for each physical SQL file, but you are going to have to make a trade-off somewhere.

    Ah, you've deviated away from the NetApp recommendation then 😉

    The last time I worked with a NetApp solution, I seem to remember they recommended you create one aggregate and allow NetApp's WAFL to handle the storage access.

    How do you present your LUNs to the VMs? Are they via iSCSI or FC?

    😛

    For most of our VMs it is just one aggregate, as you are correct, that is NetApp best practice....

    But I wanted the separate RAID groups for the SQL volumes (I know WAFL shouldn't need you to do this, but we ran some tests in our test environment and found slightly better performance....)

    When we first set up our virtual environment we were using iSCSI LUNs, but as the technology got better we migrated to FC VMFS volumes.

    This was also when I started virtualising all our SQL Servers, which originally were physical clusters using iSCSI LUNs with SnapDrive and SnapManager for SQL. Very cool technology for physical SQL Servers, but a very big storage cost size-wise, as obviously the volume a LUN sits in is almost double the size of a VMFS volume, and bigger still if you have a high snapshot reserve. (Didn't explain that very well, but you no doubt get the idea.)

    We almost doubled our storage capacity overnight going from iSCSI to FC, and using VMFS rather than LUNs. It didn't last long though. It's always a struggle trying to get more storage; people seem to think it's unlimited.

    Yes, of course you can have that big beefy server with 20 hex-core CPUs and God knows how much memory. I'll just run it on fresh air, shall I? lol :w00t: 😀

  • Paul Clark-418949 (10/12/2012)


    😛

    For most of our VMs it is just one aggregate, as you are correct, that is NetApp best practice....

    But I wanted the separate RAID groups for the SQL volumes (I know WAFL shouldn't need you to do this, but we ran some tests in our test environment and found slightly better performance....)

    When we first set up our virtual environment we were using iSCSI LUNs, but as the technology got better we migrated to FC VMFS volumes.

    This was also when I started virtualising all our SQL Servers, which originally were physical clusters using iSCSI LUNs with SnapDrive and SnapManager for SQL. Very cool technology for physical SQL Servers, but a very big storage cost size-wise, as obviously the volume a LUN sits in is almost double the size of a VMFS volume, and bigger still if you have a high snapshot reserve. (Didn't explain that very well, but you no doubt get the idea.)

    We almost doubled our storage capacity overnight going from iSCSI to FC, and using VMFS rather than LUNs. It didn't last long though. It's always a struggle trying to get more storage; people seem to think it's unlimited.

    Yes, of course you can have that big beefy server with 20 hex-core CPUs and God knows how much memory. I'll just run it on fresh air, shall I? lol :w00t: 😀

    I know what you mean, users think that not only is the storage unlimited but so are the host resources.

    You're not using Raw device mappings for any of your SQL Servers then?

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Perry Whittle (10/12/2012)


    Paul Clark-418949 (10/12/2012)


    😛

    For most of our VMs it is just one aggregate, as you are correct, that is NetApp best practice....

    But I wanted the separate RAID groups for the SQL volumes (I know WAFL shouldn't need you to do this, but we ran some tests in our test environment and found slightly better performance....)

    When we first set up our virtual environment we were using iSCSI LUNs, but as the technology got better we migrated to FC VMFS volumes.

    This was also when I started virtualising all our SQL Servers, which originally were physical clusters using iSCSI LUNs with SnapDrive and SnapManager for SQL. Very cool technology for physical SQL Servers, but a very big storage cost size-wise, as obviously the volume a LUN sits in is almost double the size of a VMFS volume, and bigger still if you have a high snapshot reserve. (Didn't explain that very well, but you no doubt get the idea.)

    We almost doubled our storage capacity overnight going from iSCSI to FC, and using VMFS rather than LUNs. It didn't last long though. It's always a struggle trying to get more storage; people seem to think it's unlimited.

    Yes, of course you can have that big beefy server with 20 hex-core CPUs and God knows how much memory. I'll just run it on fresh air, shall I? lol :w00t: 😀

    I know what you mean, users think that not only is the storage unlimited but so are the host resources.

    You're not using Raw device mappings for any of your SQL Servers then?

    No, we're not; all our VMs use virtual disks from NFS stores on the ESXi5 hosts.

    All we've done is put the SQL NFS stores on separate aggregates. We are in a fairly good position where we can create big enough aggregates to have enough spindles for the performance gain.

    If we didn't, I would have left WAFL to deal with it and just had the one aggregate for all, so that there were plenty of spindles for performance. In a scenario where I criss-crossed the virtual disks, in our environment where we have two clustered filer heads, I would have two aggregates:

    Data Files - Aggr1 Filer01

    Log Files - Aggr2 Filer02

    Temp Data Files - Aggr2 Filer02

    Temp Log Files - Aggr1 Filer01

    The above is actually, in simple terms, how we have our test environment SQL Servers set up, as we don't have enough spindles there to gain any performance.

    I shouldn't have started this, should I....:w00t:

    I hate simplifications; if you over-simplify, it's wrong lol.... :hehe:

    You're totally right about the resources on the hosts as well. We tried to set up a VM cost sheet here with a basic cost for a standard server:

    1 CPU,

    4GB RAM,

    50GB Disk,

    With vRanger weekly backup,

    Windows Server 2008 R2

    Base cost £1500.00

    Then extra:

    +1 CPU - £200.00

    +1GB RAM - £100.00

    +10GB Disk - £100.00

    So every time a new server is authorised, a cost for the VM goes against the project. When the time comes around that we need an extra host or more disk shelves, we should in theory have the money that was signed off as part of all the projects......

    In theory this seemed like a good idea, in practice the money appears to be virtual as well. :w00t:
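    As a quick sanity check of the chargeback arithmetic above, a hypothetical 2-CPU, 8GB RAM, 100GB disk VM, using the quoted figures, works out like this:

```sql
-- VM chargeback sketch: base covers 1 CPU, 4GB RAM, 50GB disk (£1500.00)
DECLARE @cpus int = 2, @ram_gb int = 8, @disk_gb int = 100;

SELECT 1500.00                              -- base cost
     + (@cpus - 1)            * 200.00      -- extra CPUs
     + (@ram_gb - 4)          * 100.00      -- extra GB of RAM
     + ((@disk_gb - 50) / 10) * 100.00      -- extra disk in 10GB increments
       AS vm_cost;                          -- = 2600.00
```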
