To answer that question, my experience has been that the more spindles you can involve, the better the performance. A lot of SAN folks will argue against that, but I'm not sure why, because it actually makes a lot of sense.
To wit, I normally try to separate MDF and LDF files onto their own physical sets of spindles, as well as setting up TempDB on its own set of spindles. If I can, I'll set it up so that the MDF/NDF files of TempDB are on separate spindles from the LDF files but, no matter what, I try to put TempDB on its own drive(s) so I can configure it differently than all of the others.
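As a minimal sketch of that last step, here's how moving TempDB to its own drives looks in T-SQL. The logical names (tempdev, templog) are the defaults, and the drive letters T: and L: are hypothetical stand-ins for your dedicated data and log spindles:

```sql
-- Point TempDB's data and log files at dedicated drives
-- (T: and L: are hypothetical; use your own drive letters/paths).
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'L:\TempDB\templog.ldf');
-- The new locations take effect the next time the instance restarts.
```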
Of course, as a sidebar, that's not always possible with the ridiculously sized hard disks they have now. It was so much easier to right-size and get more spindles involved when disk sizes were much smaller. For example, I just can't see dedicating a 300GB drive to the LDF files of a system that won't grow to more than 600GB across multiple databases. Using more, smaller disks was a little tougher on electricity and cooling, but it also allowed for faster disk replacement if one went bad, because the system didn't have to rebuild so much.
RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing set-based code: stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
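To make that concrete, here's a small sketch using Python's stdlib sqlite3 (the table and the 5% raise are made-up for illustration). The RBAR version asks "what do I do to each row"; the set-based version asks "what do I do to the column", and says it in one statement:

```python
import sqlite3

def make_db():
    # Hypothetical table: employees with a department and a salary.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, dept INTEGER, salary REAL)")
    conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                     [(1, 10, 100.0), (2, 10, 200.0), (3, 20, 300.0)])
    return conn

# RBAR: loop over the rows, one UPDATE per row.
rbar = make_db()
for (emp_id,) in rbar.execute("SELECT id FROM emp WHERE dept = 10").fetchall():
    rbar.execute("UPDATE emp SET salary = salary * 1.05 WHERE id = ?", (emp_id,))

# Set-based: one UPDATE describes what happens to the whole column.
setb = make_db()
setb.execute("UPDATE emp SET salary = salary * 1.05 WHERE dept = 10")

# Both arrive at the same result; the set-based form is one statement
# and lets the engine decide how to touch the rows.
assert (rbar.execute("SELECT salary FROM emp ORDER BY id").fetchall()
        == setb.execute("SELECT salary FROM emp ORDER BY id").fetchall())
```

The difference matters far more at scale: the loop pays per-row overhead on every iteration, while the single statement hands the whole set to the engine at once.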
Although they tell us that they want it real bad, our primary goal is to ensure that we don't actually give it to them that way.
Although change is inevitable, change for the better is not.
Just because you can do something in PowerShell, doesn't mean you should.

Helpful Links:
How to post code problems
How to post performance problems
Forum FAQs