To be totally honest, this will likely be a futile exercise.
First, unless you can guarantee that the MDF files will be located on separate physical disks, there will be little, if anything, to be had in the way of performance gains.
Second, even if such a thing IS guaranteed, there may still be little or nothing to gain unless you plan how the data is split across the separate MDF files according to how that data is actually used.
Third, if you bring other physical disks into play and SQL Server isn't the only thing using those disks, you could actually end up slowing SQL Server down, A LOT!
Last but not least, if you have SSDs, it's not going to matter so much. It'll turn out that most of what you've done is just add complexity.
There are reasons, such as "Online Piecemeal Restores", to have multiple MDF files but performance usually isn't one of them.
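If you do end up with multiple files, here's a rough sketch of the kind of layout that makes an "Online Piecemeal Restore" possible. Every name, drive letter, and size below is invented just for illustration, and fully online piecemeal restores need the right edition (Enterprise).

-- Rough sketch only... every database name, path, and size here is made up.
-- The separate filegroup is what lets you restore PRIMARY first and bring
-- the rest of the database online later.
CREATE DATABASE SandboxDB
ON PRIMARY
    ( NAME = SandboxDB_Primary
    , FILENAME = 'C:\SQLData\SandboxDB_Primary.mdf'
    , SIZE = 512MB, FILEGROWTH = 256MB )
,   FILEGROUP Archive
    ( NAME = SandboxDB_Archive
    , FILENAME = 'D:\SQLData\SandboxDB_Archive.ndf'
    , SIZE = 512MB, FILEGROWTH = 256MB )
LOG ON
    ( NAME = SandboxDB_Log
    , FILENAME = 'E:\SQLLogs\SandboxDB_Log.ldf'
    , SIZE = 256MB, FILEGROWTH = 128MB );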
But, don't listen to me... in fact, don't listen to anyone on this subject. Instead, do a test because "One good test result is worth a thousand expert opinions". You might end up saving yourself a whole lot of work that has little or no ROI. If it DOES result in some sort of wild increase in performance, you should write an article about what you did so others can test it and then, maybe, implement what you've done.
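To put a number on "do a test", something as simple as the following before-and-after timing will usually tell you whether the file shuffle bought you anything. The table name is just a placeholder... point it at your own workload and run the same query against both layouts.

-- dbo.YourBigTable is a placeholder name.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT COUNT_BIG(*)
  FROM dbo.YourBigTable;   -- run the identical query against each layout

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;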
Shifting gears a bit to something that you SHOULD listen to, here's my favorite bit of advice. It's my mantra, in fact: "Performance is in the code... or not". If you're having performance issues, determine which code is actually causing the problem and fix it. That takes more time than the sometimes silly hardware and file "fixes" that people propose, but it's usually MUCH more effective and has a MUCH higher ROI. Fixing your top 10 code problems will typically make your server run much more efficiently.
And, remember... it's not usually your longest running queries that are the performance issue. It's usually the cumulative effect of queries that run tens of thousands of times per hour.
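One way to find those "death by a thousand cuts" queries is to sort the plan cache by total cost instead of single-run duration. This is only a sketch using the standard DMVs... the numbers reset whenever a plan leaves cache or the instance restarts, so treat them as a sample rather than the whole truth.

-- Top 10 statements by cumulative CPU, not by longest single run.
SELECT TOP (10)
        qs.execution_count
      , qs.total_worker_time  / 1000 AS total_cpu_ms
      , qs.total_elapsed_time / 1000 AS total_elapsed_ms
      , SUBSTRING(st.text
                 , qs.statement_start_offset/2 + 1
                 , (CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset
                    END - qs.statement_start_offset)/2 + 1) AS statement_text
  FROM sys.dm_exec_query_stats qs
 CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
 ORDER BY qs.total_worker_time DESC;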
An example of that cumulative effect: we had a query that ran in "only" 100 ms (the "only" is in quotes because, for what it did, I thought that was horrible). The problem was that it would recompile each and every time it ran, and it did run tens of thousands of times per hour. The recompile time varied from 2 seconds to 22 SECONDS (that's NOT a typo... SECONDS!!!) EVERY TIME IT RAN!
When we fixed the query so that it wouldn't recompile every time it was executed, it also started running in just several milliseconds.
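If you suspect you have one of those serial recompilers hiding somewhere, the plan cache can help you find it, too. This is only a rough pointer: a plan_generation_num that keeps climbing on a statement with a big execution_count is a strong hint, though an Extended Events session on recompiles will give you the definitive answer.

-- Statements whose plans have been regenerated the most times.
SELECT TOP (10)
        qs.plan_generation_num      -- climbs each time the plan is recompiled
      , qs.execution_count
      , st.text AS batch_text
  FROM sys.dm_exec_query_stats qs
 CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
 ORDER BY qs.plan_generation_num DESC;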
RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
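To make that concrete, here's the same job done both ways. All of the table and column names are invented purely to contrast the two styles.

-- RBAR: a loop that asks what to do to THIS row.
DECLARE @OrderID INT;
DECLARE curOrders CURSOR LOCAL FAST_FORWARD FOR
    SELECT OrderID FROM dbo.Orders WHERE ShippedDate IS NULL;
OPEN curOrders;
FETCH NEXT FROM curOrders INTO @OrderID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders
       SET Status = 'Late'
     WHERE OrderID   = @OrderID
       AND OrderDate < DATEADD(dd, -30, GETDATE());
    FETCH NEXT FROM curOrders INTO @OrderID;
END;
CLOSE curOrders;
DEALLOCATE curOrders;

-- Set Based: one statement that says what happens to the column
-- for every row that qualifies.
UPDATE dbo.Orders
   SET Status = 'Late'
 WHERE ShippedDate IS NULL
   AND OrderDate < DATEADD(dd, -30, GETDATE());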
"If you think its expensive to hire a professional to do the job, wait until you hire an amateur."--Red Adair
"Change is inevitable... change for the better is not."
When you put the right degree of spin on it, the number 3|8 is also a glyph that describes the nature of a DBA's job. 😉
How to post code problems
Create a Tally Function (fnTally)