Hi, I'm inheriting a series of databases that have grown beyond 100GB but still have only a single datafile each. I've been tasked with breaking them up into more manageable chunks and improving access performance. To add to the mystery, this is SharePoint on a server with a couple dozen content DBs.
Let's start with the WSS_Content_A database: 100GB in size, one datafile.
If I simply add a second datafile and rebuild the indexes, almost nothing happens; I believe only about 8GB moved over to the new datafile.
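For reference, this first attempt is just the straightforward version (logical file names, paths, and sizes below are placeholders, and AllDocs is just one example table):

    -- Add a second datafile (goes to the PRIMARY filegroup by default)
    ALTER DATABASE WSS_Content_A
    ADD FILE
    (
        NAME = N'WSS_Content_A_2',
        FILENAME = N'E:\Data\WSS_Content_A_2.ndf',
        SIZE = 50GB,
        FILEGROWTH = 1GB
    );

    -- Then rebuild indexes, table by table, e.g.:
    ALTER INDEX ALL ON dbo.AllDocs REBUILD;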
If I add two new data files with the correct sizes and use DBCC SHRINKFILE('WSS_Content_A', EMPTYFILE), I wind up with two evenly matched data files, with proportional fill distributing data cleanly across them. BUT... fragmentation is nearly 100% afterwards. This process takes about 3 hours, and then I spend additional time rebuilding indexes. Further, I'm left with that pesky initial datafile, which isn't going anywhere. Not ideal.
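Roughly what that second attempt looks like (again, names, paths, and sizes are placeholders; 'WSS_Content_A' here is the logical name of the original datafile):

    -- Add two new datafiles, pre-sized to hold the data
    ALTER DATABASE WSS_Content_A
    ADD FILE (NAME = N'WSS_Content_A_2', FILENAME = N'E:\Data\WSS_Content_A_2.ndf', SIZE = 50GB);
    ALTER DATABASE WSS_Content_A
    ADD FILE (NAME = N'WSS_Content_A_3', FILENAME = N'E:\Data\WSS_Content_A_3.ndf', SIZE = 50GB);

    -- Push the contents of the original datafile out into the new files
    DBCC SHRINKFILE (N'WSS_Content_A', EMPTYFILE);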
If I instead add one new (temporary) datafile the size of the original and use the same EMPTYFILE command to push all of the contents into it, I can then shrink the original datafile to half its original size and create a new (third) datafile sized to match the shrunken original. Running EMPTYFILE on the temporary/intermediate datafile then lets proportional fill do a bang-up job of evenly distributing the writes across the two remaining files while, interestingly, introducing no fragmentation along the way. When it's complete I simply run ALTER DATABASE ... REMOVE FILE on the temp file, and I have two datafiles that seemingly accomplish all of my goals. (Rough T-SQL for this sequence is below.)
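Here's roughly the sequence I'm running for that third approach, assuming a 100GB database; logical names, paths, and sizes are placeholders:

    -- 1. Add a temporary datafile the size of the original
    ALTER DATABASE WSS_Content_A
    ADD FILE (NAME = N'WSS_Content_A_temp', FILENAME = N'E:\Data\WSS_Content_A_temp.ndf', SIZE = 100GB);

    -- 2. Empty the original datafile into it
    DBCC SHRINKFILE (N'WSS_Content_A', EMPTYFILE);

    -- 3. Shrink the original datafile down to half its former size (target in MB)
    DBCC SHRINKFILE (N'WSS_Content_A', 51200);

    -- 4. Add the second permanent datafile, sized to match the shrunken original
    ALTER DATABASE WSS_Content_A
    ADD FILE (NAME = N'WSS_Content_A_2', FILENAME = N'E:\Data\WSS_Content_A_2.ndf', SIZE = 50GB);

    -- 5. Empty the temporary datafile; proportional fill spreads the data across the two permanent files
    DBCC SHRINKFILE (N'WSS_Content_A_temp', EMPTYFILE);

    -- 6. Remove the now-empty temporary datafile
    ALTER DATABASE WSS_Content_A REMOVE FILE WSS_Content_A_temp;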
Veterans, do you have any comments/wisecracks/suggestions? If I use two temporary datafiles instead of one, performance improves to the point that the whole exercise takes about 6 hours at 100GB. This has all been done on a test server with 'old' E5420 CPUs and 16GB RAM; I'm hoping that Nehalems and Westmeres kick this up a notch.
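For what it's worth, here's the sort of check I've been using to see the per-file fill and fragmentation numbers mentioned above; the DMVs are standard, but the 10% filter is just my own cutoff:

    -- File sizes and space used, per data file
    SELECT  name AS logical_file,
            size / 128 AS size_mb,
            FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
    FROM    sys.database_files
    WHERE   type_desc = 'ROWS';

    -- Fragmentation across the database
    SELECT  OBJECT_NAME(ips.object_id) AS table_name,
            ips.index_id,
            ips.avg_fragmentation_in_percent
    FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    WHERE   ips.avg_fragmentation_in_percent > 10
    ORDER BY ips.avg_fragmentation_in_percent DESC;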