• Jeff Moden (3/17/2014)


    First, having the 80 files won't degrade performance. It won't enhance it, but it won't degrade it.

    Too many data files can degrade performance. What counts as too many depends on a bunch of factors.

    See http://www.sqlskills.com/blogs/paul/benchmarking-do-multiple-data-files-make-a-difference/, where adding files improved performance up to 8 files, then degraded it after that point.

    In fact, it could reduce backup times if your automation checks to see whether there have been inserts, updates, or deletes and backs up only those files that have actually experienced a change.

    That's feasible with lots of filegroups, not so much with lots of files in a small number of filegroups. It can greatly complicate restores, especially if the backups are purely based on changes. Log backups will have to be retained for long periods (from the least recently backed-up file right up to the present) to allow for a restore.
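
    As a rough sketch of what the change-based approach would involve (CustomerDB, Customer042 and the paths are made-up names, not anything from the original post), a filegroup-level backup plus the log backups that have to be kept around to make it restorable look something like this:

        -- Back up only the filegroup that has changed (hypothetical names)
        BACKUP DATABASE CustomerDB
            FILEGROUP = 'Customer042'
            TO DISK = 'D:\Backups\CustomerDB_Customer042.bak';

        -- Log backups must be retained from the oldest filegroup backup onwards
        BACKUP LOG CustomerDB
            TO DISK = 'D:\Backups\CustomerDB_log.trn';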

    It will also make dropping a customer easier in the future if the system was set up to tolerate just dropping a file.

    If the database had been set up with a small number of files per filegroup and one customer per filegroup, then yes (kinda, you still have to empty the filegroup before dropping the files), but in this case, with 40 files per filegroup across two filegroups, that's not going to be the case.
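
    For the one-customer-per-filegroup design, dropping a customer would look roughly like this (hypothetical names again; the file and filegroup can only be removed once they are empty, so the customer's tables or partitions have to be dropped or moved off the filegroup first):

        -- First drop or move the tables that live on the customer's filegroup,
        -- then remove the (now empty) file and filegroup
        ALTER DATABASE CustomerDB REMOVE FILE Customer042_Data;   -- logical file name
        ALTER DATABASE CustomerDB REMOVE FILEGROUP Customer042;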

    Last but not least, it will also allow for "piecemeal" restores if that was a consideration in the grand design of things.

    Again, if there were lots of filegroups, yes. Less useful when there's loads and loads of files in a single filegroup as all files in a filegroup have to be restored for the filegroup to be available.
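
    For completeness, a piecemeal restore sketch (same made-up names as above): the primary filegroup comes online first, then each filegroup's files are restored as a unit before it becomes available.

        -- Bring PRIMARY online first
        RESTORE DATABASE CustomerDB
            FILEGROUP = 'PRIMARY'
            FROM DISK = 'D:\Backups\CustomerDB_primary.bak'
            WITH PARTIAL, NORECOVERY;

        -- Restore the wanted filegroup (all of its files) as a unit
        RESTORE DATABASE CustomerDB
            FILEGROUP = 'Customer042'
            FROM DISK = 'D:\Backups\CustomerDB_Customer042.bak'
            WITH NORECOVERY;

        -- Roll forward with the retained log backups and recover
        RESTORE LOG CustomerDB
            FROM DISK = 'D:\Backups\CustomerDB_log.trn'
            WITH RECOVERY;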

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass