• Steve Jones - SSC Editor (5/5/2015)


    I'm somewhat with Gail on this. It's not that I think it's harder to restore or recover, but it is slightly less readable and more confusing.

    For me it's less about the readability and more about the value.

    Yes, I could do this: I could split a database into multiple filegroups. But if the client then came to me and asked what benefit they would see, I'd be hard-pressed to give an answer.

    Because just putting tables on filegroup 1 and indexes on filegroup 2 is not going to improve performance. Splitting a DB for performance reasons requires that IO is the bottleneck, that the split actually spreads the load out, that the files have separate IO paths (or at least separate drives, not just slices of the same SAN LUN), and that just putting the entire DB onto SSD isn't a better solution. Tuning queries and adding memory also have to be considered.
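    To make that concrete, here's a minimal sketch of what a split with genuinely separate IO paths might look like. All database, filegroup, file, and drive names here are invented for illustration:

    ```sql
    -- Hypothetical layout: each filegroup's file sits on a physically
    -- separate IO path, not just different folders on the same SAN LUN.
    CREATE DATABASE SalesDB
    ON PRIMARY
        (NAME = SalesDB_data,    FILENAME = 'D:\Data\SalesDB_data.mdf'),
    FILEGROUP FG_Indexes
        (NAME = SalesDB_indexes, FILENAME = 'E:\Data\SalesDB_indexes.ndf')
    LOG ON
        (NAME = SalesDB_log,     FILENAME = 'F:\Logs\SalesDB_log.ldf');

    -- Placing an index on the second filegroup only helps if IO on the
    -- first drive was actually the bottleneck.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID)
        ON FG_Indexes;
    ```

    If D: and E: are just two slices of the same underlying LUN, this layout changes nothing about the physical IO.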

    And because splitting for easier restores is hard: it requires a good understanding of which tables are used together, to ensure that a partially restored DB is actually usable. It may also require longer log backup retention. If it's required, fine, but there has to be some analysis and work done first.
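    The restore sequence in question is a piecemeal restore, which looks roughly like this (database, filegroup, and backup file names are placeholders). The partially restored DB is only usable if everything the application needs lives in the filegroups brought online first:

    ```sql
    -- Piecemeal restore sketch: bring the primary and one critical
    -- filegroup online first; the rest of the DB can follow later.
    RESTORE DATABASE SalesDB
        FILEGROUP = 'PRIMARY', FILEGROUP = 'FG_Critical'
        FROM DISK = 'G:\Backups\SalesDB_full.bak'
        WITH PARTIAL, NORECOVERY;

    RESTORE LOG SalesDB
        FROM DISK = 'G:\Backups\SalesDB_log.trn'
        WITH RECOVERY;

    -- Tables on FG_Critical are now queryable; the remaining filegroups
    -- stay offline until restored with their own RESTORE ... FILEGROUP
    -- statements, which is where the longer log retention comes in.
    ```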

    Splitting for easier CheckDB isn't something I'd suggest, because I can restore elsewhere and run CheckDB, or run CheckTable in sets, without moving the DB around. And corruption should be so incredibly rare that I'm not going to base a design on the need to restore part of my DB because of it. Besides, there's always the page restore option if it's just data/index pages affected.
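    Both alternatives can be sketched like this (table names and file:page IDs are placeholders, not from any real system):

    ```sql
    -- Alternative 1: check the big tables in sets on different nights
    -- instead of running one monolithic CHECKDB.
    DBCC CHECKTABLE ('dbo.Orders')     WITH NO_INFOMSGS;
    DBCC CHECKTABLE ('dbo.OrderLines') WITH NO_INFOMSGS;

    -- Alternative 2: if only data/index pages are damaged, restore just
    -- those pages rather than a whole filegroup or database.
    RESTORE DATABASE SalesDB
        PAGE = '1:57, 1:202'
        FROM DISK = 'G:\Backups\SalesDB_full.bak'
        WITH NORECOVERY;

    RESTORE LOG SalesDB
        FROM DISK = 'G:\Backups\SalesDB_log.trn'
        WITH RECOVERY;
    ```

    Page restore only works for data/index pages, which is exactly the caveat above; damage to allocation or boot pages needs a bigger restore.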

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass