Restoring a db with more data files??

  • I have a database that I need to move to another server. The server admin is giving me 4 500gb drives. I need to restore the 1.2 gb db to multiple files across the 4 drives. Is this possible? For as long as I've been doing SQL, I've never tried this.

  • A restore recreates the DB exactly as it was at the time of backup. You can move the files around, you cannot create new files as part of a restore.
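
    For example, a hedged sketch of a restore that only relocates the files the database already contains (the database name, logical file names, and paths here are made up; check yours with RESTORE FILELISTONLY):

        -- See what logical files are in the backup
        RESTORE FILELISTONLY FROM DISK = N'X:\Backups\BigDb_full.bak';

        -- Relocate the existing files during the restore; you cannot add new ones here
        RESTORE DATABASE BigDb
        FROM DISK = N'X:\Backups\BigDb_full.bak'
        WITH MOVE N'BigDb_Data'  TO N'E:\SQLData\BigDb_Data.mdf',
             MOVE N'BigDb_Data2' TO N'F:\SQLData\BigDb_Data2.ndf',
             MOVE N'BigDb_Log'   TO N'G:\SQLLogs\BigDb_Log.ldf',
             STATS = 10;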

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Yeah, I pretty much knew that, but hoped maybe there was a way to expand the number of files. I'm not sure how I'm going to do this if I'm forced to use multiple drives.

    Looks like I'll have to take the two files the db exists in now and find a place to expand it to four files before moving it to the new server. It's going to be a painful move.

  • duke-347723 (3/16/2010)


    I have a database that I need to move to another server. The server admin is giving me 4 500gb drives. I need to restore the 1.2 gb db to multiple files across the 4 drives. Is this possible? For as long as I've been doing SQL, I've never tried this.

    How big are the current drives? 1.2Gb spread out across 2TB is quite a bit of space. Since the db is rather small, it would seem you could split it out into multiple files in the current locations and then perform the backup and restore with move file options.
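
    A hedged sketch of splitting the db into more files on the current server before taking the backup (file names, sizes, and paths are placeholders; repeat for as many files as you need):

        -- Add another data file to the PRIMARY filegroup on the source server
        ALTER DATABASE BigDb
        ADD FILE
        (
            NAME = N'BigDb_Data2',
            FILENAME = N'D:\SQLData\BigDb_Data2.ndf',
            SIZE = 200GB,
            FILEGROWTH = 10GB
        )
        TO FILEGROUP [PRIMARY];

        -- Then BACKUP DATABASE and RESTORE ... WITH MOVE on the new server,
        -- pointing each logical file at its own drive.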

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • CirquedeSQLeil (3/16/2010)


    duke-347723 (3/16/2010)


    I have a database that I need to move to another server. The server admin is giving me 4 500gb drives. I need to restore the 1.2 gb db to multiple files across the 4 drives. Is this possible? For as long as I've been doing SQL, I've never tried this.

    How big are the current drives? 1.2Gb spread out across 2TB is quite a bit of space. Since the db is rather small, it would seem you could split it out into multiple files in the current locations and then perform the backup and restore with move file options.

    This db is on the larger end for our environment. In fact, it's the largest. We've let the developers make a mess of the structure, too.

    Not big enough. That's why I'm exploring options. There are two 1.5 TB sets and a 500 GB set with little free space. I may have to see about temporarily attaching some SAN storage to accomplish the move.

    How would you go about creating four somewhat evenly divided db files so as to have free space in each?

  • tsduke03 (3/16/2010)


    CirquedeSQLeil (3/16/2010)


    duke-347723 (3/16/2010)


    I have a database that I need to move to another server. The server admin is giving me 4 500gb drives. I need to restore the 1.2 gb db to multiple files across the 4 drives. Is this possible? For as long as I've been doing SQL, I've never tried this.

    How big are the current drives? 1.2Gb spread out across 2TB is quite a bit of space. Since the db is rather small, it would seem you could split it out into multiple files in the current locations and then perform the backup and restore with move file options.

    This db is on the larger end for our environment. In fact, it's the largest. We've let the developers make a mess of the structure, too.

    Not big enough. That's why I'm exploring options. There are two 1.5 TB sets and a 500 GB set with little free space. I may have to see about temporarily attaching some SAN storage to accomplish the move.

    How would you go about creating four somewhat evenly divided db files so as to have free space in each?

    So this must be a 1.2TB database?

    I would start by adding one file at a time. I don't have the reference for this, but as memory serves, if you rebuild the clustered indexes for your tables, it will redistribute the data between the files in the filegroup. It won't release unused space, though, so you have to proceed cautiously and have a game plan for which tables to do and in what order.
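
    A hedged sketch of that, assuming the new files have already been added to the filegroup (table and index names are invented): rebuilding the clustered index rewrites the table's pages, and SQL Server's proportional fill spreads the new allocations across the files in the filegroup according to their free space.

        -- Rebuild one table at a time, largest first, watching free space as you go
        ALTER INDEX [PK_BigTable] ON dbo.BigTable
        REBUILD WITH (SORT_IN_TEMPDB = ON);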

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Sorry, all. My first post should have said 1.2 TB database.

  • You can definitely add more files to the Primary Filegroup without a problem. The issue is moving the data from the existing file to the new file.

    The best way to accomplish this is to rebuild your clustered indexes. Since a clustered index is the physical table, you just drop the index where it is and have it rebuilt on another file. Pick and choose your tables carefully so each file ends up being about the same size (or at least smaller in size than the space you have on the four drives). You might look into putting ReadOnly tables together to help with performance too.

    Now, I've never actually done this myself, so I can't tell you the T-SQL syntax to use, but I seem to recall that it's in Books Online. So do a little research beforehand.

    And if you have a bunch of heap (non-indexed) tables, add a temporary PK / Clustered Index to them so you can do this physical data move.

    Also, you can add multiple Filegroups with a file in each group, but the problem with this is that only one can be the Primary at a time and unless your other three groups are ReadOnly filegroups, this could cause you problems when trying to add new data. Again, do more research on this issue as I'm fuzzy on the particulars.
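
    A hedged sketch of the clustered index move described above (the table, key column, and filegroup names are invented; verify the exact syntax in Books Online, as noted):

        -- Recreating the clustered index ON a different filegroup physically moves
        -- the table's data there; DROP_EXISTING keeps the index definition intact.
        CREATE UNIQUE CLUSTERED INDEX [PK_BigTable]
        ON dbo.BigTable (BigTableID)
        WITH (DROP_EXISTING = ON, SORT_IN_TEMPDB = ON)
        ON [FG_Drive2];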

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • I was afraid this was going to be my only path. I don't know if it's worth it, or if I should convince the server admin to change the four 500 GB LUNs to one 2 TB drive.

  • You do realize you can get better performance out of your DB if these LUNs really are separate drives (physically speaking) and you apportion out your tables properly, right? This might actually be a boon for you.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • I have a better solution here, instead of going through complex steps:

    Create the empty target database with the file/filegroup design you want (spreading files across multiple drives).

    Copy all objects, etc.

    Then just use the Import/Export Wizard to move the data.

    For 1.2 TB of data it will not take more than 2 hours.

    You can even parallelize this import/export from multiple machines to reduce the data export time.
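
    A hedged sketch of what the empty target database might look like laid out across the four new drives (all names, paths, and sizes are placeholders):

        CREATE DATABASE BigDb
        ON PRIMARY
            ( NAME = N'BigDb_Data1', FILENAME = N'E:\SQLData\BigDb_Data1.mdf', SIZE = 300GB, FILEGROWTH = 10GB ),
            ( NAME = N'BigDb_Data2', FILENAME = N'F:\SQLData\BigDb_Data2.ndf', SIZE = 300GB, FILEGROWTH = 10GB ),
            ( NAME = N'BigDb_Data3', FILENAME = N'G:\SQLData\BigDb_Data3.ndf', SIZE = 300GB, FILEGROWTH = 10GB ),
            ( NAME = N'BigDb_Data4', FILENAME = N'H:\SQLData\BigDb_Data4.ndf', SIZE = 300GB, FILEGROWTH = 10GB )
        LOG ON
            ( NAME = N'BigDb_Log', FILENAME = N'E:\SQLLogs\BigDb_Log.ldf', SIZE = 50GB, FILEGROWTH = 5GB );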

  • That's actually a pretty good idea. The only thing to keep in mind, though, is that it will potentially stripe the data across the files or shove them all in file 1 instead of keeping whole tables together but putting them on different files.

    And if you create PKs on the objects ahead of time, the import / export wizard won't work because of the whole Identity Insert problem.

    Actually, the more I think about it, this solution is just about as much work as the rebuilding of indexes.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Oh, absolutely I do. But managers seem to think the time it takes to get stuff delivered is more important than doing things right and the quality of the work. Drives me nuts. I spend ten times the effort afterwards because of this poor philosophy. So far they have given me a small downtime window to move test to prod, which is about 3.5 TB of databases.

    The 500 GB LUNs are spread across two meta LUNs now. Even with one drive in Windows, they'd be using the same number of sets. I wonder what the performance difference actually would be. Besides performance, future disk expansion is the other reason we wanted to go with 500 GB as the size.

    We just need to go make our case and hope for the best. There are some big *** tables in this 1.2 TB database, so clustered index rebuilds will take some time. Too bad there isn't a "spread the love" command to evenly distribute the data across the files. :hehe:

    I appreciate the feedback! I've lurked in here a long time, just not posted much.

  • I've thought of the import option, but I would think it would take more than a couple of hours when it includes populating the indexes as well. And the whole PK/FK issue would be a PITA.

  • Brandie Tarvin (3/17/2010)


    You can definitely add more files to the Primary Filegroup without a problem. The issue is moving the data from the existing file to the new file.

    The best way to accomplish this is to rebuild your clustered indexes. Since a clustered index is the physical table, you just drop the index where it is and have it rebuilt on another file.

    That process is at the filegroup level. Thus he would need to create more filegroups for that method.

    Otherwise, the rebuild of a clustered index will spread the data out amongst files in the same filegroup.
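
    A small hedged sketch of the filegroup route (filegroup and file names are hypothetical):

        -- Create a secondary filegroup with a file on one of the new drives, then
        -- move tables into it by recreating their clustered indexes ON that filegroup.
        ALTER DATABASE BigDb ADD FILEGROUP [FG_Drive2];

        ALTER DATABASE BigDb
        ADD FILE
        (
            NAME = N'BigDb_FG2_Data',
            FILENAME = N'F:\SQLData\BigDb_FG2_Data.ndf',
            SIZE = 200GB,
            FILEGROWTH = 10GB
        )
        TO FILEGROUP [FG_Drive2];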

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events
