How to split one MDF file into multiple files

  • Hi, I have a 2 TB database with one MDF of 1.9 TB and one LDF of 100 GB.

    Is there a way to split the MDF into multiple files? Like:

    1 MDF of 200 GB

    and 9 NDFs of 200 GB each?

    Or what are my possible options for splitting the database into multiple files?

    thanks

  • Personal advice: take a chainsaw and you know what to do.

    Now seriously: can't you move any historical records/data into a separate database?

    Or you can create multiple NDF files and move the tables/objects around as needed.

    Better yet, create separate filegroups and put the secondary NDF files into those filegroups...

    But the way to manage these is to figure out what is important and what is rarely accessed or probably dead data, then move it accordingly into NDFs and filegroups, indexes included.

    Can you shrink the database?

    Curious: how much time does it take to back up the one database?

  • Actually, it's a new server that was turned over to me, which has NO (you've read it right) backup at all.

    Looking forward, I'm planning to chunk that one .mdf into multiple files, or maybe multiple databases, so that backups take less time and performance improves.

    In this case, I would like to solicit additional info from anyone who has faced the same scenario or at least has a better idea...

    thanks..

  • You would have to create new files and then move the objects into those new files. BOL can help with this.

  • One option is to add additional files to the primary file group. SQL Server will automatically spread data pages around as they are rewritten, but you have no control over which objects are in which file. Actually, all objects end up being spread across all files. This makes the files smaller but doesn't change your backup/restore procedure. You should see better performance (with minimal administration effort), especially if the files are on independent drives.
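
    For example, a minimal sketch of this first option, assuming a hypothetical database named MyDB and an E: drive (all names and paths are placeholders):

    ALTER DATABASE MyDB
    ADD FILE
    (
        NAME = MyDB_Data2,                    -- logical file name
        FILENAME = 'E:\Data\MyDB_Data2.ndf',  -- ideally on a different drive than the mdf
        SIZE = 200GB,
        FILEGROWTH = 10GB
    );
    -- No TO FILEGROUP clause, so the new file joins the PRIMARY filegroup
    -- and SQL Server starts spreading pages across both files.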

    Another option is to add filegroups, each with at least one file. Now you can place entire tables in specific filegroups, and put nonclustered indexes in other filegroups. You have a lot of control and can see a lot of performance benefits, but all these actions are manual and will make your job more complicated. This also gives you the option of moving static tables into their own filegroup and setting it to read-only, then you can look into reducing your backup time by using filegroup backups and partial restores.
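
    A minimal sketch of the filegroup route, again with placeholder names (MyDB, a filegroup called ARCHIVE, and a static table dbo.OrderHistory):

    ALTER DATABASE MyDB ADD FILEGROUP [ARCHIVE];

    ALTER DATABASE MyDB
    ADD FILE
    (
        NAME = MyDB_Archive1,
        FILENAME = 'F:\Data\MyDB_Archive1.ndf',
        SIZE = 200GB
    )
    TO FILEGROUP [ARCHIVE];

    -- Create (or rebuild) a table on the new filegroup instead of PRIMARY
    CREATE TABLE dbo.OrderHistory
    (
        OrderID   int      NOT NULL PRIMARY KEY,
        OrderDate datetime NULL
    ) ON [ARCHIVE];

    -- Once the static data is loaded, the whole filegroup can be made read-only
    ALTER DATABASE MyDB MODIFY FILEGROUP [ARCHIVE] READ_ONLY;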

    If you want to address backup performance, you can try striped backups. You can specify multiple files in the BACKUP command and write them in parallel. There is no relation between the number of backup files specified and the number of mdf/ndf files being backed up; you have to experiment to find out how many backup files give you the best performance. You could, for example, back up your 2TB mdf file to five bak files, or split the database into 10 data files but back them up to one bak file. Ideally these backup files are on separate drives and I/O channels from the drives holding the mdf/ndf files. One issue you'll have to overcome is that this functionality is not currently supported by point-and-click maintenance plan wizards; you'll have to script it yourself.

    Just to be clear, specifying multiple destination files in one BACKUP command (BACKUP DATABASE x TO DISK=file1,file2,...) is not the same as running separate "BACKUP DATABASE x FILE=f TO DISK=file1" commands.
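
    A minimal sketch of that difference, with placeholder paths and the hypothetical logical file name MyDB_Data2:

    -- Striped backup: the whole database written in parallel to three backup files
    BACKUP DATABASE MyDB
    TO DISK = 'G:\Backup\MyDB_1.bak',
       DISK = 'H:\Backup\MyDB_2.bak',
       DISK = 'I:\Backup\MyDB_3.bak';

    -- Not the same thing: this backs up only the single data file MyDB_Data2
    BACKUP DATABASE MyDB
    FILE = 'MyDB_Data2'
    TO DISK = 'G:\Backup\MyDB_Data2.bak';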

  • paksyeet (6/24/2009)


    Actually, it's a new server that was turned over to me, which has NO (you've read it right) backup at all.

    Looking forward, I'm planning to chunk that one .mdf into multiple files, or maybe multiple databases, so that backups take less time and performance improves.

    In this case, I would like to solicit additional info from anyone who has faced the same scenario or at least has a better idea...

    thanks..

    No backups? lol. As everyone here says, you need to keep your resume updated while you wait for the day it crashes.

    Anyway, on a serious note, you can move your archive data to different databases or (if you're sure it won't change in the future) to a read-only filegroup. You can do piecemeal backups here.

    You can split your database into multiple filegroups and back up your filegroups on a weekly basis, with maybe a differential filegroup backup daily, besides your log backups.
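
    A minimal sketch of that kind of schedule, assuming placeholder names (MyDB, a filegroup called ARCHIVE) and the FULL recovery model for the log backups:

    -- Weekly: full backup of one filegroup
    BACKUP DATABASE MyDB
    FILEGROUP = 'ARCHIVE'
    TO DISK = 'G:\Backup\MyDB_ARCHIVE_full.bak';

    -- Daily: differential backup of the same filegroup
    BACKUP DATABASE MyDB
    FILEGROUP = 'ARCHIVE'
    TO DISK = 'G:\Backup\MyDB_ARCHIVE_diff.bak'
    WITH DIFFERENTIAL;

    -- Frequently: transaction log backups
    BACKUP LOG MyDB TO DISK = 'G:\Backup\MyDB_log.trn';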

    I suggest you plan your restoration strategy ASAP and, based on that, split your database into multiple filegroups.



    Pradeep Singh

  • Add a filegroup, then add a file and assign it to that group. So simple.

  • 1) If you have too few spindles in your I/O subsystem, multiple files can actually degrade performance.

    2) Moving data to new filegroups is done via a clustered index, specifying which filegroup you want the clustered index created on.

    3) Watch your tlog file - big table moves will make big log files. Size it appropriately before you start to avoid OS file fragmentation.

    4) how will these files speed your backup?

    5) I highly recommend you get some professional help on board for a day or two. You are asking questions that someone dealing with terabyte-sized databases should already know. There are likely a BUNCH of things done improperly or suboptimally here by whoever left you this database!

    6) I also highly recommend you get a copy of HyperBac to help with backup times and size. Wonderful product!

    Disclaimer: I have a close relationship with Hyperbac, use their products and recommend them to my clients. Also, if you care to, you can mention TheSQLGuru sent you; you will get a discount and my daughter will get a few coins for her college fund. Contact javen@hyperbac.com

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • If your real issue is backup performance, then the best thing you can do is use a third-party backup product, like SQL LiteSpeed or Redgate Backup. Or that other one already mentioned.

    The compressed backups will usually run much faster and will take up much less space, so they will be a lot easier to manage. With a database that size, space for backup files may be hard to come by.

    If you are using SQL Server 2008 Enterprise Edition, then backup compression is already built-in at no extra cost. The upcoming SQL Server 2008 R2 has backup compression built-in for Standard Edition if you can wait for it and can upgrade to 2008.
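
    A minimal sketch of a native compressed backup, assuming an edition that supports it and placeholder names and paths:

    BACKUP DATABASE MyDB
    TO DISK = 'G:\Backup\MyDB_full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;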

    In any case, you need to get some database backup going right away; you don't want to end up as another sad story here:

    It's dead, Jim

    http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=67782

  • Simple steps

    1. Create a new filegroup, say "SECONDARY".

    2. Create a new file under this filegroup.

    3. If you have big objects, create a new copy of each object with some suffix, using the ON [SECONDARY] option (the default is PRIMARY).

    4. Copy the data from the existing table to the new table, delete the OLD table, and rename the new table to the original name.

  • For example:

    CREATE TABLE dbo.Table1
    (
        c1 int NOT NULL,
        c2 char(10) NULL,
        c3 datetime NULL,
        CONSTRAINT PK_sample_table PRIMARY KEY (c1)
    ) ON [PRIMARY]

    The new table script would look like this:

    CREATE TABLE dbo.Table1_Temp
    (
        c1 int NOT NULL,
        c2 char(10) NULL,
        c3 datetime NULL,
        -- the constraint needs a different name while both tables exist
        CONSTRAINT PK_sample_table_Temp PRIMARY KEY (c1)
    ) ON [SECONDARY]
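
    And a minimal sketch of steps 1, 2 and 4, assuming a database named MyDB and the dbo.Table1 / dbo.Table1_Temp tables above (all names and paths are placeholders):

    -- Steps 1 and 2: create the filegroup and add a file to it
    ALTER DATABASE MyDB ADD FILEGROUP [SECONDARY];

    ALTER DATABASE MyDB
    ADD FILE
    (
        NAME = MyDB_Secondary1,
        FILENAME = 'F:\Data\MyDB_Secondary1.ndf',
        SIZE = 200GB
    )
    TO FILEGROUP [SECONDARY];

    -- Step 4: copy the data, drop the old table, rename the new one
    INSERT INTO dbo.Table1_Temp (c1, c2, c3)
    SELECT c1, c2, c3 FROM dbo.Table1;

    DROP TABLE dbo.Table1;
    EXEC sp_rename 'dbo.Table1_Temp', 'Table1';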

  • You don't need to create a new table and move the data over in order to move a table into a new filegroup. Just create a clustered index and assign it to the new filegroup; building that index moves all the data into the new filegroup. It takes less time, and the table can be moved back by dropping and recreating the clustered index on PRIMARY again.
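
    A minimal sketch of that approach, using the original dbo.Table1 from the example above (clustered primary key PK_sample_table on c1) and the [SECONDARY] filegroup:

    -- Rebuild the existing clustered index onto the new filegroup; the data follows it
    CREATE UNIQUE CLUSTERED INDEX PK_sample_table
        ON dbo.Table1 (c1)
        WITH (DROP_EXISTING = ON)
        ON [SECONDARY];

    -- To move it back, run the same statement with ON [PRIMARY].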
