Move table between filegroups: no primary keys

  • Does anyone have a scriptable solution?

    I have inherited a 978 GB database with 600 tables. I want to introduce filegroups to

    1) separate the indexes

    2) separate the 100 largest, most heavily used tables and their indexes

    I will be placing the filegroups on different sets of disks to increase overall throughput.

    I know that if the tables had a PK I could use

    ALTER TABLE tbl1 DROP CONSTRAINT pk WITH (MOVE TO NewFileGroup)

    Without a PK/unique constraint, I can't use MOVE TO.

    Anyone done this?

    I am looking at this as the first step toward introducing partitioning on the largest tables, those with 100M rows or more.

    Thanks,

    Joseph

  • I don't know how this will work with almost 1 TB, and I don't know if there is a better solution, but what if you create a clustered index on these tables using the ON filegroup clause, sending them to another filegroup, and then drop the clustered index afterwards? Let's say you don't have an identity column or a PK/UQ: create a new identity column, create a CI on that column, and afterwards drop the CI and the identity column. If you already have an identity column, even better: just create the CI on the other filegroup and drop it at the end. Obviously, this is after you have dropped all the FKs, etc.

    BOL says

    Note: Because the leaf level of a clustered index and its data pages are the same by definition, creating a clustered index and using the ON filegroup clause effectively moves a table from the file on which the table was created to the new filegroup. Before creating tables or indexes on specific filegroups, verify which filegroups are available and that they have enough empty space for the index. It is important that the filegroup has at least 1.2 times the space required for the entire table.
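    To check that up front, here is a rough sketch (assuming SQL Server 2005 or later) of a query showing how much free space the files in each filegroup have; size and FILEPROPERTY are reported in 8 KB pages, so dividing by 128 gives MB:

    SELECT fg.name AS filegroup_name,
           f.name AS logical_file_name,
           f.size / 128 AS size_mb,
           FILEPROPERTY(f.name, 'SpaceUsed') / 128 AS used_mb,
           (f.size - FILEPROPERTY(f.name, 'SpaceUsed')) / 128 AS free_mb
    FROM sys.database_files AS f
    JOIN sys.filegroups AS fg ON fg.data_space_id = f.data_space_id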

    I think on a large table this process will be very expensive.

    I have one script (designed for SQL 2000), and honestly I have never used it, so I can't say it works, but you can see what the script does and test it on small tables.

    http://education.sqlfarms.com/education/ShowPost.aspx?PostID=59

    I don't know if it's the best way for VLDBs.

    $hell your Experience !!!

    Thank you for your response.

    I did know about the PK/unique clustered index. My problem is creating one given the content of the tables. To get a unique index, some keys would have to cover almost the entire row.

    I have researched this further and agree with several comments along the lines of "Why would MS do this? How come you can't move to a filegroup WITHOUT a PK/unique constraint?"

    I have posted it on MSDN, but I have not had any responses.

    thanks,

    Joseph

  • can't you just do the following:

    build a new table on your secondary file group with the same structure,

    insert into it,

    rename the old

    rename the new?

    isn't that effectively what you need?
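    Something like this rough sketch, where the table, column, and filegroup names are only placeholders:

    -- build an identical structure on the secondary filegroup
    CREATE TABLE dbo.BigTable_new (col1 int, col2 varchar(50)) ON NewFileGroup

    -- copy the data across
    INSERT INTO dbo.BigTable_new (col1, col2)
    SELECT col1, col2 FROM dbo.BigTable

    -- swap the names
    EXEC sp_rename 'dbo.BigTable', 'BigTable_old'
    EXEC sp_rename 'dbo.BigTable_new', 'BigTable'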

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and an INSERT INTO... statement for that table to give the volunteers here representative data. With your description of the problem, we can provide a tested, verifiable solution to your question! Asking the question the right way gets you a tested answer the fastest way possible!

  • You can move this table without creating a clustered index: create a new table in the new filegroup, insert the data with INSERT/SELECT, drop the old table, and rename the new table with the old name.

    But this is much more expensive for SQL Server, and in your case I think it is extremely unwise because of the number of rows.

    Remember, to move your table without a clustered index, you only have to create an identity column (if one doesn't exist), create a CI on it sending it to the other filegroup, and when it's done DROP the column and the index. The table returns to its original state.

    $hell your Experience !!!

  • Thanks for the info.

    I had thought of inserting, but just 1 table took a very long time.

    I had not thought of the identity column. I will give that a try.

    Thank you for your advice,

    Joseph

  • Let's say you have a table with this structure:

    CREATE TABLE Test (code int, name varchar(50))

    1 - With nothing at all (no clustered index, PK, or identity column)

    You can create a new identity column, send the table to the new filegroup, and afterwards drop the index and the identity column:

    ALTER TABLE Test ADD DId bigint IDENTITY(1,1)

    Then you create the CI on the other filegroup:

    CREATE CLUSTERED INDEX [MyCI]
    ON [Test]([DId])
    ON [newfilegroup]

    DROP INDEX [Test].MyCI

    ALTER TABLE Test DROP COLUMN DId

    2 - With no clustered index or PK, but your table already has an identity column called MyIdentity

    Then you create the CI on that column, on the other filegroup, and afterwards drop the index:

    CREATE CLUSTERED INDEX [MyCI]
    ON [Test]([MyIdentity])
    ON [newfilegroup]

    DROP INDEX [Test].MyCI

    Simply put, an identity column is an auto-increment column, so you do not have to supply values for it.

    I suggest that before you do anything, you read about identity columns in BOL.

    $hell your Experience !!!

  • Thank you, I have started to test this and it seems to be the best path. I am working through the script on the development box right now.

    Joseph

  • Whether you copy the data to a new filegroup with an INSERT/SELECT, or move it by creating and then dropping a clustered index, the entire table has to be read and written. There is no real difference in the amount of I/O involved. In addition, if your existing table has small rows and little fragmentation (i.e., many rows per page and little free space), the step of adding the identity column could cause page splits and double the size of the table in the primary filegroup before you even start to move it.

    What you need to consider is how the move will be logged. Trying to move the table with a single INSERT/SELECT will require logging each row moved. This will put a huge strain on your transaction log, possibly filling the log drive and stopping all activity in the database. The same problem occurs with the identity + index technique: all actions will be logged as a single transaction.

    You also need to give some thought to your disaster recovery options in case you screw this up. Take a full backup before and after moving the table. Has anyone ever tried to do a complete restore of this monstrosity? Can you schedule a maintenance period to do this with no interference? The best case would be if you can kick all the users off the system and put the database in single user mode for the duration.
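    For example, something along these lines before and after the move (the database name and backup path are placeholders):

    -- full backup before touching anything
    BACKUP DATABASE MyDB TO DISK = 'D:\Backup\MyDB_before_move.bak' WITH INIT

    -- keep everyone else out for the duration
    ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE

    -- ... move the table ...

    -- let users back in and take the post-move backup
    ALTER DATABASE MyDB SET MULTI_USER
    BACKUP DATABASE MyDB TO DISK = 'D:\Backup\MyDB_after_move.bak' WITH INIT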

    If you can afford to switch to simple recovery mode, you can move the table by using INSERT/SELECT in a loop to move a reasonable number of rows at a time. The definition of "reasonable" varies, but could be between 1000 and 50000 rows. This technique will have a minimal impact on logging; although every row copy is logged, there is little chance of running out of log space. This will run much faster on a large table than trying to do it all at once.
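    A rough sketch of one way to batch it when the table has no usable key (the table and column names are placeholders); note that it empties the source table as it copies, so take a backup first:

    -- each pass moves one batch in its own transaction, so in simple recovery
    -- the log space can be reused between batches
    WHILE 1 = 1
    BEGIN
        DELETE TOP (20000)
        FROM dbo.BigTable
        OUTPUT deleted.col1, deleted.col2
        INTO dbo.BigTable_new (col1, col2)

        IF @@ROWCOUNT = 0 BREAK
    END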

    The method with the least logging (and therefore the fastest) would be to dump the data to a flat file with BCP, and then use BCP to import it into the new table. If a number of conditions are met (the database is in simple or bulk-logged recovery mode, the target table is empty or has no indexes, see BOL for details) the data will be imported with minimal logging. An added benefit is that you have an external copy of the table in case something blows up.
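    A rough sketch of the bcp pair (server, database, table, and file names are placeholders; -n keeps native format, -T uses a trusted connection, -b sets the batch size, and the -h "TABLOCK" hint helps qualify for minimal logging):

    bcp MyDB.dbo.BigTable out D:\Dump\BigTable.dat -n -S MyServer -T

    bcp MyDB.dbo.BigTable_new in D:\Dump\BigTable.dat -n -S MyServer -T -b 50000 -h "TABLOCK"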
