Rebalancing data in file groups. Need help

  • I am looking for the easiest way of rebalancing data across multiple files.

    Instead of creating a secondary filegroup and then dropping and recreating all indexes in the database which is going to take ages (we have a lot of tables and indexes), I am trying to just add more files to the primary file group and then rebalance data evenly between these.

    I assumed that adding the new files to the primary filegroup and then rebuilding all indexes on a table would redistribute the table evenly across those files. This is not the case though: it rebalances the data a bit, but I still end up with the majority in the first file that existed.

    I have attached the script I am running for anybody to take a look at, as maybe it is something in the create database/file statements that is the issue.

    Basically, what I am seeing is this: to start with, the table is 160MB. I then add the files and rebuild all indexes on the table. The first file ends up at about 100MB and each of the three other files at about 20MB. I would expect them all to be the same size.

    If I am completely barking up the wrong tree on how to achieve this please tell me.

    Any help would be great, thanks.
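    In outline, the approach looks like this (the database, table, and file names below are placeholders, not the attached script):

    ```sql
    -- Add three more files to the PRIMARY filegroup (placeholder names/paths)
    ALTER DATABASE MyDB ADD FILE
        (NAME = MyDB_Data2, FILENAME = 'D:\Data\MyDB_Data2.ndf', SIZE = 64MB),
        (NAME = MyDB_Data3, FILENAME = 'D:\Data\MyDB_Data3.ndf', SIZE = 64MB),
        (NAME = MyDB_Data4, FILENAME = 'D:\Data\MyDB_Data4.ndf', SIZE = 64MB)
    TO FILEGROUP [PRIMARY];

    -- Rebuild every index on the table, expecting the data to spread out
    ALTER INDEX ALL ON dbo.MyTable REBUILD;

    -- Check how the data actually landed, per file (sizes are in 8KB pages)
    SELECT name,
           size / 128 AS size_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
    FROM sys.database_files
    WHERE type_desc = 'ROWS';
    ```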

  • ...creating a secondary filegroup and then dropping and recreating all indexes in the database which is going to take ages (we have a lot of tables and indexes)...

    adding the new files to the primary file group and then rebuilding all indexes on a table ...

    You are physically doing the same thing in both cases, so why would it take ages in the first case but not in the second?

    You cannot control how data is distributed between files (that is SQL Server's internal business), but you can control how data is distributed between filegroups.
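    For example, a table can be moved between filegroups by rebuilding its clustered index onto the target filegroup (the filegroup, table, and index names here are illustrative, not from the attached script):

    ```sql
    -- Create a secondary filegroup and give it a file
    ALTER DATABASE MyDB ADD FILEGROUP FG2;
    ALTER DATABASE MyDB ADD FILE
        (NAME = MyDB_FG2_1, FILENAME = 'D:\Data\MyDB_FG2_1.ndf', SIZE = 64MB)
    TO FILEGROUP FG2;

    -- Rebuilding the clustered index ON FG2 moves the table's data there
    CREATE UNIQUE CLUSTERED INDEX PK_MyTable
    ON dbo.MyTable (Id)
    WITH (DROP_EXISTING = ON)
    ON FG2;
    ```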

    Because in the second case I can just write one rebuild-all-indexes statement per table. That takes 20 seconds to write, as opposed to scripting out hundreds of drop and recreate statements for all the indexes in the database.

    As a third option, I've seen that you can create additional files in PRIMARY and then run DBCC SHRINKFILE with EMPTYFILE to move the data among the files. The problem I've found with this is that it still distributes more data to the original primary file and less to the new files in the primary filegroup. Any ideas?
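    For reference, the shrink step I'm describing looks roughly like this (the file name is a placeholder):

    ```sql
    -- Try to push data out of the original file into the other files
    -- in the same filegroup
    DBCC SHRINKFILE (MyDB_Data1, EMPTYFILE);

    -- Note: the primary data file can never be emptied completely because it
    -- holds system pages, so a common fallback is shrinking it to a target
    -- size in MB instead:
    DBCC SHRINKFILE (MyDB_Data1, 40);
    ```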

  • william.rees.howells (4/28/2014)


    I am looking for the easiest way of rebalancing data across multiple files. [...] I would expect them all to be the same size.

    Because the filegroup has an existing file that already contains data, the files will not be filled at the same rate: SQL Server's proportional fill algorithm writes more data to the files with more free space, so what you are doing will never balance things exactly. Easier just to rebuild into a new filegroup, in my opinion and experience.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

    Have to go with Perry on this: create a secondary filegroup and then migrate the data. If you have a clustered index it's even easier; just drop and recreate the clustered index on the new filegroup and SQL will do the rest.

    Also, if you can put the secondary filegroup on a different disk, you should be able to minimise read/write clashes, as one disk will be doing straight reads while the other does straight writes.

    On large tables you might also benefit from moving the non-clustered indexes into their own filegroup, on their own disk, with their own growth rates.
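    As a sketch, a non-clustered index can be moved to its own filegroup the same way, via DROP_EXISTING (all names below are illustrative):

    ```sql
    -- A dedicated filegroup for indexes, ideally on a separate disk
    ALTER DATABASE MyDB ADD FILEGROUP FG_INDEX;
    ALTER DATABASE MyDB ADD FILE
        (NAME = MyDB_IX1, FILENAME = 'E:\Index\MyDB_IX1.ndf', SIZE = 64MB)
    TO FILEGROUP FG_INDEX;

    -- Rebuild the non-clustered index onto the index filegroup
    CREATE NONCLUSTERED INDEX IX_MyTable_Col1
    ON dbo.MyTable (Col1)
    WITH (DROP_EXISTING = ON)
    ON FG_INDEX;
    ```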

    _________________________________________________________________________
    SSC Guide to Posting and Best Practices
