Hi Gail,

    Thank you very much for writing back so quickly.

    <<I liked your solution for physical fragmentation (rebuilding the indexes into a secondary partition)>>

    That is what we normally do when we work on our 46 TB data warehouse database. When we change the clustering key of one of the larger tables, we rebuild it into a new filegroup, so that the size of the database does not grow permanently.
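    As a minimal sketch of that approach (the table, index, and filegroup names below are hypothetical), the clustered index can be rebuilt straight into a secondary filegroup with DROP_EXISTING, so the old and new copies never have to coexist in the primary filegroup:

```sql
-- Hypothetical names: dbo.BigTable, CIX_BigTable, FG_Rebuild.
-- DROP_EXISTING rebuilds the index in one operation and places the
-- new copy on the secondary filegroup, so PRIMARY does not grow.
CREATE UNIQUE CLUSTERED INDEX CIX_BigTable
    ON dbo.BigTable (LoadDate, RowId)
    WITH (DROP_EXISTING = ON)
    ON FG_Rebuild;
```

    Once the rebuild is done, the secondary filegroup can be dropped or reused for the next table, which is what keeps the permanent footprint down.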

    The current problem databases receive a new row in each of 600 tables every 10 minutes. The rows are quite large, almost 3 kB, so we can only fit 3 rows on a page. The tables are clustered on a DateTime column, so we do not see logical fragmentation as such.

    But we do see extent fragmentation: at 3 rows/page * 8 pages/extent, an extent holds at most 24 rows of one table, so for every 24 rows a query reads it has to jump to another extent.
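    The extent-level scatter can be measured directly with sys.dm_db_index_physical_stats; a small fragment size in pages is the symptom described above. The database and table names here are placeholders:

```sql
-- Hypothetical names: ProblemDB, dbo.SampleTable.
-- A LIMITED scan reads only the upper index levels, so it is cheap
-- even on large tables. avg_fragment_size_in_pages close to 1 means
-- almost every page sits in a different run of contiguous pages.
SELECT index_id,
       avg_fragmentation_in_percent,   -- logical (out-of-order) fragmentation
       fragment_count,                 -- number of contiguous page runs
       avg_fragment_size_in_pages      -- pages per contiguous run
FROM sys.dm_db_index_physical_stats(
         DB_ID(N'ProblemDB'),
         OBJECT_ID(N'dbo.SampleTable'),
         NULL, NULL, N'LIMITED');
```

    With 600 tables each inserting one row per interval, the allocations interleave, which is why the fragments stay small even though the logical order is fine.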

    The only way I see around that is to create 600 filegroups, and then database growth becomes a bit of a problem.

    I was worried that the GAM and SGAM pages might become fragmented when we grow the database in 1 MB increments.

    That seems not to be the case.

    Thank you for your help.

    Best regards,

    Henrik