• I'm just having trouble seeing how you could reorganize a table that size and get away with only 24 gigs of tlog activity.

    When I look at the table and add up the column sizes, I get 132 bytes per row. If there are a billion of those rows, then it's going to be 132 gigs, right? And that's not even counting the overhead SQL Server adds to maintain the table (row headers, null bitmaps, page headers, and so on).
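    Just as a sanity check on that back-of-envelope math, here's a rough sketch, assuming SQL Server and using dbo.BigTable as a placeholder name for the table in question:

        -- Back-of-envelope: 132 bytes/row * 1,000,000,000 rows
        SELECT 132.0 * 1000000000 / POWER(1024.0, 3) AS estimated_gib;  -- ~123 GiB (~132 GB decimal)

        -- Actual reserved/used space, including all the per-row and
        -- per-page overhead (dbo.BigTable is a placeholder):
        EXEC sp_spaceused N'dbo.BigTable';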

    To expect a rebuild or reorg to finish with only 24 gigs of log, your process could write to less than a fifth of those pages (24/132 is roughly 18 percent). Is that realistic? Maybe in the best case, if the table isn't very fragmented, since a reorganize only moves the out-of-order pages. But by nature I'd probably expect the worst case. If I saw a table that size, needed to rebuild the clustered index, and only had a budget of 24 gigs of log space, I'd be a bit worried.
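    Before committing to a budget like that, I'd at least measure how fragmented the clustered index really is. Another rough sketch, again assuming SQL Server and the placeholder name dbo.BigTable:

        -- REORGANIZE only relocates out-of-order leaf pages, so log volume
        -- scales with fragmentation; a full REBUILD logs roughly the whole
        -- table in the full recovery model. (dbo.BigTable is a placeholder.)
        SELECT index_id,
               avg_fragmentation_in_percent,
               page_count
        FROM sys.dm_db_index_physical_stats(
                 DB_ID(), OBJECT_ID(N'dbo.BigTable'), NULL, NULL, 'LIMITED')
        WHERE index_id = 1;  -- 1 = the clustered index

    It might also matter that REORGANIZE runs as lots of small transactions, so with frequent log backups the log file itself could stay near 24 gigs even if the total log generated is much larger. Though I could be wrong about how the 24 gigs was counted here.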

    Of course, I'm a bit of a pessimist, and I'm probably not the expert on this either, so there's that to consider too! It would be cool to learn how to reorg or rebuild a table this size while only generating 24 gigs of log, so I'd be interested in hearing any theories. Maybe somebody who knows the internals can shed some light on this!