• fluffydeadangel (12/5/2012)


    Doing a recreate is an interesting idea and will be addressed. I don't have the exact queries that are being run, but I was an application user at one point. I know it does massive ranges in multiple ways. A request has been put in for the form queries that are being run. I am currently trying to do the defrag index by index. Doing the whole table (just this table) fills up the 400 GB log file, fills up the drive that this one table is stored on, and still doesn't complete. And the fill factor: setting it to something more common, like 80, fills up the drive (as 20% more space would). I'm going to be requesting more storage, though for now I'm working within my confines.

    I greatly appreciate the advice and am looking at all the avenues being presented.
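For the index-by-index defrag described above, something along these lines is roughly what I'd script; the database, table, and index names here are placeholders, so substitute your own:

    -- List the indexes on the big table, worst fragmentation first,
    -- so they can be worked one at a time instead of all at once.
    SELECT  i.name AS index_name,
            ips.index_id,
            ips.avg_fragmentation_in_percent,
            ips.page_count
    FROM    sys.dm_db_index_physical_stats(
                DB_ID('YourDatabase'),            -- placeholder database
                OBJECT_ID('dbo.YourBigTable'),    -- placeholder table
                NULL, NULL, 'LIMITED') AS ips
    JOIN    sys.indexes AS i
            ON  i.object_id = ips.object_id
            AND i.index_id  = ips.index_id
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    -- REORGANIZE works in many small transactions and can be stopped and
    -- restarted, so it is far easier on the log than one monolithic rebuild.
    ALTER INDEX IX_YourBigTable_SomeIndex ON dbo.YourBigTable REORGANIZE;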

Doesn't the fact that you have 80% fragmentation mean that you're already using a significant amount of space over and above what is actually needed? Using a fill factor of 80 would only increase your footprint by 25% (pages are left 80% full, so you need 100/80 = 1.25 times as many pages), so I am not sure how that would cause the drive to fill up. I understand the log filling up, but switching to the SIMPLE recovery model may be another option to try during the index rebuilds.
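If the log is the main constraint, an offline rebuild under the SIMPLE (or BULK_LOGGED) recovery model is minimally logged. A rough sketch, again with placeholder names, might look like this:

    -- Switching to SIMPLE breaks the log backup chain, so take a full
    -- (or differential) backup after switching back to FULL.
    ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;

    -- Rebuild one index at a time. SORT_IN_TEMPDB moves the sort work to
    -- tempdb instead of the drive this table lives on, and FILLFACTOR = 80
    -- leaves the 20% free space per page discussed above.
    ALTER INDEX IX_YourBigTable_SomeIndex ON dbo.YourBigTable
        REBUILD WITH (SORT_IN_TEMPDB = ON, FILLFACTOR = 80);

    ALTER DATABASE YourDatabase SET RECOVERY FULL;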

I understand and appreciate the point of seeks vs. scans, but the massive amount of space savings (in some environments) may itself be enough to justify the defrag. I know that SAN space is pretty expensive, and once you do the math and see the cost you are incurring with the bloated table (at least about 400 GB, perhaps?) you can make that call. I know that in our shop, another 400 GB can be quite a bit of $$ in savings 🙂
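To put a number on that bloat before requesting more storage, sys.dm_db_index_physical_stats can be run in SAMPLED mode (avg_page_space_used_in_percent comes back NULL in LIMITED mode). Placeholder names once more:

    -- Estimate how much space the half-empty pages are wasting, per index.
    SELECT  i.name AS index_name,
            ips.page_count,
            ips.avg_page_space_used_in_percent,
            CAST(ips.page_count * 8 / 1024.0 / 1024.0          -- pages are 8 KB each
                 * (1 - ips.avg_page_space_used_in_percent / 100.0)
                 AS decimal(10, 2)) AS approx_wasted_gb
    FROM    sys.dm_db_index_physical_stats(
                DB_ID('YourDatabase'),
                OBJECT_ID('dbo.YourBigTable'),
                NULL, NULL, 'SAMPLED') AS ips
    JOIN    sys.indexes AS i
            ON  i.object_id = ips.object_id
            AND i.index_id  = ips.index_id;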