• I've got a data mart that's growing by about 3 GB a day, and it drives me nuts.  I've only got about 400 GB to work with (without paying my IT dept a huge amount for a bigger allocation... but that's another story).

    Anyway, the entire system has been redesigned to save some space and increase speed.  Fair warning: this approach is only for people looking to save space on massive data sets.

    We have inventory data coming in each day, keyed by date-store-product, plus six other fields: cost, price, model_stock, on_hand, in_transit, and sales_qty.
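
    To make that concrete, here's a rough sketch of the conventional layout (generic SQL; the table and column names are illustrative, not our actual schema):

        CREATE TABLE inventory (
            inv_date    DATE NOT NULL,   -- the column we end up eliminating below
            store_id    INT  NOT NULL,
            product_id  INT  NOT NULL,
            cost        DECIMAL(10,2),
            price       DECIMAL(10,2),
            model_stock INT,
            on_hand     INT,
            in_transit  INT,
            sales_qty   INT,
            PRIMARY KEY (inv_date, store_id, product_id)
        );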

    So, rather than keep a column with the date, we made a separate table for each day.  When running a query, we simply build the result set in a temp table as we go.  (35,000,000 recs a day.)
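
    In rough terms it works like this (illustrative names and SQL Server-style temp tables, not our exact code): the date lives in the table name instead of in a column, and a multi-day query appends each daily table into one temp table:

        -- One table per calendar day; no per-row date field.
        CREATE TABLE inv_20060313 (
            store_id    INT NOT NULL,
            product_id  INT NOT NULL,
            cost        DECIMAL(10,2),
            price       DECIMAL(10,2),
            model_stock INT,
            on_hand     INT,
            in_transit  INT,
            sales_qty   INT,
            PRIMARY KEY (store_id, product_id)
        );

        -- A two-day query: seed the temp table from day one, then append day two.
        SELECT store_id, product_id, sales_qty
        INTO #results
        FROM inv_20060313
        WHERE store_id = 42;

        INSERT INTO #results
        SELECT store_id, product_id, sales_qty
        FROM inv_20060314
        WHERE store_id = 42;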

    That saved 35,000,000 date fields each day.  The next step was to prefix each day's columns with the day name and merge the seven daily tables into one weekly table.  So, we ended up with store-product-mon_cost, tue_cost, wed_cost, etc...  That saved us from repeating the storeID and productID 35,000,000*6*2 times a week (one copy of each key per week instead of seven, so six days' worth of two key columns, roughly 420,000,000 fields, just disappears).  Believe it or not, it makes running a query on a full week of data VERY fast.  Oddly enough, the only real cost has been difficulty, not speed, and now that we've got compiled search procedures, it's a snap.
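
    A sketch of that week-wide layout (again illustrative, generic SQL; the real table repeats the mon_-through-sun_ pattern for all six measures):

        -- One row per store/product per week; the keys are stored once
        -- instead of seven times.
        CREATE TABLE inv_week (
            store_id   INT NOT NULL,
            product_id INT NOT NULL,
            mon_cost DECIMAL(10,2), tue_cost DECIMAL(10,2), wed_cost DECIMAL(10,2),
            thu_cost DECIMAL(10,2), fri_cost DECIMAL(10,2), sat_cost DECIMAL(10,2),
            sun_cost DECIMAL(10,2),
            -- ...same seven columns again for price, model_stock, on_hand,
            -- in_transit, and sales_qty...
            PRIMARY KEY (store_id, product_id)
        );

        -- A full week for one product is one scan of one table, no joins:
        SELECT store_id,
               mon_cost, tue_cost, wed_cost, thu_cost, fri_cost, sat_cost, sun_cost
        FROM inv_week
        WHERE product_id = 12345;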

    In the end, even if we'd had more space, we'd still have ended up with a much more effective system; we can even export entire weeks at a time to backup systems.

    So, if you're running into a wall, remember that you can redesign some or all of the DB in an unconventional format while keeping the active portion in a standardized layout.