• There are several models to choose from when you consider data warehousing. Some physical modelers are "purists" and would never conceive of hanging a child table off a dimension table, as it would violate the concept of a pure star schema design. There are times, I believe, when you can do this without penalty, especially if the DBMS involved is good at exploiting single-column indexes with logical ANDing/ORing, as sketched below. Another model difference is whether you are going to perform "typical" data warehouse reporting or whether you are going to design for data mining (either predictive analytics or the newer data warehouse appliances such as Netezza).
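
    As a concrete illustration, below is a minimal DDL sketch of that "impure" design (table and column names are hypothetical, not from any real schema): a child table hanging off a dimension, plus single-column indexes on the fact table's dimension keys that an optimizer capable of index ANDing/ORing can combine at query time.

        -- A "snowflaked" child hanging off a dimension; star-schema
        -- purists would object to this extra level.
        CREATE TABLE dim_category (
            category_key  INTEGER      NOT NULL PRIMARY KEY,
            category_name VARCHAR(100)
        );
        CREATE TABLE dim_product (
            product_key   INTEGER      NOT NULL PRIMARY KEY,
            product_name  VARCHAR(100),
            category_key  INTEGER      NOT NULL REFERENCES dim_category
        );
        CREATE TABLE fact_sales (
            date_key      INTEGER       NOT NULL,
            product_key   INTEGER       NOT NULL REFERENCES dim_product,
            sales_amt     DECIMAL(15,2) NOT NULL
        );
        -- Separate single-column indexes on each dimension key; the
        -- optimizer can AND/OR their results to resolve multi-dimension
        -- predicates without one huge composite index.
        CREATE INDEX fx_sales_date ON fact_sales (date_key);
        CREATE INDEX fx_sales_prod ON fact_sales (product_key);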

    In several industries, your fact table can have extremely long rows (my experience was with financial services, but the insurance industry is the same way). In the logical model, you will have logical sub-typing of the fact table. That is, you have core information that is the same for every transaction, plus differing/variable extensions to that transaction record. Not even mainframe DB2 can handle that kind of length without going to a wasteful 32K page size, especially when you consider that most of the variations will be filled with blanks. I know that database compression will squeeze these out very efficiently, but most of the time a DBA will have to determine whether these variations to the transaction record can be split out into their own fact table. Then, of course, you will have to make sure that the users NEVER join one fact table to another fact table. If they do, performance will be in the toilet and there will be nothing you can do about it.
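
    A rough sketch of that split, with hypothetical names, is below: a core fact table holding the columns common to every transaction, and one extension fact table per sub-type at the same grain, so the mostly-blank variable columns never widen the core row.

        -- Core fact: columns present on every transaction.
        CREATE TABLE fact_txn_core (
            txn_id      BIGINT        NOT NULL PRIMARY KEY,
            txn_date    DATE          NOT NULL,
            account_key INTEGER       NOT NULL,
            txn_type    CHAR(2)       NOT NULL, -- identifies which extension, if any, applies
            amount      DECIMAL(15,2) NOT NULL
        );
        -- Extension fact for one sub-type, keyed at the same grain.
        -- Reports should target the core or one extension at a time;
        -- chaining fact-to-fact joins is exactly what to forbid.
        CREATE TABLE fact_txn_trade_ext (
            txn_id          BIGINT NOT NULL PRIMARY KEY,
            settlement_date DATE,
            broker_key      INTEGER,
            commission      DECIMAL(15,2)
        );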

    Data warehousing/BI does more with date-sensitive data, so it will be incumbent on the physical designer to design/partition in such a way that queries will not plow through unnecessary rows. Sometimes the 80/20 rule comes into play: don't design for 100 percent of the queries. If 80 percent process against the last three years of data, optimize your partitioning for those. If the other 20 percent are crazy, let them have sucky performance (or purchase their own data mart).
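
    In DB2 LUW-style range partitioning (the syntax and the quarterly granularity are my assumptions; the dates are illustrative), that 80/20 design might look like the sketch below: fine-grained partitions over the heavily queried last three years, and one catch-all partition for older history.

        -- Hot data: one partition per quarter, so predicates on recent
        -- dates touch only the partitions they need (partition elimination).
        -- Cold data: a single history partition; the 20 percent who scan
        -- it get what they get.
        CREATE TABLE fact_txn (
            txn_id   BIGINT        NOT NULL,
            txn_date DATE          NOT NULL,
            amount   DECIMAL(15,2) NOT NULL
        )
        PARTITION BY RANGE (txn_date)
        (   STARTING FROM (MINVALUE) ENDING ('2020-12-31'),
            STARTING ('2021-01-01')  ENDING ('2023-12-31') EVERY 3 MONTHS
        );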

    Vertical and horizontal table partitioning will probably play a greater role in DW/BI. Indexing will probably play a greater role too, since the heavy, repetitive SQL write operations of OLTP will not be occurring.
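
    For instance, on a load-then-query fact table you can afford indexes that would be prohibitive under OLTP write volumes. A small sketch, reusing the hypothetical fact_txn_core table from above (the index choices are illustrative, not a recommendation for any particular workload):

        -- Each extra index slows every OLTP INSERT/UPDATE, but in a
        -- read-mostly warehouse it mostly costs load time and disk.
        CREATE INDEX fx_txn_acct_date ON fact_txn_core (account_key, txn_date);
        CREATE INDEX fx_txn_type      ON fact_txn_core (txn_type);
        -- If loads arrive as nightly batches, dropping indexes before the
        -- load and rebuilding afterward keeps the write penalty contained.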