Populating Fact Tables (Data Warehouse Loading Part 3)

Fact tables are normally loaded from transaction tables, such as order tables, or from transactional files, such as web logs. Hence the number of rows to update in a load is much larger than for dimensions. The core of loading fact tables is converting the natural keys into surrogate keys. Let's start with the basic steps, then continue with a few important considerations such as loading partitioned fact tables, slim lookup tables, deduplication, loading snapshot and delta fact tables, and dealing with fact table indexes.

Basic Steps

When loading a transaction table from the OLTP system into a fact table in the data warehouse, the value columns on the transaction table become the fact table measures, the primary key(s) on the transaction table become degenerate dimension columns on the fact table, and the alternate primary keys become the dimensional key columns on the fact table.

As the fact table rows are loaded, each of the key columns is converted from its natural key to a surrogate key. This is why all the dimension tables must be populated before we populate the fact tables: we need the surrogate keys in the dimension tables to translate the fact table natural keys. This is the very heart of data warehouse loading, so it is important to understand it. Perhaps the best way to describe this concept is with an example. Consider a simplified order detail table with the following columns: order_id (primary key), order_date, product_id, quantity and price.

Source table: order_detail

order_id order_date product_id quantity price last_update
352 15/07/2006 BGCKQ 1 12.99 30/10/2006
410 30/10/2006 KMSCG 2 7.99 30/10/2006

In the data warehouse, let us assume we have an oversimplified star schema consisting of a date dimension, a product dimension and a sales fact table, as described below.

Date dimension table: dim_date

date_key date day month
2388 15/07/2006 Saturday July
2485 30/10/2006 Monday October

Product dimension table: dim_product

product_key product_id product_name description
53076 BGCKQ Aike IDE HD case USB 2
92184 KMSCG Sony BP71 VIAO CG7 battery

Fact table: fact_sales

fact_key date_key product_key order_id quantity price load_time
830923 2388 53076 352 1 11.99 15/07/2006

Notes:

Note that order_id 352 already exists in the data warehouse but the price has changed. When it was loaded on 15/07/2006 the price was 11.99 but now the price is 12.99.

In reality the date dimension would contain many more attributes, such as day of the week (2 columns: name and number), ISO date (YYYY-MM-DD), Julian date (number of elapsed days since the beginning of the year), SQL date (in SQL Server datetime format), day of the month, calendar week / month / quarter / year (3 columns for month: full name, short name and number), fiscal week / period / quarter / year, weekday flag, statutory holiday flag, last day of the month flag, etc. Plus day and month names in other languages if your data warehouse is internationalised.

In reality the dimension tables would have standard dimension table columns such as load_time and SCD attributes but for this simplified case these columns are not displayed here.

It is common practice not to have a fact_key column in the fact table, the argument being that the combination of all the dimension keys makes the record unique. In this case, the primary key of the fact table is a composite key. When loading this kind of fact table we need to compare all dimensional key columns to identify the correct row when updating (see step 3 below, and the sketch that follows this note).
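For such a composite-key fact table, the update in step 3 would match on the dimensional keys and the degenerate dimension instead of fact_key. A minimal sketch, assuming the same columns as the fact_sales example in this article:

-- update existing rows by matching on the composite key instead of fact_key
UPDATE f
SET f.quantity = t.quantity,
f.price = t.price,
f.load_time = t.load_time
FROM dw.dbo.fact_sales f
INNER JOIN #fact_sales t
ON f.date_key = t.date_key
AND f.product_key = t.product_key
AND f.order_id = t.order_id
WHERE f.quantity <> t.quantity
OR f.price <> t.price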

The basic steps in loading data warehouse fact tables are described below. They are not that different from the steps in loading dimension tables; in fact, they are almost the same. In this example, it is assumed that the source table, order_detail on the OLTP system, has been loaded incrementally into a stage table also called order_detail. The stage database name is stg. What we need to do now is load the fact table records from the stage into the data warehouse.

Step 1. Create The Temp Table

SELECT * INTO #fact_sales FROM dw.dbo.fact_sales WHERE 1 = 0

The SELECT INTO with WHERE 1 = 0 creates the temp table empty, with the same structure as the target fact table.

Step 2. Populate The Temp Table

SET IDENTITY_INSERT #fact_sales ON

-- look up the surrogate keys and any existing fact_key in a single pass
INSERT INTO #fact_sales
(fact_key, date_key, product_key, order_id, quantity, price, load_time)
SELECT
ISNULL(f.fact_key, 0),
ISNULL(d.date_key, 0),
ISNULL(p.product_key, 0),
s.order_id,
ISNULL(s.quantity, 0),
ISNULL(s.price, 0),
@current_load_time
FROM stg.dbo.order_detail s
LEFT JOIN dw.dbo.dim_date d ON s.order_date = d.sql_date
LEFT JOIN dw.dbo.dim_product p ON s.product_id = p.product_id
LEFT JOIN dw.dbo.fact_sales f ON d.date_key = f.date_key
AND p.product_key = f.product_key
AND s.order_id = f.order_id
WHERE s.load_time BETWEEN @last_run AND @current_run

SET IDENTITY_INSERT #fact_sales OFF

The temp table after it is populated:

fact_key date_key product_key order_id quantity price load_time
830923 2388 53076 352 1 12.99 30/10/2006
0 2485 92184 410 2 7.99 30/10/2006

Source table: order_detail

order_id order_date product_id quantity price last_update
352 15/07/2006 BGCKQ 1 12.99 30/10/2006
410 30/10/2006 KMSCG 2 7.99 30/10/2006

Date dimension table: dim_date

date_key date day month
2388 15/07/2006 Saturday July
2485 30/10/2006 Monday October

Product dimension table: dim_product

product_key product_id product_name description
53076 BGCKQ Aike IDE HD case USB 2
92184 KMSCG Sony BP71 VIAO CG7 battery

Notes:

Note how the dimensional keys are looked up in a single step, by joining the stage table to the dimension tables on the natural keys. Because they are LEFT JOINs, stage rows without corresponding dimension rows will result in NULL dimensional keys, and the ISNULL then converts these NULLs to zeros.

Notice that we don't populate load_time with getdate(), but with a variable named @current_load_time. This variable is populated with getdate() at the beginning of the loading batch and used by all processes in the batch. This is necessary so that, in the event of failure, we know the point in time from which we have to restart the process.

Notice also that we only load the rows between @last_run and @current_run. This is necessary if we keep the records in the stage table for a few days, i.e. if the stage contains more than one day's data. If we clear the stage table as soon as we load it into the data warehouse, we don't need this where clause. (A sketch of how these variables might be populated follows these notes.)

The example above uses only two dimensions, but in practice we would have to deal with more; ten to fifteen dimension key columns on a fact table is a common occurrence.
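As a sketch of how @current_load_time, @last_run and @current_run might be populated at the start of the batch (the control table meta.dbo.etl_batch_control and its columns are hypothetical, not from this article):

DECLARE @current_load_time datetime, @last_run datetime, @current_run datetime

-- one timestamp, taken once, shared by every process in the batch
SET @current_load_time = GETDATE()
SET @current_run = @current_load_time

-- the previous successful run time is kept in a (hypothetical) ETL control table
SELECT @last_run = last_successful_run
FROM meta.dbo.etl_batch_control
WHERE process_name = 'fact_sales'

-- after the batch completes successfully, move the marker forward
UPDATE meta.dbo.etl_batch_control
SET last_successful_run = @current_run
WHERE process_name = 'fact_sales'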

Step 3. Update existing records

UPDATE f
SET f.date_key = t.date_key,
f.product_key = t.product_key,
f.order_id = t.order_id,
f.quantity = t.quantity,
f.price = t.price,
f.load_time = t.load_time
FROM dw.dbo.fact_sales f
INNER JOIN #fact_sales t ON f.fact_key = t.fact_key
WHERE t.fact_key <> 0 AND 
(f.date_key <> t.date_key OR
f.product_key <> t.product_key OR
f.order_id <> t.order_id OR
f.quantity <> t.quantity OR
f.price <> t.price)

Source table: order_detail

order_id order_date product_id quantity price last_update
352 15/07/2006 BGCKQ 1 12.99 30/10/2006
410 30/10/2006 KMSCG 2 7.99 30/10/2006

fact_sales after the update:

fact_key date_key product_key order_id quantity price load_time
830923 2388 53076 352 1 12.99 30/10/2006

Notes:

Here we update the fact table based on the data in the temp table. In this case the price was updated from 11.99 to 12.99. We only update the rows where the temp table's fact_key is not 0, i.e. the rows that already exist in the target fact table. The rows where fact_key is 0 (which do not exist in the fact table) will be inserted later on.

Notice that when updating rows we update the load_time column as well. The last lines of the WHERE clause specify which changes we want to pick up. In most cases we want to pick up changes on all columns, but sometimes there are legitimate business reasons for ignoring changes on certain columns.

Step 4. Insert new records

INSERT INTO dw.dbo.fact_sales
(date_key, product_key, order_id, quantity, price, load_time)
SELECT date_key, product_key, order_id, quantity, price, load_time
FROM #fact_sales
WHERE fact_key = 0

Source table: order_detail

order_id order_date product_id quantity price last_update
352 15/07/2006 BGCKQ 1 12.99 30/10/2006
410 30/10/2006 KMSCG 2 7.99 30/10/2006

Date dimension table: dim_date

date_key date day month
2388 15/07/2006 Saturday July
2485 30/10/2006 Monday October

Product dimension table: dim_product

product_key product_id product_name description
53076 BGCKQ Aike IDE HD case USB 2
92184 KMSCG Sony BP71 VIAO CG7 battery

fact_sales after the insert:

fact_key date_key product_key order_id quantity price load_time
830923 2388 53076 352 1 12.99 30/10/2006
916912 2485 92184 410 2 7.99 30/10/2006

Notes:

It is good practice to always declare the column names; this is important for flexibility and maintenance. Let the RDBMS maintain the fact_key: when setting up the data warehouse, define this column as IDENTITY (1,1).
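A minimal sketch of such a fact table definition (the data types here are assumptions for illustration, not prescribed by the article):

CREATE TABLE dw.dbo.fact_sales
( fact_key int IDENTITY(1,1) NOT NULL PRIMARY KEY,  -- maintained by the RDBMS
  date_key int NOT NULL,
  product_key int NOT NULL,
  order_id int NOT NULL,      -- degenerate dimension
  quantity int NOT NULL,
  price money NOT NULL,
  load_time datetime NOT NULL )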

Logging and closing

In every step above we need to do error handling and logging. Error handling is important because if the loading fails at any step, we need to be able to recover from the failure. Logging is important so we know exactly what happened at each step, i.e. how many records were processed, etc.

At the end of the program, we should not forget to clean everything up, i.e. drop the temp table(s), follow the control protocol (e.g. set the process to a completed state), etc.

The code above shows how to do an upsert with native SQL, which is very fast and efficient. It is worth noting, though, that good dedicated ETL tools such as Informatica and Data Integrator have facilities to do in-memory lookups, which also perform very well. Disk-based lookups are definitely not the way to go here, as they are slow: there are a lot of SQL statements to execute, i.e. one for each lookup, and each of these statements touches the disk, which is a costly operation.

Now that we understand the basic steps in loading fact tables, let's familiarise ourselves with a few practical considerations such as loading partitioned fact tables, slim lookup tables, deduplication, loading snapshot and delta fact tables, and dealing with fact table indexes.

Slim Lookup Tables

When a dimension is very large, it can be a significant performance improvement to provide a separate lookup table for key management. For example, if our customer dimension has 100 columns and 10 million rows, we could create a customer key lookup with only 3 columns: customer_id, customer_key and load_time. This would speed up the dimensional key lookup in step 2 above, and it is also more suitable for in-memory lookup.

Example of Slim Key Lookup table: lookup_customer

customer_id customer_key load_time
493238 30012 02/10/2006
493240 30013 03/10/2006
493241 30014 03/10/2006

The load_time column is useful when we have a very long dimension table, e.g. 25 million rows. This is not uncommon when the data warehouse is used to drive CRM (Customer Relationship Management) systems, especially in dot com companies dealing with online campaigns, whether subscription based or tactical. In this case we can specify a where clause to limit the selection on the slim key lookup, for example to rows where load_time is within the last 2 years. This might cover, say, 98% of the lookups; for the remaining 2%, i.e. those older than 2 years, we then go to the main customer dimension table to find the customer_key.
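A minimal sketch of building the slim lookup and using it with that where clause (the dim_customer and stage column names are assumptions for illustration):

-- rebuild the slim lookup from the full customer dimension
TRUNCATE TABLE dw.dbo.lookup_customer

INSERT INTO dw.dbo.lookup_customer (customer_id, customer_key, load_time)
SELECT customer_id, customer_key, load_time
FROM dw.dbo.dim_customer

-- key lookup against the slim table, limited to the last 2 years;
-- rows that come back as 0 are then looked up against the full dimension
SELECT s.order_id, ISNULL(c.customer_key, 0) AS customer_key
FROM stg.dbo.order_header s
LEFT JOIN dw.dbo.lookup_customer c
ON s.customer_id = c.customer_id
AND c.load_time >= DATEADD(year, -2, GETDATE())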

Although it takes a bit of time to maintain the lookup table, overall we still save time, because the time saved by querying a slim key lookup table is a lot greater than the time required to maintain the lookup table itself. This is especially true for Master Data Management (MDM) dimensions such as customer, product and account, which are used all over the place in the data warehouse and data marts. The time dimension is not that long - 10 years is only about 3,650 rows - hence I tend not to have a lookup for the time dimension. For those of us thinking of setting the grain of the time dimension to hours or minutes, the general advice is: don't. Either put a time stamp column on the fact table or have a time of day dimension. But that discussion (the time of day dimension) is for another article.

Natural Key Changes

Sometimes a natural key which we thought was a good, solid natural key changes and can no longer be used as a natural key. Perhaps the best way to explain this is with an example. In an OLTP source system the customer table has a composite primary key: the combination of branch code and customer code. The customer code is only unique within a branch. Occasionally a branch is closed and all the customers in that branch are moved or assigned to another branch. For example, see the customer table below.

branch_code customer_code customer_name other_attributes
1 1 Andrew ...
1 2 John ...
2 1 Steve ...
2 2 Paul ...

When branch 1 is closed, and its customers are moved to branch 2, the customer table becomes:

branch_code customer_code customer_name other_attributes
2 1 Steve ...
2 2 Paul ...
2 3 Andrew ...
2 4 John ...

If we use branch_code and customer_code as the natural key in the data warehouse, we need to handle this branch closure event with care. In a previous project we utilised another column in the customer table which helped identify a unique record, and we created a special table in the data warehouse to translate the changed natural key to the correct data warehouse key.

Unknown Records

If a fact table record does not have a corresponding dimension record, we basically have 2 choices: either we don't load that record into the fact table, or we load it but put 0 as the dimension key, referring to the unknown record in the dimension table. An unknown record is a record in the dimension table with a dimension key of 0, whose attributes are populated with a blank string, the number 0 or a low value date, depending on the data type. Name and description columns are usually populated with the word "Unknown". The load_time column is populated with the date the record was created. This date is normally the date the data warehouse was set up, because the record is created by the data warehouse setup scripts. For example:

Product dimension table: dim_product

product_key product_id product_name description min_level valid_until load_time
0 0 Unknown Unknown 0 1/1/1900 12/10/2004

Here is an example of a row in the source table with a product_id that does not exist in the dimension table:

order_id order_date product_id quantity price last_update
358 19/08/2006 BGCKZ 3 2.99 31/10/2006

This is how fact_sales looks after that record is loaded:

fact_key date_key product_key order_id quantity price load_time
830937 2424 0 358 3 2.99 31/10/2006

If we load it into the warehouse, we need to flag it in the data quality system, so that it can be reported and corrected on a subsequent load. If we don't load it, it should still be recorded as a data firewall rule violation. Also, if we don't load it, the totals of the measures on the fact table will not be accurate: in the example above, the total sales amount for August 2006 would be 3 x 2.99 less than it should be. Because of this we tend to load fact table records that do not have a corresponding dimension record and set the dimension key to 0. This way the fact table measure totals are correct; it's just that the sale cannot be traced to a valid product record. All the other dimension keys are still valid, e.g. the row can be traced to a valid date dimension record, a valid customer dimension record, etc. And above all, the referential integrity between the fact table and the dimension tables is still valid.
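A minimal sketch of the setup script that creates the unknown record in dim_product (assuming product_key is an IDENTITY column; the attribute values follow the "blank string, 0 or low value date" rule above):

SET IDENTITY_INSERT dw.dbo.dim_product ON

INSERT INTO dw.dbo.dim_product
(product_key, product_id, product_name, description, min_level, valid_until, load_time)
VALUES (0, '0', 'Unknown', 'Unknown', 0, '19000101', GETDATE())

SET IDENTITY_INSERT dw.dbo.dim_product OFF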

Deletion

There are 2 main reasons why we perform deletion on fact tables: 1. because the data extraction approach is fixed-period extraction, and 2. to accommodate reverse transactions such as cancellations. No 1 is usually physical deletion and no 2 is usually logical deletion. Let's discuss them one by one, using examples to clarify.

No 1, the loading approach. In one of the data warehousing projects I've been involved with, we had difficulties extracting a transaction table from the source system incrementally, because the date stamp was not very reliable. We tried a 3 week tolerance, i.e. where last updated date or created date is within the last 3 weeks, but we still found some leakage, i.e. a few records were updated without the date stamp columns being updated. Please refer to this article for a discussion about incremental extraction and leakage. Luckily, the source system did not allow users to update records that were more than 6 months old. When a user tried to update a transaction record that was created more than 6 months ago, the system displayed an error, something like "You can not modify this record." So our approach for extracting this transaction table to the stage was to get 6 months of data every time. Consequently, the approach for loading the fact table from stage was to delete the rows on the fact table that exist in the stage, then reload all the stage records into the fact table. This is effectively the same as updating those records, but we found that it was quicker to delete and then reload, as sketched below. We identify the records on the fact table that exist in the stage by comparing all the dimension key columns, i.e. the composite natural key of the source transaction table.
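A minimal sketch of that delete-then-reload approach, assuming the stage rows have already been keyed into a temp table as in step 2 of the basic steps:

-- remove the fact rows that are about to be reloaded, matching on the
-- composite keys
DELETE f
FROM dw.dbo.fact_sales f
INNER JOIN #fact_sales t
ON f.date_key = t.date_key
AND f.product_key = t.product_key
AND f.order_id = t.order_id

-- then reload everything that is in the stage window
INSERT INTO dw.dbo.fact_sales
(date_key, product_key, order_id, quantity, price, load_time)
SELECT date_key, product_key, order_id, quantity, price, load_time
FROM #fact_sales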

No 2, accommodating reverse transactions. Sometimes in the source system we have a main transaction table containing normal transactions and a smaller secondary transaction table containing cancellation records. We have 2 options for loading this kind of table into the fact table in the data warehouse: either we load the cancellation as a new record with negative measures, or we load it as a logical deletion. Reverse transactions such as refunds and credit notes need to be implemented as negative measures, but for cancellations we have the 2 options. Each approach has its own advantages and disadvantages. Which one is better depends on the situation, for example whether we will be loading from that fact table into OLAP cubes, whether we will be creating a summary fact table, whether we need the historical dates, and whether the secondary source table contains complete data. If we decide to go for logical deletion, then for each cancellation record that exists on the secondary source table we mark the logical delete column on the fact table record. All processes further down the stream, such as loading into data marts, creating summary tables or loading into OLAP cubes, need to be aware of this column and handle it properly.
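A minimal sketch of the logical deletion option (the cancellation stage table and the deleted_flag column are assumptions for illustration; @current_load_time is the batch variable from the basic steps):

-- mark fact rows that have a cancellation record in the secondary source table
UPDATE f
SET f.deleted_flag = 'Y',
f.load_time = @current_load_time
FROM dw.dbo.fact_sales f
INNER JOIN stg.dbo.order_cancellation c
ON f.order_id = c.order_id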

Deduplication

When loading records from stage into fact tables, sometimes we have duplicate records. For example: we declare that the grain of the fact table is one row per day for every product for each store, but we find this on the stage table of the fact table:

date product_id store_id quantity
19/8/2006 BGCKZ 309 30
19/8/2006 BGCKZ 309 1

Do we add them up, take the maximum, take the minimum, or take the average? First we need to understand why it happens; in this case I have always found it useful to go back to the business users or the source system expert. The second record could be an error, in which case we take the earliest record. Or it could be a correction, in which case we sum them up. Or there can only be one measurement per day, so two or more records must be a system error, and they want us to take an average. Whatever the business rule is, we need to document it, obtain sign off and then implement it in the code, as sketched below. This process is called deduplication. Duplication normally happens when the source system allows it, i.e. it does not have the necessary constraints in the database to enforce uniqueness at the grain.
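For instance, if the agreed rule were to sum the duplicates up to the declared grain, the deduplication could be a simple GROUP BY; if the rule were to take the earliest record, a ROW_NUMBER over the grain works on SQL Server 2005. Both sketches assume an illustrative stage table stg.dbo.daily_stock with a load_time column to order by:

-- rule: add the duplicates up to the grain (1 row per day, product, store)
SELECT [date], product_id, store_id, SUM(quantity) AS quantity
FROM stg.dbo.daily_stock
GROUP BY [date], product_id, store_id

-- rule: take the earliest record per grain
SELECT [date], product_id, store_id, quantity
FROM
( SELECT *, ROW_NUMBER() OVER
  (PARTITION BY [date], product_id, store_id ORDER BY load_time) AS rn
  FROM stg.dbo.daily_stock ) x
WHERE rn = 1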

Deduplication does not only happen in fact tables, but also in dimension tables, especially MDM dimensions such as customer. Deduplication can also occur when the fact table is loaded from 2 or more different source systems. Many data quality tools, such as Trillium, DataFlux and DQ Global, have facilities to deduplicate data.

Fact Table Indexes

Fact tables can be very large; they can have millions of rows. To improve query performance, fact tables are normally indexed. The clustered index of a fact table is normally a composite of the dimensional keys, or the fact key column if there is one. For the non-clustered indexes, deciding which columns to index depends on how the fact table is used: whether it is queried directly by end users, used for populating OLAP cubes, or used by reports. SQL Profiler and the Index Tuning Wizard are useful for understanding which indexes would improve query performance.

If our fact table contains 100,000 rows or less, we just load the fact table with the indexes on; there is no need to worry about dropping indexes when loading. If our fact table contains more than 1 million rows, it may be quicker to drop the indexes before loading the data and recreate them afterwards. If the load is less than 10% of the fact table length (i.e. the number of rows in the fact table), generally speaking we don't need to worry about dropping indexes - chances are we would decrease loading performance by doing so. But if we are loading 20% or more, we may want to consider dropping and recreating indexes. It is very difficult to generalise this, as the performance differs depending on which indexes we have and which loading operations we perform; hence we always need to test to prove that our load is significantly improved by dropping and recreating indexes before implementing it in production.
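A minimal sketch of dropping and recreating a non-clustered index around a large load (the index name and column are illustrative, using SQL Server 2005 syntax):

-- drop the non-clustered index before a large load
DROP INDEX ix_fact_sales_product_key ON dw.dbo.fact_sales

-- ... load the fact table here ...

-- recreate it afterwards
CREATE NONCLUSTERED INDEX ix_fact_sales_product_key
ON dw.dbo.fact_sales (product_key)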

Fact Table Partitioning

Table partitioning is new in SQL Server 2005, but has been in Oracle since 1997, introduced in Oracle 8 and improved in versions 8i and 9i. In SQL Server 2000 we only have partitioned views, not partitioned tables. This article provides a good overview of partitioning in SQL Server 2005, and Joy Mundy wrote an article about partitioning on SQL Server 2000. For Oracle, see this document and this manual. In DB2, table partitioning was introduced in version 8 in 2002 and greatly improved in version 9, released in July 2006; Paul McInerney describes DB2 partitioning features for data warehousing in this article.

Fact table partitioning is basically dividing the fact table into several physical parts. Each part is called a partition. Each partition is ideally located in a different physical data file, and ideally each file is located on a different disk. For example, we can divide the fact table so that rows for January 2006 are located on partition 1, rows for February on partition 2, and so on. Each partition can be loaded separately, in parallel. Partitioning can improve query performance and increase availability. We can add a new partition, drop an existing partition, truncate (empty) a particular partition, move a partition, split a partition, merge several partitions, and exchange/switch partitions. All this improves maintenance operations and greatly simplifies administration tasks.

If our fact table contains more than 1 million rows, we need to consider partitioning it. It can improve the loading performance significantly - we are talking 5-10 times quicker - because we load new data into just 1 partition, which is, say, 12 times smaller than the whole fact table. There are also techniques to load data into a new, empty table with exactly the same structure as the fact table, then switch/exchange that new table with a partition of the main table, as sketched below. Queries can be a lot quicker too. We can also partition indexes so that each part of the index serves only one table partition, which allows more processes to run in parallel. Partitioning can really be a life saver for a data warehouse.
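A minimal sketch of a monthly partitioned fact table and a partition switch on SQL Server 2005 (the boundary values, names and partition number are illustrative):

-- one partition per month, partitioned on date_key
CREATE PARTITION FUNCTION pf_fact_sales (int)
AS RANGE RIGHT FOR VALUES (2400, 2431, 2461)

CREATE PARTITION SCHEME ps_fact_sales
AS PARTITION pf_fact_sales ALL TO ([PRIMARY])

-- fact_sales would be created ON ps_fact_sales(date_key); the new month is
-- loaded into an empty table with the same structure and constraints,
-- then switched in as the newest partition
ALTER TABLE dbo.fact_sales_load
SWITCH TO dbo.fact_sales PARTITION 4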

Internal Data Warehouse Entities

Some data, such as performance targets or budgets, does not exist in any source system but needs to exist in the data warehouse; hence it is known as internal data warehouse entities. For example, for each store we can calculate an out of stock percentage, i.e. how many times a required product is out of stock. There may be a consensus that it needs to be under 5%. This 5% is not written anywhere in any source system, but it is required by the reports, so it needs to exist in the data warehouse somewhere.

We have 2 options for getting this performance target into the warehouse: 1) build an application which stores or persists the data in its own database, and from there ETL it into staging and into the warehouse, or 2) put it in a simple spreadsheet and import it into the warehouse. Either way, we should not allow the data to be entered directly into the data warehouse, for example using direct SQL statements or through SQL Server Enterprise Manager. All data stored in the warehouse must go through the data quality / data firewall rules, so that any exceptions to the rules are recorded and reported.

Loading Snapshot and Delta Fact Tables

A snapshot fact table is a fact table that contains a measurement of status at a specific point in time. For example:

a fact table that contains actual inventory level for each product at 9 am every day

a fact table that contains balances of every saving account in all branches on the first day of every month

a fact table that contains the details of all accounts that each customer has every day

There are 2 kinds of snapshot fact tables: periodic and accumulating. Periodic snapshot fact tables contain regular statements of status; all 3 examples above are periodic snapshot fact tables. Accumulating snapshot fact tables show the status at any given moment and are useful for tracking items with a certain lifetime, for example the status of order lines. Please refer to this Ralph Kimball article for more details about snapshot fact tables. An example of an accumulating snapshot can be found here.

How do we load periodic snapshot fact tables? We extract all records that satisfy the criteria from the source table at a certain point in time. For example, take all active rows from the account table, including the balance, automatically once a month on the first day. Loading an accumulating snapshot is rather different: we still take all records from the source table that satisfy the criteria, but then we update the fact table. For an accumulating snapshot of purchases, every time there is a new piece of information about a particular purchase we update the fact table record; we only insert a new record into the fact table when there is a new purchase requisition.
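A minimal sketch of a monthly periodic snapshot load (the table and column names are illustrative; the date key and load time variables are assumed to be set as in the basic steps above):

-- run on the first day of every month: all active accounts with their balances
INSERT INTO dw.dbo.fact_account_balance
(snapshot_date_key, account_key, balance, load_time)
SELECT @snapshot_date_key, ISNULL(a.account_key, 0), s.balance, @current_load_time
FROM stg.dbo.account s
LEFT JOIN dw.dbo.dim_account a ON s.account_id = a.account_id
WHERE s.status = 'Active'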

A delta fact table is a fact table that we produce by comparing the condition of a source table at 2 different points in time, for example the account table. This table in the source system contains all customer accounts. Say on Monday we have 100,000 active accounts and on Tuesday we have 100,001 active accounts: 2 new accounts were opened, 1 account was closed and 3 accounts were changed. Of the 3 accounts changed, 1 changed its interest rate (from 5.75% to 5.50%), 1 changed its credit limit (from 3000 to 3500), and 1 changed its interest payment frequency (from daily to monthly). On the delta fact table there will be 6 new rows today (suppose today is 6th Nov 2006). Legend for change_type: NA = new account, CA = closed account, IR = Interest Rate, CL = Credit Limit, IPF = Interest Payment Frequency.

account_key change_date change_type IR_before IR_after CL_before CL_after IPF_before IPF_after
49912 6/11/2006 NA
26077 6/11/2006 CA
32109 6/11/2006 IR 5.75 5.50
19387 6/11/2006 CL 3000 3500
29462 6/11/2006 IPF D M

To populate the delta fact table, we download the source account table every day, compare today's copy with yesterday's copy, and enter the differences into the delta fact table, as sketched below.
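A minimal sketch of producing the interest rate rows of this delta fact table by comparing today's copy with yesterday's copy (the stage table names are illustrative; new and closed accounts would be detected with NOT EXISTS checks in the same fashion):

INSERT INTO dw.dbo.fact_account_delta
(account_key, change_date, change_type, IR_before, IR_after)
SELECT a.account_key, @current_run, 'IR', y.interest_rate, t.interest_rate
FROM stg.dbo.account_today t
INNER JOIN stg.dbo.account_yesterday y ON t.account_id = y.account_id
INNER JOIN dw.dbo.dim_account a ON t.account_id = a.account_id
WHERE t.interest_rate <> y.interest_rate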

Purging Or Pruning A Fact Table

Some people call it purging, some call it pruning. Purging or pruning a fact table is the activity of removing from the fact table rows that satisfy certain criteria. To give us an idea, below are some examples of purging criteria:

older than 2 years

older than 2 years and status is not active

keep daily records for the last 4 weeks then Monday only for the last 2 years

Examples 1 and 2 are normally for transaction fact tables, and example 3 is normally applicable to periodic snapshots. Purging becomes important when our data warehouse is a few years old; it can improve query and load performance significantly. If the fact table is partitioned, it is a common exercise to archive the oldest partitions (say, older than 5 years, partitioned by month) and then drop or truncate those partitions. Archiving can be done by exporting the partition (using exp in Oracle) or by backing up the partition.
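A minimal sketch of purging criterion 1 above (assuming dim_date carries a sql_date column, as mentioned in the notes earlier):

-- remove fact rows older than 2 years
DELETE f
FROM dw.dbo.fact_sales f
INNER JOIN dw.dbo.dim_date d ON f.date_key = d.date_key
WHERE d.sql_date < DATEADD(year, -2, GETDATE())

-- on a partitioned fact table, archive the oldest partition and then
-- switch it out or truncate it instead of deleting row by row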

Reloading Fact Table History

Soon after the data warehouse is in production, users may ask us to reload, say, the last 2 years of history from the OLTP source system - for example 2 years of history from the order tables. It is not difficult to reload the history; normally we just need to modify the time window on the ETL package. For example, if we use SSIS or DTS, we could adjust the time window in a table in the metadata database where the Last Successful Extraction Time (LSET) and Current Extraction Time (CET) are kept. See this article for the concept of LSET and CET.

The problem with reloading fact table history is that we need to reload all the related dimensions too, and most OLTP systems do not keep this history. All orders in the last 2 years? Yes, no problem, they have them. But all customer, product and store details over the last 2 years? Often the customer and product history is not kept; the details are overwritten with new ones. The SCD concept is not known or implemented in OLTP systems, so the history is lost. The best approach is probably to load the latest state of the product, customer and store tables from the source system, and for any order record for which we can't find a match in the product, customer or store table, reference the unknown record. This way the sums of the measures in the fact table are still valid and the referential integrity is still intact.

Vincent Rainardi

6th November 2006
