Number of natural/business keys in dimension tables

  • My scenario is one where the destination dimension table is derived from multiple source tables, with changes to those source tables occurring independently of each other.

    I imagine this is a fairly common scenario, but I will also add that low data latency is important.

    Would the norm be to include each natural key in the dimension table and handle the ETL for each source table individually (with each therefore potentially triggering an SCD change)?

    I have a wide dimension that will be fed from an excessive number of source tables (sketched at the end of this post).

    I'm wary of maintaining indexes on all of those natural keys, and of the locking caused by parallel ETL processes updating the table (unless I serialize them).

    Should the dimension have an excessive number of business keys if the source data warrants it?

    Does snowflaking make sense under the circumstances?

    Is creating mini-dimensions and having more keys on fact tables a better choice?

    Do others keep the source business keys in staging, and carry only the business key that represents the dimension's grain into the dimension itself?

    Would others merge the data at the source and assemble a full row before it reaches staging?
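
    For concreteness, a stripped-down sketch of the kind of wide dimension I mean; all table and column names here are invented for illustration:

    ```sql
    -- Hypothetical wide dimension fed by several source systems; each
    -- source contributes its own natural/business key plus some attributes.
    CREATE TABLE dbo.DimCustomer
    (
        CustomerKey      INT IDENTITY(1,1) NOT NULL PRIMARY KEY, -- surrogate key
        CrmCustomerId    INT              NOT NULL,              -- natural key from the CRM source
        BillingAccountNo VARCHAR(20)      NOT NULL,              -- natural key from the billing source
        WebProfileGuid   UNIQUEIDENTIFIER NULL,                  -- natural key from the web source
        CustomerName     NVARCHAR(100)    NULL,                  -- attribute fed by CRM
        CreditLimit      DECIMAL(12,2)    NULL,                  -- attribute fed by billing
        PreferredChannel VARCHAR(20)      NULL,                  -- attribute fed by web
        EffectiveDate    DATETIME         NOT NULL,              -- SCD type 2 bookkeeping
        ExpiryDate       DATETIME         NULL,
        IsCurrent        BIT              NOT NULL DEFAULT (1)
    );

    -- Each source's ETL needs an index on its own natural key to locate rows,
    -- which is where the index-maintenance and locking worry comes from.
    CREATE INDEX IX_DimCustomer_Crm     ON dbo.DimCustomer (CrmCustomerId);
    CREATE INDEX IX_DimCustomer_Billing ON dbo.DimCustomer (BillingAccountNo);
    CREATE INDEX IX_DimCustomer_Web     ON dbo.DimCustomer (WebProfileGuid);
    ```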

    Thanks for any thoughts

  • Daniel Hallam (9/19/2012)


    Should the dimension have an excessive number of business keys if the source data warrants it?

    I would say it depends less on whether the source data warrants it and more on whether the questions the users are asking of the data warrant it.

    Does snowflaking make sense under the circumstances?

    What you're describing is exactly why snowflaking became more common. The question really comes down to this: for the most common questions asked of the data mart, can users work with some of the dimensions without the others? If yes, snowflake; if not, don't.
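
    As a rough illustration of that split (product/category names are hypothetical), the sub-dimension gets its own table, so its ETL and its queries never have to touch the parent dimension:

    ```sql
    -- Snowflaked design: category attributes live in their own table, so the
    -- category ETL can run independently of the product dimension's ETL.
    CREATE TABLE dbo.DimCategory
    (
        CategoryKey  INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        CategoryCode VARCHAR(10)  NOT NULL,  -- natural key from the category source
        CategoryName NVARCHAR(50) NOT NULL
    );

    CREATE TABLE dbo.DimProduct
    (
        ProductKey  INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        ProductCode VARCHAR(20)   NOT NULL,  -- natural key from the product source
        ProductName NVARCHAR(100) NOT NULL,
        CategoryKey INT NOT NULL REFERENCES dbo.DimCategory (CategoryKey)
    );
    ```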

    Is creating mini-dimensions and having more keys on fact tables a better choice?

    Honestly, that's going to depend on just how wide the dimension is going to get. I would personally need more information to make that judgement call, because that's what it comes down to.
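
    For reference, a minimal sketch of the mini-dimension pattern being weighed here (names are hypothetical): the volatile attributes move into a small banded dimension whose key sits on the fact table alongside the main customer key, so their churn never rewrites the wide dimension.

    ```sql
    -- Mini-dimension: fast-changing attributes are banded into a small table.
    CREATE TABLE dbo.DimCustomerProfile
    (
        CustomerProfileKey INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        CreditBand         VARCHAR(10) NOT NULL,  -- e.g. 'LOW', 'MED', 'HIGH'
        PreferredChannel   VARCHAR(20) NOT NULL
    );

    -- The fact row carries both keys; a profile change just means the next
    -- fact row points at a different (pre-existing) profile key.
    CREATE TABLE dbo.FactSales
    (
        CustomerKey        INT NOT NULL,            -- stable, slow-changing dimension
        CustomerProfileKey INT NOT NULL,            -- mini-dimension key
        SaleDateKey        INT NOT NULL,
        SaleAmount         DECIMAL(12,2) NOT NULL
    );
    ```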

    Would others merge data at the source and get a full row before getting to staging?

    Under most circumstances, this is what I do. Then I use the RowID plus checksums to test for updates, stripping out anything that doesn't need to be modified. Whether the dimension is static or an SCD, you still need to check for change.
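
    A minimal sketch of that change test, reusing the hypothetical DimCustomer from the first post and assuming a staging table with the same natural key and a ChangeFlag column (all names illustrative; note that CHECKSUM can collide, so HASHBYTES is the safer choice where that matters):

    ```sql
    -- Flag staged rows whose attribute checksum differs from the current
    -- dimension row; only flagged rows go on to SCD processing.
    UPDATE s
    SET    s.ChangeFlag = 1
    FROM   stg.Customer AS s
    JOIN   dbo.DimCustomer AS d
      ON   d.CrmCustomerId = s.CrmCustomerId
     AND   d.IsCurrent = 1
    WHERE  CHECKSUM(s.CustomerName, s.CreditLimit, s.PreferredChannel)
        <> CHECKSUM(d.CustomerName, d.CreditLimit, d.PreferredChannel);
    ```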

    You've got a bit of a mess there. The only recommendation I can make is to start with a snowflake and then work toward combining the split leaves where the data is inseparable. For example, if you never really look up particular data without its category information as well, combine them. If you often look up by one or the other, leave them separate to ease ETL duties, particularly if you have a very low latency requirement.


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • I imagine this is a fairly common scenario

    I don't know that it's common at all, but more details would be helpful. You could consider separate dimensions, or maybe even a junk dimension (sketched below).
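
    A minimal sketch of the junk-dimension idea, with invented flag columns: the miscellaneous low-cardinality attributes are combined into one small table rather than each becoming its own dimension.

    ```sql
    -- Junk dimension: a handful of low-cardinality flags share one table.
    CREATE TABLE dbo.DimOrderFlags
    (
        OrderFlagsKey INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        IsRush        BIT         NOT NULL,
        IsGift        BIT         NOT NULL,
        PaymentType   VARCHAR(10) NOT NULL
    );

    -- Populate every valid combination up front; the fact load then only
    -- ever looks up an existing key.
    INSERT INTO dbo.DimOrderFlags (IsRush, IsGift, PaymentType)
    SELECT r.v, g.v, p.v
    FROM       (VALUES (0), (1)) AS r(v)
    CROSS JOIN (VALUES (0), (1)) AS g(v)
    CROSS JOIN (VALUES ('CARD'), ('CASH'), ('INVOICE')) AS p(v);
    ```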
