August 25, 2008 at 3:41 pm
Hello,
I'll describe my scenario:
I have a fact table (FT) and 4 small related tables. The small tables are used to build the dimensions for the cube.
Daily, around 200,000 new rows are added to the FT, while the 4 small tables change very little (about 10 records are added or modified).
I'm using weekly partitions in the cube for the FT and process them using an incremental update.
The problem appears when changes are applied to any of the small tables. If I process the dimension, I have to process the partitions too, including the partitions related to past weeks.
I don't know if there is a way to avoid this useless reprocessing...
All ideas are welcome,
Regards Mariano
August 26, 2008 at 5:25 am
When a dimension table is updated, you should only have to do a "Process Update" on the dimension. This keeps the fact data valid; the affected aggregations and indexes on the fact data are then rebuilt with a "Process Indexes" or a "Process Default", so there is no need to fully reprocess the old partitions.
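In case it helps to see the shape of the command: a "Process Update" issued via XMLA is a standard `Process` element with `Type` set to `ProcessUpdate`. Below is a minimal Python sketch that only builds the command string (it does not connect to the server); the database and dimension IDs ("MyOlapDb", "DimProduct") are made-up placeholders, not names from this thread.

```python
# Sketch: build an XMLA ProcessUpdate command for a dimension.
# The IDs used below are hypothetical placeholders.
XMLA_NS = "http://schemas.microsoft.com/analysisservices/2003/engine"

def process_update_dimension(database_id: str, dimension_id: str) -> str:
    """Return the XMLA Process command that re-reads a dimension table
    in place (Type = ProcessUpdate) without invalidating fact data."""
    return (
        f'<Process xmlns="{XMLA_NS}">'
        f"<Object>"
        f"<DatabaseID>{database_id}</DatabaseID>"
        f"<DimensionID>{dimension_id}</DimensionID>"
        f"</Object>"
        f"<Type>ProcessUpdate</Type>"
        f"</Process>"
    )

print(process_update_dimension("MyOlapDb", "DimProduct"))
```

The same string could then be sent to Analysis Services through whatever XMLA transport your project already uses.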
August 26, 2008 at 2:48 pm
Hello Michael,
Thank you for your reply!!!
I'm working on a software project to update the cubes programmatically.
So I'll program the dimensions to be updated with the ProcessUpdate option, and the cube partitions will be updated with ProcessIndexes (ProcessUpdate is not supported for cube partitions).
But the current partition, which must include the new records, will be updated with ProcessAdd in order to perform the incremental update.
Do you think this strategy is correct?
I'd appreciate any comments or new ideas.
Thank you again, Mariano
August 27, 2008 at 4:45 am
Yes.
The order is important.
Update the dimensions.
Incrementally add new records to the fact groups.
Process "Default" - this will rebuild any indexes or aggregations that the other two processes messed up.
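The three-step order above can be sketched as a single ordered XMLA Batch. This is only an illustration in Python that assembles the command strings; every ID in it ("MyOlapDb", "SalesCube", "FactSales", "FactSales_Week35", "DimProduct") is a hypothetical placeholder, and a real ProcessAdd on a partition would also need bindings that point at only the new fact rows.

```python
# Sketch: assemble an XMLA Batch in the order that matters.
# All object IDs are hypothetical placeholders.
XMLA_NS = "http://schemas.microsoft.com/analysisservices/2003/engine"

def _process(object_xml: str, process_type: str) -> str:
    """Wrap an object reference and a processing type in a Process command."""
    return (f'<Process xmlns="{XMLA_NS}">'
            f"<Object>{object_xml}</Object>"
            f"<Type>{process_type}</Type></Process>")

def build_weekly_batch(db: str, dims: list, cube: str,
                       measure_group: str, current_partition: str) -> str:
    """Build the batch in the order discussed in this thread:
    1. ProcessUpdate each changed dimension,
    2. ProcessAdd the current week's partition (new fact rows only),
    3. ProcessDefault the cube to rebuild the indexes/aggregations
       that the first two steps dropped."""
    commands = []
    for d in dims:
        commands.append(_process(
            f"<DatabaseID>{db}</DatabaseID><DimensionID>{d}</DimensionID>",
            "ProcessUpdate"))
    commands.append(_process(
        f"<DatabaseID>{db}</DatabaseID><CubeID>{cube}</CubeID>"
        f"<MeasureGroupID>{measure_group}</MeasureGroupID>"
        f"<PartitionID>{current_partition}</PartitionID>",
        "ProcessAdd"))
    commands.append(_process(
        f"<DatabaseID>{db}</DatabaseID><CubeID>{cube}</CubeID>",
        "ProcessDefault"))
    return f'<Batch xmlns="{XMLA_NS}">' + "".join(commands) + "</Batch>"

batch = build_weekly_batch("MyOlapDb", ["DimProduct"], "SalesCube",
                           "FactSales", "FactSales_Week35")
print(batch)
```

Keeping all three commands inside one Batch also means the server sees them as one transaction-style unit, which matches the "order is important" point above.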