Urgent help needed on MDX or SSAS cube design
Posted Thursday, May 16, 2013 2:52 PM
Grasshopper


Group: General Forum Members
Last Login: Wednesday, August 6, 2014 11:48 AM
Points: 10, Visits: 112
Hi, I just recently found that we have duplicate records in our fact table, and they are allowed. That means the combination of all dimension keys does not uniquely determine a row in the fact table. Consider the following data:

Fact Test
DateKey  TestKey  CustomerKey  Amount
5        1        1            16
5        1        2            10
5        1        2            4

When I tried to get the average Amount by DateKey and TestKey, I used the following MDX query:

WITH MEMBER Measures.AvgAmountCal AS (
    AVG(
        {[Dim Test].[Test Key].CURRENTMEMBER * [Dim Customer].[Customer Key].CHILDREN * [Dim Date].[Date Key].CURRENTMEMBER},
        Measures.[Amount]
    )
)
SELECT
    {Measures.[Amount], [Measures].[Fact Test Count], Measures.AvgAmountCal} ON 0,
    NONEMPTYCROSSJOIN([Dim Date].[Date Key].&[5], [Dim Test].[Test Key].CHILDREN) ON 1
FROM [Test DB]

However, the result is:

DateKey  TestKey  Amount  Fact Test Count  AvgAmountCal
5        1        30      3                15

It looks like SSAS aggregates the duplicate-key rows into one and counts them as a single row. Of course the average can be fixed by using Amount / [Fact Test Count]. But how can I calculate a standard deviation or a top percentile? Those give me the wrong values. I have tried several approaches, including changing the cube's aggregation method from Sum to None, but that still does not solve the problem. This is very urgent, so please share any input you have. Thanks in advance!
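
For reference, the Amount / [Fact Test Count] workaround I mean would look something like this (just a sketch, reusing the same measure and dimension names as in the query above):

WITH MEMBER Measures.AvgAmountCal AS
    -- divide the summed Amount by the physical row count to get the true per-row average
    IIF(Measures.[Fact Test Count] = 0, NULL,
        Measures.[Amount] / Measures.[Fact Test Count])
SELECT
    {Measures.[Amount], [Measures].[Fact Test Count], Measures.AvgAmountCal} ON 0,
    NONEMPTYCROSSJOIN([Dim Date].[Date Key].&[5], [Dim Test].[Test Key].CHILDREN) ON 1
FROM [Test DB]

This returns 10 for the sample data above, but I still don't see how to do the same trick for standard deviation or percentiles.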
Post #1453788
Posted Thursday, May 16, 2013 7:03 PM


SSCommitted


Group: Moderators
Last Login: Wednesday, August 6, 2014 8:12 AM
Points: 1,815, Visits: 3,456
You could Google it to be sure, but I'm reasonably sure the engine does exactly that: it aggregates records that would otherwise be duplicates. Without another key value to differentiate them, it makes good sense to aggregate them to save space, etc.


Steve.
Post #1453818
Posted Thursday, May 16, 2013 7:46 PM
Grasshopper


Group: General Forum Members
Last Login: Wednesday, August 6, 2014 11:48 AM
Points: 10, Visits: 112
Thanks, Steve, for your quick input. So there is no way to work around it, right? I think this situation is very common, since most fact table designs allow some value (like -1) to represent unknown keys. If such rows are aggregated into one (when the other keys are the same), that will produce incorrect calculations when rolling up, without being detected (as in this case). Am I right?

Thanks again for your quick reply, Steve.
Post #1453820
Posted Friday, May 17, 2013 9:00 AM


SSCommitted


Group: Moderators
Last Login: Wednesday, August 6, 2014 8:12 AM
Points: 1,815, Visits: 3,456
That sounds right, yes. If there isn't anything to uniquely identify the rows (a different invoice #, transaction id, time of day, etc.), then for all intents and purposes they're the "same thing" and *should* be aggregated.

It sounds like there *should* be something that identifies these as separate events but your DW is not capturing it.



Steve.
Post #1454050
Posted Thursday, May 30, 2013 3:08 PM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Monday, July 28, 2014 8:51 AM
Points: 132, Visits: 581
Once the data is in SSAS, the most granular level you can get to is the dimension keys. As Steve mentioned, to analyze the two records separately you'll need another dimension that distinguishes them. SSAS is meant to be a tool for aggregate data analysis, so when it stores data it loses all the record-level detail.

Creating a SalesID/TransactionID that uniquely identifies each record, and building a dimension on it, will let you perform the calculations you want, but you'll pay a large storage and performance cost, since you'll no longer be storing only aggregate data in SSAS.
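
For example, with a hypothetical [Dim Transaction].[Transaction Key] attribute in place, the standard deviation could be taken over the individual transactions, something along these lines (names are illustrative only, based on the cube in the original post):

WITH MEMBER Measures.StdevAmount AS
    -- standard deviation of Amount across the individual (hypothetical) transaction members
    STDEV(
        EXISTING [Dim Transaction].[Transaction Key].[Transaction Key].MEMBERS,
        Measures.[Amount]
    )
SELECT
    {Measures.[Amount], Measures.StdevAmount} ON 0,
    NONEMPTYCROSSJOIN([Dim Date].[Date Key].&[5], [Dim Test].[Test Key].CHILDREN) ON 1
FROM [Test DB]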

If you need to perform standard deviation and percentile calculations across individual records, you'll probably just have to do it in SQL.
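
Something along these lines against the relational fact table would do it (just a sketch; it assumes a dbo.FactTest table shaped like the sample data above, and PERCENTILE_CONT needs SQL Server 2012 or later):

-- standard deviation and median of Amount per DateKey/TestKey, straight from the fact table
SELECT DISTINCT
    DateKey,
    TestKey,
    STDEV(Amount) OVER (PARTITION BY DateKey, TestKey) AS StdevAmount,
    PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY Amount)
        OVER (PARTITION BY DateKey, TestKey) AS MedianAmount
FROM dbo.FactTest;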

I've had implementations where users wanted to slice and dice against SSAS in Excel and then drill into details that were queried from SQL. I used ASSP to tie an action to a stored procedure that returned a data set based on the parameters passed from the Excel slice.
Post #1458470
Posted Friday, May 31, 2013 6:49 AM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Friday, August 29, 2014 6:48 AM
Points: 112, Visits: 75
The short answer is that your fact table violates proper granularity. By definition, a single fact must be unique at the granularity of the dimensions. If it isn't unique, then you are either missing a dimension or have not decomposed a dimension far enough. A cube aggregates measures across the dimensions; there is no such thing as a row in a cube, so trying to write MDX based on SQL concepts will get you into trouble every time.
Post #1458647