sql12 tabular mode design
Posted Wednesday, January 23, 2013 5:43 PM
SSC-Addicted

Group: General Forum Members
Last Login: Tuesday, September 2, 2014 5:46 PM
Points: 498, Visits: 307
Hello, I'm finding that a complicated tabular model takes a fair amount of extra work to design compared with a traditional OLAP cube, and the designer also feels clunkier to work with. For example, when renaming fact/dimension tables or columns there is no auto-refresh of the data source view; you have to go into each table's properties, refresh there, and then fix up your affected formulas.

With, say, 10 facts and 20 dimensions, the designer creates a lot of "inactive" relationships, and each of these requires a custom measure to be created, e.g. CountofInactiveRelationField:=CALCULATE(COUNT(Table[column]), USERELATIONSHIP(Table[column], LookupTable[LookupColumn])). Why did they design it this way? If we don't do this, the counts are not correct across the inactive relationship. Traditionally, to get a simple count rolled up by each dimension there is one standard count measure, and the rollup happens automatically in the OLAP cube. This becomes a real pain when there are 10+ formulas needed for each of the facts!

Is this the design experience other developers are having out there, or are there workarounds for this? It seems the memory/compression speed gains for data loading and retrieval are offset by the bad design principles that have to be applied to get them.
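To illustrate the pattern I'm describing (table and column names here are made up for the example): the active relationship rolls up automatically, but every inactive one needs its own measure that activates it with USERELATIONSHIP inside CALCULATE.

```
-- Assumed example model: fact table FactSales with two date keys,
-- OrderDateKey (the active relationship to DimDate[DateKey]) and
-- ShipDateKey (an inactive relationship to the same DimDate[DateKey]).

-- The active relationship filters automatically, so one measure suffices:
Sales Count := COUNTROWS ( FactSales )

-- The inactive relationship must be activated explicitly, per measure,
-- or counts sliced by ship date will silently use the order date instead:
Sales Count by Ship Date :=
CALCULATE (
    COUNTROWS ( FactSales ),
    USERELATIONSHIP ( FactSales[ShipDateKey], DimDate[DateKey] )
)
```

Multiply that second pattern by every inactive relationship on every fact table and you get the measure explosion I mean.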
Post #1410852
Posted Thursday, February 28, 2013 1:35 PM
SSC-Enthusiastic

Group: General Forum Members
Last Login: Wednesday, July 30, 2014 2:03 PM
Points: 124, Visits: 488
Give me a few months and I'll be able to share the frustration! Still waiting on approval for my 2012 Enterprise license.
Post #1425273
Posted Wednesday, March 20, 2013 9:46 AM
Old Hand

Group: General Forum Members
Last Login: Monday, October 20, 2014 8:23 AM
Points: 328, Visits: 1,999
When working in Tabular mode you need to move away from the concept of fact and dimension tables. Each of those joins requires an index to be created on each table, which is going to hurt the compression.

Denormalising towards a single table will give the best performance and compression.

Personally I think the use of tabular mode should be considered very carefully. It is NOT a replacement for dimensional OLAP, and you should not try to store your entire data warehouse in it.

Instead, think of it as an easy-to-set-up analytical space for smaller data marts that you don't want to go to the effort of modelling in a dimensional cube. You can prototype in PowerPivot in Excel and then import that model into Visual Studio when you're ready to deploy.
Post #1433319