SQLServerCentral is supported by Red Gate Software Ltd.
Strategy of processing dimensions and partitions (SSAS 2005)
Posted Monday, January 20, 2014 9:12 AM
SSC Rookie

Group: General Forum Members
Last Login: Today @ 8:38 AM
Points: 48, Visits: 233
In my case there are a couple of dimensions and a couple of measure groups. Each measure group has several partitions, partitioned by year (query binding). The dimensions will have occasional updates/inserts, and the facts will have inserts only, daily. What's the best strategy to process the dimensions and facts? I am thinking of using an XMLA script (scheduled in a SQL Server Agent job) to run ProcessUpdate on all dimensions and ProcessFull on the current year's partitions daily, leaving the partitions of previous years untouched. But I also read somewhere that ProcessUpdate (for dimensions, in my case) is not as good as ProcessFull because indexes may not be rebuilt (not sure about this). On the other hand, if I use ProcessFull on the dimensions, all related partitions will be invalidated, which means the partitions of previous years will be unavailable until I run ProcessFull on them too.

I believe this is a very common scenario, and I'm wondering what the general/best approach is.
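For reference, the daily job described above could be sketched as the following XMLA batch (the database, dimension, and partition IDs here are made up for illustration; commands inside a Batch without a Parallel element execute sequentially, so the dimension update runs before the partition is processed):

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Pick up inserts/updates in the dimension without invalidating partitions -->
  <Process>
    <Object>
      <DatabaseID>MyOlapDb</DatabaseID>       <!-- hypothetical IDs -->
      <DimensionID>DimCustomer</DimensionID>
    </Object>
    <Type>ProcessUpdate</Type>
  </Process>
  <!-- Fully reprocess only the current year's partition -->
  <Process>
    <Object>
      <DatabaseID>MyOlapDb</DatabaseID>
      <CubeID>Sales</CubeID>
      <MeasureGroupID>FactSales</MeasureGroupID>
      <PartitionID>FactSales_2014</PartitionID>
    </Object>
    <Type>ProcessFull</Type>
  </Process>
</Batch>
```

A script like this can be pasted into a SQL Server Agent job step of type "SQL Server Analysis Services Command" and scheduled to run after the nightly fact load.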
Post #1532718
Posted Tuesday, January 21, 2014 2:59 AM
SSC-Addicted

Group: General Forum Members
Last Login: Thursday, July 17, 2014 9:15 AM
Points: 451, Visits: 846
The way I have handled this in the past is to run ProcessUpdate on the dimensions, followed by processing the latest partition in the facts. The final step is a ProcessDefault or ProcessIndexes on the entire SSAS database; this ensures that all of your aggregations are rebuilt correctly. It will increase processing time, but it makes sure that your query performance stays as good as it should be (that is, if you have the right aggregations designed!).
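That final step can be a very small XMLA command of its own, run against the whole database so that any aggregations and bitmap indexes dropped by the dimension ProcessUpdate are rebuilt (the database ID below is hypothetical):

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Rebuild indexes and aggregations across every processed partition -->
  <Process>
    <Object>
      <DatabaseID>MyOlapDb</DatabaseID>  <!-- hypothetical ID -->
    </Object>
    <Type>ProcessIndexes</Type>
  </Process>
</Batch>
```

ProcessDefault would also work here: it only does whatever is needed to bring each object to a fully processed state, so it is cheaper when nothing was invalidated.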

More info from a Chris Webb presentation here:
http://www.dvbi.ru/Links/Useful_Books/tabid/109/FileId/3562/language/ru-RU/Default.aspx

Post #1532928
Posted Tuesday, January 21, 2014 5:18 AM
SSC Rookie

Group: General Forum Members
Last Login: Today @ 8:38 AM
Points: 48, Visits: 233
Thanks for the reply. That's a very good point. I also found this:

http://social.msdn.microsoft.com/Forums/sqlserver/en-US/34175435-7020-4aee-8c93-0535d5c21329/ssas-cube-processing-best-practice-sql-server-agent-job?forum=sqlanalysisservices

The poster in that thread seems to have a similar question. I agree there is no single best practice for every situation.
Post #1532975