Updating Tabular Model

  • When I go to update my Tabular Model, it seems like I sometimes have to delete the model from the server before it will update there.  Has anyone experienced this before, and if so, how did you resolve it?

  • Is it that you just want to update the data and it's not updating? Sometimes the default cube processing option is not sufficient. I run into this when I've made a change and want the cube to update, but it doesn't, so I have to run a manual process (sketched below). Typically it's something outside of the cube that changed, like when I've changed a calculation in a view and the cube doesn't see anything different because the view's metadata is the same.
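
    A minimal sketch of that kind of manual full process, assuming the pythonnet package and the Tabular Object Model client library (Microsoft.AnalysisServices.Tabular, installed with SSMS or the AMO/TOM NuGet package) are available; the instance and database names are placeholders:

        import clr
        clr.AddReference("Microsoft.AnalysisServices.Tabular")
        from Microsoft.AnalysisServices.Tabular import Server, RefreshType

        server = Server()
        server.Connect("Data Source=localhost\\TABULAR")  # assumed instance name
        db = server.Databases.GetByName("SalesModel")     # assumed database name
        db.Model.RequestRefresh(RefreshType.Full)         # queue a Process Full
        db.Model.SaveChanges()                            # executes the refresh on the server
        server.Disconnect()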

  • In the most recent case I encountered, the source database had a reduced number of rows in the table I was using in my model, but when I attempted to deploy the model, the row count listed in the deployment window was the old number, not the new number shown in the designer.  Nor did the calculated measures based on that table reflect the change.

    I was using Power BI as a front end, and I tried refreshing Power BI, closing and reopening it, and creating a new .pbix and importing the data all over again.

  • What you have described is a Process Default rather than a Process Full. If it's not doing a full process, the row counts you see are the rows that are already in the cube, not what's in the data source. When the data changes, the cube is not aware of it. A normal nightly reprocess would typically pick up the changes. If you want to process the data now, you'll need to process full. (If you get into partitioning, then process just the updated partition, as in the sketch below.) Sometimes the default will end up doing a full process because you've made a change to the cube that requires it, like adding a new column, or, in your case, a fresh deployment, which is what happens when you drop the cube and redeploy.
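
    A hedged sketch of processing a single updated partition, under the same pythonnet/TOM assumptions as above; the table and partition names are assumptions:

        import clr
        clr.AddReference("Microsoft.AnalysisServices.Tabular")
        from Microsoft.AnalysisServices.Tabular import Server, RefreshType

        server = Server()
        server.Connect("Data Source=localhost\\TABULAR")
        model = server.Databases.GetByName("SalesModel").Model
        # Refresh only the partition whose source data changed
        model.Tables["Sales"].Partitions["Sales Current"].RequestRefresh(RefreshType.Full)
        model.SaveChanges()
        server.Disconnect()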

    There are two options to fix this. You can either change your deployment option or process the model manually after deployment. Personally, I set the project's processing option to "Do Not Process" and then process manually in whichever way I want (see the sketch below). I'm typically dealing with large data and I want more control. If the cube is small, you can set the project deployment option to Process Full instead of Process Default.
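
    One way to script that post-deployment step, as a sketch under the same pythonnet/TOM assumptions: after deploying with "Do Not Process", refresh only the partitions that are not yet in a Ready state. The server and database names are placeholders:

        import clr
        clr.AddReference("Microsoft.AnalysisServices.Tabular")
        from Microsoft.AnalysisServices.Tabular import Server, RefreshType, ObjectState

        server = Server()
        server.Connect("Data Source=localhost\\TABULAR")
        model = server.Databases.GetByName("SalesModel").Model
        for table in model.Tables:
            for partition in table.Partitions:
                if partition.State != ObjectState.Ready:  # unprocessed or invalid
                    partition.RequestRefresh(RefreshType.Full)
        model.SaveChanges()  # one transaction for all queued refresh requests
        server.Disconnect()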
