• We aborted the mission of processing partitions in parallel. We need to be able to query some cubes while processing others during business hours, and unfortunately limiting ThreadPool\Process\MaxThreads and CoordinatorQueryMaxThreads made the end-user experience very slow even when no cube was being processed at the same time.

    The whitepaper link you provided was very useful. Based on the table showing recommended CPUs per partition, we would need at least 20-40 CPUs, since we have around 20 partitions split out by date and year. That of course far exceeds our 12 CPUs.

    So our solution instead was to change our code to process one partition at a time, forcing the package to handle the data load and the index build separately via ProcessData and ProcessIndexes.
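
    For anyone following along, the per-partition split can be sketched in XMLA roughly as below (the database, cube, measure group, and partition IDs are placeholders, not our real object names); a second command with Type ProcessIndexes would then be issued against the same partition:

    ```xml
    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Process>
        <Object>
          <DatabaseID>MyDatabase</DatabaseID>
          <CubeID>MyCube</CubeID>
          <MeasureGroupID>MyMeasureGroup</MeasureGroupID>
          <PartitionID>Partition_2020</PartitionID>
        </Object>
        <!-- Load data only; index/aggregation build is deferred -->
        <Type>ProcessData</Type>
      </Process>
    </Batch>
    ```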

    This makes our process take 110 minutes instead of 45-55 minutes, but at least it only consumes 25%-45% CPU.

    Thanks again.