October 13, 2005 at 2:52 pm
When processing a 2 GB cube with 11 partitions, processing completes without error, but the aggregate amount for one cell is wrong: it matches neither the underlying database nor the results of the SQL that the cube uses to build. After working with MS for three days, they have been able to reproduce the problem on both SP3 and SP4. Our process buffer was set to 100 MB; when it was changed to 32 MB, the cube processed and yielded the correct results. There was no warning when this problem cropped up, aside from the calls from our users.
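For anyone who wants to catch this kind of thing before the users do, here is a rough sketch of the sort of cross-check that could be run after each process: read one aggregate from the relational source, read the same cell from the cube, and compare. Every name below (DSN, server, cube, measure, table, columns) is a made-up placeholder for illustration, not our actual schema; it assumes Windows with pywin32, pyodbc, and the MSOLAP OLE DB provider installed.

# Rough cross-check: compare one relational aggregate to the same cube cell.
# All connection strings and object names are hypothetical placeholders.
import pyodbc
import win32com.client

# 1) The "truth": aggregate straight from the fact table.
sql = pyodbc.connect("DSN=SalesDW")  # hypothetical DSN
relational_total = sql.cursor().execute(
    "SELECT SUM(sales_amount) FROM fact_sales WHERE order_month = '2005-10'"
).fetchone()[0]

# 2) The same cell from the cube, via MDX over the MSOLAP provider.
olap = win32com.client.Dispatch("ADODB.Connection")
olap.Open("Provider=MSOLAP;Data Source=ASSERVER;Initial Catalog=SalesDW")
mdx = ("SELECT {[Measures].[Sales Amount]} ON COLUMNS "
       "FROM [Sales] WHERE ([Time].[2005].[October])")
rs = win32com.client.Dispatch("ADODB.Recordset")
rs.Open(mdx, olap)
cube_total = rs.Fields.Item(0).Value

# 3) Complain loudly on a mismatch (our symptom: 532 billion vs. 5 million).
if abs(float(cube_total) - float(relational_total)) > 0.01:
    print(f"MISMATCH: cube={cube_total}, relational={relational_total}")
else:
    print("Cube cell matches the relational aggregate.")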
October 13, 2005 at 5:58 pm
Now that's a concern; the Perf guide almost goes as far as saying that 32 MB is a ridiculously small Process Buffer size and that everyone* should increase it (based on their tuning recommendations). I actually thought they *had* said that, but I couldn't find it when reviewing the doc today.
*Everyone meaning anyone who is building cubes with significant input row counts.
Steve.
October 13, 2005 at 6:01 pm
It was in the Ops guide:
The Process buffer size setting on the Processing tab in Analysis Manager (the ProcessReadSegmentSize value in the registry) determines the maximum size of each process buffer. By default, the maximum size of each process buffer is approximately 32 MB. For most applications, this is probably too small and should be immediately increased. A more effective setting is at least 150 to 200 MB.
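If you want to see what a server is actually running with, the value can be read straight from the registry. A minimal sketch, assuming Python is available on the server: the key path below is my assumption for a default Analysis Services 2000 install (verify it on your box), and I'm assuming the value is stored in bytes, so confirm against what Analysis Manager shows before writing anything back. Changing the value needs Administrator rights.

# Read (and optionally set) the Analysis Services process buffer size.
# ProcessReadSegmentSize is the value named in the Ops guide; the key path
# below is an assumption for a default AS 2000 install -- verify it first.
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\OLAP Server\CurrentVersion"  # assumed path

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    size, _ = winreg.QueryValueEx(key, "ProcessReadSegmentSize")
    # Assuming the value is stored in bytes:
    print(f"Process buffer: {size} bytes (~{size / 2**20:.0f} MB)")

# To scale back to the ~32 MB default discussed in this thread, uncomment:
# with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
#                     winreg.KEY_SET_VALUE) as key:
#     winreg.SetValueEx(key, "ProcessReadSegmentSize", 0,
#                       winreg.REG_DWORD, 32 * 2**20)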
Maybe this document needs a review and update?
Steve.
October 14, 2005 at 5:03 am
Actually, after further discussion with MS, the fix for this problem is part of SP4 and is available as a hotfix for SP3. We are testing it over the weekend for our situation; in the meantime we have scaled the process buffer back to 32 MB. The scary thing is that the data that went crazy (532 billion vs. 5 million) happened to occur in the current month's data, so it was caught quickly. I'll post the results of our test of the fix.