monitor your memory on the SQL Server side. we had an issue about 6 months ago where the views that the cube queries began to consume all server memory. available memory would drop to 80-100MB and bounce around there for a while (i presume swapping out to the page file), then cube processing would fail. we resolved this by going from 2GB to 4GB of server memory and enabling the /3GB switch. (we have a dual-server setup: one AS server with 2GB RAM and one SQL server with 4GB RAM.)
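as a starting point for watching memory, something like the following works against the memory DMVs on SQL Server 2008 and later (on older versions you'd watch the perfmon Memory counters instead). this is just a sketch; object names are the standard DMV names, but verify the columns on your version:

```sql
-- how much physical memory the OS has left, and whether the OS
-- considers memory low (system_memory_state_desc)
SELECT
    total_physical_memory_kb / 1024     AS total_mb,
    available_physical_memory_kb / 1024 AS available_mb,
    system_memory_state_desc
FROM sys.dm_os_sys_memory;

-- how much the SQL Server process itself is holding
SELECT
    physical_memory_in_use_kb / 1024    AS sql_in_use_mb,
    memory_utilization_percentage,
    process_physical_memory_low
FROM sys.dm_os_process_memory;
```

if available_mb keeps sliding toward the 80-100MB range we saw, that's your early warning before processing starts failing.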
afterwards, we reduced the number of rows per cube by splitting it into two separate cubes based on business unit, and simplified the views to reduce query complexity. we've reduced the problem even further by partitioning the cube by year.
once your fact table reaches a certain number of rows (i can't give you an exact threshold, but 10-20 million is a good rule of thumb) you should look for ways to avoid processing it all in one fell swoop. break it up somehow and spread the load out.
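a rough sketch of the year-based split we did, using a made-up dbo.FactSales table and OrderDate column (adjust to your own schema). each Analysis Services partition is then bound to one view, so you can process a single year at a time instead of the whole table:

```sql
-- one view per year; each AS partition points at one of these.
-- half-open date ranges avoid gaps and double-counting at year boundaries.
CREATE VIEW dbo.vFactSales_2003 AS
SELECT *
FROM dbo.FactSales
WHERE OrderDate >= '20030101' AND OrderDate < '20040101';

CREATE VIEW dbo.vFactSales_2004 AS
SELECT *
FROM dbo.FactSales
WHERE OrderDate >= '20040101' AND OrderDate < '20050101';
```

the nice side effect is that closed-out years rarely change, so on most nights you only reprocess the current year's partition.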