I am using Analysis Services 2008 R2. I have a cube on a processing server which I regularly synchronise to the querying cluster using a SQL Agent job that runs an XMLA Synchronize script.
Due to some recent development work, the cube has grown in size. Last night's sync job failed because the target drive ran out of space while the files were being copied over. Surprisingly, the cube on the querying server was still altered and was left in an inconsistent state: some data was refreshed and some wasn't.
It seems to me that the cube sync process is not atomic (i.e. it does not either succeed entirely or fail entirely). Can anyone confirm that this is the case, and if so, do you have any suggestions for failsafes to put in place to prevent a recurrence of this behaviour? I am considering adding a prior job step with some PowerShell to check the size of the cube on disk on the processing server and ensure it is less than the available space on the querying server, but I am open to better suggestions! I haven't thought this through in its entirety yet; it's just the beginnings of a plan!
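For what it's worth, this is roughly the kind of pre-check step I have in mind (untested; the server names, UNC path, and drive letter are placeholders for my environment, and the 20% headroom factor is an arbitrary guess to allow for the old files the sync keeps around until the swap):

```powershell
# Placeholder names -- adjust to the real environment.
$cubeDataPath = '\\ProcessingServer\OLAP$\Data\MyCube.0.db'  # assumed share to the cube's data folder
$queryServer  = 'QueryServer'
$queryDrive   = 'D'

# Total on-disk size of the cube's data folder on the processing server
$cubeBytes = (Get-ChildItem -Path $cubeDataPath -Recurse |
              Where-Object { -not $_.PSIsContainer } |
              Measure-Object -Property Length -Sum).Sum

# Free space on the querying server's data drive (WMI, works on 2008 R2-era hosts)
$freeBytes = (Get-WmiObject -Class Win32_LogicalDisk -ComputerName $queryServer |
              Where-Object { $_.DeviceID -eq "${queryDrive}:" }).FreeSpace

# Fail this job step (non-zero exit) so the sync step never starts
# if the cube would not fit with ~20% headroom.
if ($freeBytes -lt ($cubeBytes * 1.2)) {
    Write-Error "Insufficient space on ${queryServer}: $freeBytes bytes free, cube is $cubeBytes bytes."
    exit 1
}
```

The idea is that the SQL Agent job would treat a non-zero exit code from this step as a failure and stop before the XMLA sync runs, rather than letting the copy start and die halfway through.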