Intermittent cube processing failure - LazyWriter Stream

  • We have a pretty big OLAP database, about 400-450 GB.

    It's on its own VM with 24 CPUs and 74 GB of RAM.

    I've had this problem intermittently in the past, but it's become more frequent lately, to the point of being a daily occurrence. The daily incremental process takes a couple of hours each day (when it works).

    The error that is showing in the logs is:

    "Description: File system error: The following error occurred while writing to the file 'LazyWriter Stream': The I/O operation has been aborted because of either a thread exit or an application request. . "

    There will be several of these messages in the SQL Agent job output, against several different files within the OLAP database.

    I've found some references in searches to these thread exits being caused by out-of-disk-space issues, and I've confirmed that's not the case: we have over 200 GB free on the drive at the point of the process failure, as captured by perfmon.

    Is this just a matter of throwing a lot more memory at it, or is there something to be done within the SSAS instance's settings?

    Thanks for any input you all might have. I've not found any hits for this particular message with regard to SSAS processing failures.

  • In my opinion, you could still be running out of disk space. Remember that when a cube is processed, a new copy is created and processed while queries are still serviced from the existing version.

    So if you have a cube that is larger than 200 GB, SSAS does not have enough space to create a copy to process. You may want to consider processing individual partitions (assuming you're not doing that already); there's a rough sketch of a partition-level process command at the end of this post.

    Here's a reference that may be helpful: http://www.jamesserra.com/archive/2011/06/can-you-query-a-ssas-cube-while-it-is-processing/
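
    To illustrate, partition-level processing is just an XMLA Process command scoped to a single partition instead of the whole database. A minimal sketch, where every ID is a made-up placeholder you would swap for your own object IDs:

    <!-- Sketch only: process a single partition rather than the whole OLAP database.
         All IDs (MyOlapDb, SalesCube, FactSales, FactSales_2016) are hypothetical. -->
    <Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Object>
        <DatabaseID>MyOlapDb</DatabaseID>
        <CubeID>SalesCube</CubeID>
        <MeasureGroupID>FactSales</MeasureGroupID>
        <PartitionID>FactSales_2016</PartitionID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>

    You can script the real command for any partition from SSMS: right-click the partition, choose Process, and use the Script button to send the generated XMLA to a query window.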

  • I've considered that... when the cube is in a steady state, not processing, there is 500 GB+ free on the data drive. The total drive volume is 900 GB.

    We have metrics collected every minute which show free megabytes never dropping below 215 GB. In the next collection after the failure, it returns to its steady-state free MB value.

    Is it conceivable that this process is writing that much data, failing, and releasing it all within a minute between collection points?

    Thanks for taking the time to give my problem some thought.

  • LAW1143 (4/19/2016)


    I've considered that... when the cube is in a steady state, not processing, there is 500 GB+ free on the data drive. The total drive volume is 900 GB.

    We have metrics collected every minute which show free megabytes never dropping below 215 GB. In the next collection after the failure, it returns to its steady-state free MB value.

    Is it conceivable that this process is writing that much data, failing, and releasing it all within a minute between collection points?

    Thanks for taking the time to give my problem some thought.

    I think it's conceivable, just not sure what the probability is. Do you currently have partitions in your cube? If not, I'd do that as a first step and process the partitions individually.

    How are you currently processing your cubes? SSIS, XMLA or something different?
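
    Either way, it generally boils down to an XMLA command in the end, which you can also run from an SSMS XMLA window or from a SQL Agent job step of type "SQL Server Analysis Services Command". As a rough sketch (all object IDs are placeholders), a batch that processes a couple of partitions in parallel instead of the whole database looks something like this:

    <!-- Sketch only: process two hypothetical partitions in one parallel batch. -->
    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Parallel>
        <Process>
          <Object>
            <DatabaseID>MyOlapDb</DatabaseID>
            <CubeID>SalesCube</CubeID>
            <MeasureGroupID>FactSales</MeasureGroupID>
            <PartitionID>FactSales_2016Q1</PartitionID>
          </Object>
          <Type>ProcessFull</Type>
        </Process>
        <Process>
          <Object>
            <DatabaseID>MyOlapDb</DatabaseID>
            <CubeID>SalesCube</CubeID>
            <MeasureGroupID>FactSales</MeasureGroupID>
            <PartitionID>FactSales_2016Q2</PartitionID>
          </Object>
          <Type>ProcessFull</Type>
        </Process>
      </Parallel>
    </Batch>

    Processing at that granularity also keeps each transaction smaller and makes it easier to see which object the LazyWriter error is actually thrown against.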

  • It processes via an SSIS task that I am, admittedly, not intimately familiar with.

    Another note: we are seeing memory starvation on the OS at the time it is processing.

    We've lowered the HardMemoryLimit from 95 to 91 and it still fails. I suspect it's related to SSAS consuming memory, starving the OS, and causing the failure.

    I failed to mention that this is on Windows Server 2008 R2 and SQL Server 2008 R2 SP3, Enterprise Edition.

  • LAW1143 (4/19/2016)


    It processes via an SSIS task that I am, admittedly, not intimately familiar with.

    Another note: we are seeing memory starvation on the OS at the time it is processing.

    We've lowered the HardMemoryLimit from 95 to 91 and it still fails. I suspect it's related to SSAS consuming memory, starving the OS, and causing the failure.

    I failed to mention that this is on Windows Server 2008 R2 and SQL Server 2008 R2 SP3, Enterprise Edition.

    I'd definitely recommend that you look into the SSIS processing task first, and split it out to process on a more granular level.

    The memory pressure you're seeing can also be a contributing factor, and changing the memory limit will not necessarily fix it if the available memory is already too little for the task it needs to perform. You may need to run a profiler trace while processing to capture any warnings and messages from SSAS.
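
    For reference, those limits live in the <Memory> section of msmdsrv.ini (they can also be changed through the server properties dialog in SSMS). Values between 0 and 100 are read as a percentage of total physical memory, and larger values are read as bytes. The sketch below just shows the out-of-the-box defaults for orientation; it's not a recommendation for your server:

    <!-- Fragment of msmdsrv.ini showing the default memory limits.
         0-100 = percent of physical RAM, larger values = bytes.
         A HardMemoryLimit of 0 means "midway between TotalMemoryLimit
         and total physical memory". Hand edits to this file only take
         effect after the Analysis Services service is restarted. -->
    <Memory>
      <LowMemoryLimit>65</LowMemoryLimit>
      <TotalMemoryLimit>80</TotalMemoryLimit>
      <HardMemoryLimit>0</HardMemoryLimit>
    </Memory>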
