Memory configuration for SSAS in a Cluster

  • SSAS has two memory settings, LowMemoryLimit and TotalMemoryLimit. When SSAS is running in a cluster (or otherwise on the same server as database engine instances), how should these values be set? The goal is to have SSAS use memory as efficiently as possible when it is running alone on a node, while releasing memory to the database engine instances in the event of a failover. Any ideas would be highly appreciated.



    Ole Kristian Velstadbråten Bangås - Virinco - Facebook - Twitter


  • How much memory do you have on each node?

    How many SQL instances are running on the cluster altogether?

    Thank You,

    Best Regards,

    SQLBuddy

  • Thanks for the reply. I'm asking in general terms, if it is possible to answer generally.

    In one specific scenario there is a three-node cluster with three database engine instances (one 2005 and two 2008 R2) and one SSAS 2008 R2 instance, which under normal circumstances will run on the same server as one of the 2008 R2 instances. The servers are 2P8C with 64 GB of memory.



    Ole Kristian Velstadbråten Bangås - Virinco - Facebook - Twitter


  • Generally speaking, SSAS has three main memory settings:

    Memory\LowMemoryLimit: defaults to 65%

    Memory\TotalMemoryLimit: defaults to 80%

    Memory\PreAllocate: use the Process: Private Bytes counter for the msmdsrv instance; its peak value determines the maximum value that can be set for the PreAllocate memory setting.

    E.g.:

    <Memory>
      <TotalMemoryLimit>80</TotalMemoryLimit>
      <LowMemoryLimit>65</LowMemoryLimit>
      <PreAllocate>40</PreAllocate>
    </Memory>

    All these settings work fine on standalone systems, but on a failover cluster with three nodes the configuration becomes a bit more complex.

    On a cluster, we always need to design the maximum memory around the worst-case scenario, where all the SQL Server instances, including SSAS, are running on the same node.

    But with 64 GB of RAM, let's say we reserve OS memory = 4 GB.

    So for each instance (the three database engine instances plus SSAS), max memory = (64 - 4) / 4 = 15 GB. For SSAS, 15 GB of 64 GB is about 23.4%, so roughly TotalMemoryLimit = 23.4, with for example LowMemoryLimit = 15 and PreAllocate = 10 (SSAS values of 100 or less are read as a percentage of total physical memory).
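    As a rough sketch of that worst-case split (the 15360 MB cap below simply mirrors the 15 GB figure above and is only an illustration, not a setting from this thread), the database engine side could be capped with sp_configure on each instance, while the SSAS side goes into msmdsrv.ini as in the earlier example:

    -- Worst-case layout: cap each of the three database engine instances at 15 GB (15360 MB).
    -- Run against each database engine instance.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 15360;
    RECONFIGURE;

    -- For the SSAS instance, the matching msmdsrv.ini values would be roughly:
    -- TotalMemoryLimit = 23.4, LowMemoryLimit = 15, PreAllocate = 10 (percent of physical RAM).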

    Let's suppose that not all the SQL Server instances are equally crucial. Then we can use something like this:

    SQL Server Instance 1 = 8 GB

    SQL Server Instance 2 = 8 GB

    SQL Server Instance 3 = 14 GB

    SSAS Instance = 30 GB, which is about 46.8% of 64 GB, so TotalMemoryLimit = 46.8, with for example LowMemoryLimit = 30 and PreAllocate = 20.

    In this way we can get many permutations and combinations; I think you get the basic idea.

    But if you consider that SQL Server Instance 3 and SSAS normally run on, say, Node 2, then you may configure more memory for both the SSAS instance and that SQL Server instance. In a worst-case failover, though, some instances may stop working, hang, or error out due to insufficient memory.

    That's the problem with cluster configurations: there will be some wastage of resources just for the sake of the worst-case scenario. Ample memory may be available, but an instance can't use it.

    Hope this gives you some idea. Just design the configuration based on your environment and business requirements.

    Thank You,

    Best Regards,

    SQLBuddy

  • As for the maximum memory option as you describe it, I'll have to disagree. I've written about it on my blog, and those guidelines are pretty much the same as Brent Ozar's (the MVP teaching the clustering module of the MCM track). Short story: the maximum memory option should generally be set for the good days, and the minimum memory option for the bad days. As you add more and more nodes to the cluster, this becomes really important. Could you imagine an 8-node cluster, each node with 128 GB of memory, and only allowing 16 GB of memory to be allocated on average? Now, that is what I call waste.
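    To make the idea concrete, here is a minimal sketch for one database engine instance on a 128 GB node; the numbers are purely illustrative, not values from the blog post: max server memory is sized for the good days when the instance has the node mostly to itself, min server memory for the bad days after a worst-case failover.

    -- Illustration only: good-day ceiling vs. bad-day floor for one instance on a 128 GB node.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 100000;  -- sized for the good days (node mostly to itself)
    RECONFIGURE;
    EXEC sp_configure 'min server memory (MB)', 16000;   -- memory the instance will not release below once committed
    RECONFIGURE;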

    However, thanks for explaining the three memory options; I think I've got the idea. Now I need some time for testing.



    Ole Kristian Velstadbråten Bangås - Virinco - Facebook - Twitter


  • Theory always looks fascinating, but things are different when it comes to implementation, and the intricacies remain hidden until issues arise.

    The blog looks good, but an important factor has been ignored: max and min memory have very little use without Lock Pages in Memory. I think experts should be aware of this.
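    For what it's worth, on SQL Server 2008 and later a quick way to see whether an instance is actually using locked pages is to read a standard DMV; a zero value means the privilege is not in effect for that instance:

    -- Check whether this database engine instance is using Lock Pages in Memory.
    -- locked_page_allocations_kb > 0 indicates locked-page allocations are in use.
    SELECT locked_page_allocations_kb,
           physical_memory_in_use_kb
    FROM sys.dm_os_process_memory;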

    Unfortunately, disasters come without prior notice. If we are not well prepared, the whole purpose of the cluster will be defeated at the time of crisis, and the business may be affected irreversibly.

    Good Luck!!!

    Thank You,

    Best Regards,

    SQLBuddy

  • Thanks for the feedback. I did not mention Lock Pages in Memory simply because it's a post on maximum and minimum server memory. The service account should always have the Lock Pages in Memory privilege, whether maximum/minimum server memory is set or not. I'll add it as a note.



    Ole Kristian Velstadbråten Bangås - Virinco - Facebook - Twitter

