• Personally I like Jonathan Kehayias' calculation for determining a starting point for memory, and it is what I use for my production servers. Since you indicate that SQL Server is running alone on the server, reserving half of the memory for the OS should be unnecessary, but you can check the ring buffer to double-check that you are not under OS memory pressure. The link for Jonathan's article is here: https://www.sqlskills.com/blogs/jonathan/wow-an-online-calculator-to-misconfigure-your-sql-server-memory/. At a high level the formula goes: reserve 1GB for the OS, plus 1GB for every 4GB of RAM installed from 4-16GB, plus 1GB for every 8GB of RAM installed above 16GB. I've found that this gives plenty of room for the OS in our environment, but as always your mileage may vary due to environmental differences, so test the config in a non-prod environment first, disclaimer, disclaimer and all that stuff.
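    To make the arithmetic concrete: on a 64GB box that works out to 1GB (OS) + 3GB (the 4-16GB range) + 6GB (the 48GB above 16GB) = 10GB reserved, so max server memory lands around 54GB. Below is a rough T-SQL sketch of that calculation, reading total RAM from sys.dm_os_sys_memory; treat it as a starting point rather than gospel, and the sp_configure call is left commented out so you can eyeball the number first.

        -- Sketch of the reservation rule of thumb described above.
        DECLARE @total_gb             decimal(10, 2),
                @reserve_gb           decimal(10, 2),
                @max_server_memory_mb int;

        SELECT @total_gb = total_physical_memory_kb / 1024.0 / 1024.0
        FROM   sys.dm_os_sys_memory;

        SET @reserve_gb = 1                                           -- 1GB for the OS
            + CASE WHEN @total_gb >= 16 THEN 3                        -- 1GB per 4GB from 4-16GB
                   WHEN @total_gb >  4  THEN (@total_gb - 4) / 4
                   ELSE 0 END
            + CASE WHEN @total_gb >  16 THEN (@total_gb - 16) / 8     -- 1GB per 8GB above 16GB
                   ELSE 0 END;

        SET @max_server_memory_mb = CAST((@total_gb - @reserve_gb) * 1024 AS int);

        SELECT @total_gb             AS total_ram_gb,
               @reserve_gb           AS reserved_for_os_gb,
               @max_server_memory_mb AS suggested_max_server_memory_mb;

        -- Review the suggestion before applying it:
        -- EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        -- EXEC sp_configure 'max server memory (MB)', @max_server_memory_mb; RECONFIGURE;

    For the ring buffer check, the RESOURCE_MONITOR entries in sys.dm_os_ring_buffers will tell you whether the OS has signaled low-memory conditions. Something along these lines works for me (the XML element names are the ones I've seen in that ring buffer, so double-check them against your build):

        -- Recent RESOURCE_MONITOR notifications; RESOURCE_MEMPHYSICAL_LOW entries
        -- indicate external (OS-level) memory pressure.
        SELECT TOP (20)
               rb.[timestamp],
               rb.record.value('(//Record/ResourceMonitor/Notification)[1]', 'varchar(100)')   AS notification,
               rb.record.value('(//Record/MemoryRecord/AvailablePhysicalMemory)[1]', 'bigint') AS available_physical_kb
        FROM  (SELECT [timestamp], CAST(record AS xml) AS record
               FROM   sys.dm_os_ring_buffers
               WHERE  ring_buffer_type = N'RING_BUFFER_RESOURCE_MONITOR') AS rb
        ORDER BY rb.[timestamp] DESC;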

    From what you describe, I think there are two fronts you need to look at. First, is adding RAM going to be a band-aid fix whose mileage only lasts a short while before you are right back where you are now? I can't answer that for certain because I don't know your environment, but in heavily virtualized environments my experience points towards the second front, which is likely the root cause. That second thing to look at is what the underlying storage is doing from an I/O perspective compared to its maximum potential IOPS. In many cases you will find that the storage is saturated, and while adding RAM may alleviate some of the problem, it can't eliminate all of it. If I were in your position I would try to get a meeting set up with the infrastructure folks to see what is going on with the storage. In my opinion, 7-10ms of consistent wait time is on the upper end of acceptable, and combined with the cache issue you are seeing and the spikes in latency, it points to a storage subsystem issue.
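    If you want numbers in hand before that meeting, sys.dm_io_virtual_file_stats will give you cumulative read and write latency per database file since the last restart. A quick sketch is below; keep in mind these are averages since startup, so grab a baseline and re-sample during the busy window to catch the spikes.

        -- Average I/O latency per database file; compare against the 7-10ms figure above.
        SELECT DB_NAME(vfs.database_id) AS database_name,
               mf.physical_name,
               vfs.num_of_reads,
               vfs.num_of_writes,
               CASE WHEN vfs.num_of_reads  = 0 THEN 0
                    ELSE 1.0 * vfs.io_stall_read_ms  / vfs.num_of_reads  END AS avg_read_latency_ms,
               CASE WHEN vfs.num_of_writes = 0 THEN 0
                    ELSE 1.0 * vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
        FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN   sys.master_files AS mf
               ON  mf.database_id = vfs.database_id
               AND mf.file_id     = vfs.file_id
        ORDER BY avg_read_latency_ms DESC;

    If the per-file latencies line up with what the infrastructure team sees at the SAN/hypervisor layer, that makes the storage conversation a lot easier to have.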

    I hope this helps you out.