We're trying to assess the effect of server memory on the performance of one of our databases. So we ran the following test on the same server, once with 8GB of RAM and once with 32GB:
1) Restore from the same backup.
2) Run the same set of read and read-write stored procedures, in a single connection.
3) See how long they take (timed roughly as sketched after this list).
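For reference, the timing amounts to something like the following; the procedure names here are placeholders for our real ones:

    DECLARE @start DATETIME2 = SYSDATETIME();

    -- Placeholder procedures standing in for our read and read-write workload
    EXEC dbo.usp_ReadWorkload;
    EXEC dbo.usp_ReadWriteWorkload;

    SELECT DATEDIFF(SECOND, @start, SYSDATETIME()) AS elapsed_seconds;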
Agreed, it's not a great test, but we were astounded to find that the runs took:
~12 minutes with 8GB of memory.
~100 minutes with 32GB of memory.
As far as we can see, the data created in the database are the same in each case.
Obviously, we've repeated the test several times and got approximately the same timings each time.
Further investigation has suggested that the extra time (i.e. ~90 minutes) is entirely taken up by a single statement that uses a CLR function to populate a column of one of the tables.
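In shape, the offending statement is essentially an update through a CLR scalar UDF; all the names below are made up, but the pattern is the same:

    -- Hypothetical names; the real statement populates one column via a CLR scalar UDF
    UPDATE dbo.SomeTable
    SET SomeColumn = dbo.clr_ComputeValue(SomeOtherColumn);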
I understand that memory for the CLR is, to some extent, managed outside of the memory allocated to SQL Server. Does anybody have any idea why limiting the amount of physical memory in the server might result in such a massively reduced execution time? It seems so counterintuitive!
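In case it suggests a diagnostic: a query like this against the CLR memory clerk should show how much memory the CLR host is actually taking from SQL Server (column names assume SQL Server 2012 or later; older versions expose single_pages_kb/multi_pages_kb instead of pages_kb):

    -- How much memory the SQLCLR host has taken from SQL Server's memory manager
    SELECT type,
           SUM(pages_kb) AS pages_kb,
           SUM(virtual_memory_committed_kb) AS vm_committed_kb
    FROM sys.dm_os_memory_clerks
    WHERE type = 'MEMORYCLERK_SQLCLR'
    GROUP BY type;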