Following up on my last post (http://www.sqlservercentral.com/blogs/denniss-sql-blog-1/2017/02/01/improve-performance-by-replacing-temp-tables-with-memory-optimized-tables-part-1/), where we discussed replacing table variables with memory-optimized tables, this time we will look at another use case: the global temp table.
The main idea is to replace temp tables whose data does not need to be stored permanently, which is one of the best use cases for memory-optimized tables.
Like last time, let's assume a simple case here:
There is a stored procedure that needs to use a global temp table (creating it if it doesn't exist) and then insert 200 records into it. Here is the code I use:
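The original code listing is not reproduced here, so the following is only a minimal sketch of such a procedure. The procedure name `sp_globaltemp` is taken from the ostress command below; the global temp table name `##GlobalTemp` and the column layout are assumptions for illustration.

```sql
-- Sketch of the stored procedure described above (names and columns are
-- illustrative assumptions, not the original listing).
CREATE PROCEDURE dbo.sp_globaltemp
AS
BEGIN
    SET NOCOUNT ON;

    -- Create the global temp table only if it does not already exist.
    IF OBJECT_ID('tempdb..##GlobalTemp') IS NULL
        CREATE TABLE ##GlobalTemp
        (
            Id      INT IDENTITY(1, 1) PRIMARY KEY,
            Payload VARCHAR(100) NOT NULL
        );

    -- Insert 200 records.
    DECLARE @i INT = 1;
    WHILE @i <= 200
    BEGIN
        INSERT INTO ##GlobalTemp (Payload)
        VALUES ('row ' + CAST(@i AS VARCHAR(10)));
        SET @i += 1;
    END;
END;
```

Note that under heavy concurrency the existence check followed by `CREATE TABLE` is itself a race: two sessions can both pass the `IF` and then collide on the create, which is one plausible source of the errors observed under load below.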
This runs very fast on my test machine, completing in under a second 🙂 which makes it hard to demonstrate a performance improvement. To put more load on the test, we will use the ostress command-line tool to simulate 100 concurrent connections, with each connection executing the same stored procedure 1000 times.
ostress -S. -dTestDB -q -n100 -r1000 -Q"sp_globaltemp" -oC:\tmp
This process took 56 seconds to complete, and while it was running I could see a lot of blocking and deadlocks being generated, along with many concurrency errors.
We now try the same thing with a memory-optimized table. Since it is only used to hold temporary data, we can set the durability of the memory-optimized table to schema-only. One more thing to note: we no longer need to check for the table's existence inside the stored procedure, because the memory-optimized table is created ahead of time.
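Again, the original listing is not shown here, so this is only a sketch of the memory-optimized version; the table name, procedure name, and columns are assumptions. The key points from the text are real features: `MEMORY_OPTIMIZED = ON` with `DURABILITY = SCHEMA_ONLY` means the table's schema survives a restart but its rows are never written to disk.

```sql
-- Sketch of the memory-optimized replacement (names are illustrative).
-- The table is created once, up front, so the procedure needs no
-- existence check. DURABILITY = SCHEMA_ONLY keeps rows in memory only.
CREATE TABLE dbo.GlobalTempMem
(
    Id      INT IDENTITY(1, 1) NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload VARCHAR(100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO

CREATE PROCEDURE dbo.sp_globaltemp_mem
AS
BEGIN
    SET NOCOUNT ON;

    -- Insert 200 records; no IF OBJECT_ID check is needed any more.
    DECLARE @i INT = 1;
    WHILE @i <= 200
    BEGIN
        INSERT INTO dbo.GlobalTempMem (Payload)
        VALUES ('row ' + CAST(@i AS VARCHAR(10)));
        SET @i += 1;
    END;
END;
```

One prerequisite worth remembering: the database must already have a filegroup declared with `CONTAINS MEMORY_OPTIMIZED_DATA` before any memory-optimized table can be created in it.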
Running the same ostress test against this version took only 7 seconds, roughly an 8X improvement. Not bad for such a small change in code!
In the next post, we will take a look at the last use case, replacing local temp tables with memory-optimized tables. Stay tuned.