Memory-optimized tables and table-types

  • I have a very large stored procedure on my OLTP system that runs multiple times concurrently based on user entries. The procedure was originally written with #TempTables for SQL Server 2008 R2. We moved to SQL Server 2014 and I modified the procedure to use memory-optimized tables. I handle concurrency by including a SessionID column in every table.

    The procedure's execution time dropped significantly, and I was very happy: it used to take 20-25 seconds, and it dropped to 2 seconds. I decided to take a further step and replace the memory-optimized tables with table variables based on memory-optimized table types (both patterns are sketched at the end of this post). These are session-exclusive, so I wouldn't have to deal with session IDs and data cleanup. I did expect a small performance penalty to account for table creation, memory allocation, and so on. It also had the benefit that I could better manage the bucket counts for the hash indexes, because I no longer had to worry about how many users concurrently query the data.

    However, the performance penalty was significant: execution time increased from 2 seconds to around 5 seconds. I would appreciate insight from anyone on the performance penalties of memory-optimized table types.
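
    For concreteness, here is a minimal sketch of both patterns. The table, type, and column names and the bucket counts are hypothetical stand-ins for illustration, not my actual schema:

        -- Original approach: a shared, non-durable memory-optimized table,
        -- with a SessionID column to keep concurrent executions apart.
        -- (All names and bucket counts here are made up for illustration.)
        CREATE TABLE dbo.WorkRows
        (
            SessionID UNIQUEIDENTIFIER NOT NULL,
            RowID     INT              NOT NULL,
            Payload   NVARCHAR(100)    NULL,
            CONSTRAINT pk_WorkRows PRIMARY KEY NONCLUSTERED
                HASH (SessionID, RowID) WITH (BUCKET_COUNT = 131072)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
        GO

        -- New approach: a memory-optimized table type; each execution
        -- declares its own table variable, so no SessionID or cleanup.
        CREATE TYPE dbo.WorkRowsType AS TABLE
        (
            RowID   INT           NOT NULL,
            Payload NVARCHAR(100) NULL,
            INDEX ix_RowID HASH (RowID) WITH (BUCKET_COUNT = 1024)
        ) WITH (MEMORY_OPTIMIZED = ON);
        GO

        DECLARE @work dbo.WorkRowsType;
        INSERT INTO @work (RowID, Payload) VALUES (1, N'example');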

  • N_Muller - Friday, March 17, 2017 3:55 PM

    Not the first time I have come across exactly this scenario. There are numerous potential reasons; a few off the top of my head, without knowing anything about your system, design, or code:

    1) You have to copy the data from disk-based storage into the memory-optimized table variables, and that clearly isn't free. Even simple blocking could cause delays like the ones you are seeing.

    2) If your row counts are skewed, your hash bucket counts may be badly sized, and that can REALLY hurt performance (see the bucket-count check after this list).

    3) Since the data is transient, I question the need for any indexes at all (noting that a memory-optimized table type does require at least one index). Such scenarios often mean only one or a few hits on the temp object (in-memory or otherwise), and the cost of creating indexes when you are going to scan all the data anyway (or even when you access just a few rows) is a net loss - sometimes substantially so.

    4) Have you done any analysis of what is going on with optimization? What cardinality estimates are you getting coming out of the table variable? If you have skew and plug in a big-hitter value, do you get a plan that reflects it? (See the RECOMPILE sketch after this list.)

    5) What analyses have you done so far to isolate the performance bottlenecks?
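
    On point 2: a quick way to sanity-check hash bucket sizing on the permanent memory-optimized tables from the earlier design (table variables do not show up in this DMV) is sys.dm_db_xtp_hash_index_stats. The usual guidance is a bucket count of one to two times the number of distinct key values; a high avg_chain_length suggests too few buckets, while mostly empty buckets suggest too many:

        -- Bucket health for hash indexes on memory-optimized tables.
        SELECT OBJECT_NAME(hs.object_id) AS table_name,
               i.name                    AS index_name,
               hs.total_bucket_count,
               hs.empty_bucket_count,
               hs.avg_chain_length,
               hs.max_chain_length
        FROM sys.dm_db_xtp_hash_index_stats AS hs
        JOIN sys.indexes AS i
            ON  i.object_id = hs.object_id
            AND i.index_id  = hs.index_id;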
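
    On point 4: on SQL Server 2014 the optimizer estimates a single row for a table variable at compile time. A hedged sketch, reusing the hypothetical dbo.WorkRowsType from the first post and an equally hypothetical dbo.Orders table - OPTION (RECOMPILE) lets the optimizer see the variable's actual row count, which can produce a very different plan when that count is large:

        DECLARE @work dbo.WorkRowsType;  -- hypothetical type sketched earlier
        -- ... populate @work from the user's entries ...

        SELECT o.OrderID, o.Amount       -- dbo.Orders is made up for illustration
        FROM @work AS w
        JOIN dbo.Orders AS o
            ON o.RowID = w.RowID
        OPTION (RECOMPILE);  -- compile with @work's actual row count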

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service
