In-Memory Tables

  • Hello All!
    I would like to know the pros and cons of In-Memory tables in SQL Server 2016.
    In addition, if someone has a demo that builds a DB with in-memory tables and compares its workload to that of traditional disk tables, that would be great!

    Thanks in advance!!!

  • Hi Netanel,

    You can check that out in the WideWorldImporters sample database - https://github.com/Microsoft/sql-server-samples/releases/download/wide-world-importers-v1.0/WideWorldImporters-Full.bak

  • Not a complete list, but some highlights...

    Pros:
    1. They can be faster than on-disk tables because there's no IO cost.
    2. A bonus of 2016 over 2014 memory-optimized tables is that queries against them can be parallelized.
    3. The reduced IO cost is good for overall DB performance.

    Cons:
    1. More to configure and maintain.
    2. Memory-optimized tables have more limitations (such as what indexes you can add) - see the sketch below.
    3. They aren't always faster - there's a learning curve to get the most out of memory-optimized objects.
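
    For anyone who hasn't created one yet, here's a minimal sketch of what a memory-optimized table looks like (the table and column names are just placeholders). Every memory-optimized table needs at least one index, and the choices are limited to nonclustered hash, nonclustered range, and (new in 2016) clustered columnstore:

    CREATE TABLE dbo.HotSessions -- hypothetical demo table
    (
        SessionId int NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 131072),
        Payload varchar(100) NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);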

    "I cant stress enough the importance of switching from a sequential files mindset to set-based thinking. After you make the switch, you can spend your time tuning and optimizing your queries instead of maintaining lengthy, poor-performing code."

    -- Itzik Ben-Gan 2001

  • By and large, if you're not experiencing IO_LATCH waits as one of your principal and overriding problems within your system, In-Memory is likely only going to be of marginal use to you. The whole structure around in-memory tables is designed to address that issue, which is mostly hit by systems that are extremely write-oriented - think of a database that collects diagnostic data from thousands of automobiles at the same time. Systems like that will absolutely benefit. Most other systems won't benefit as much. The exception is if you need to make extensive use of table variables. These can be memory-optimized too and can achieve some performance benefits (see the sketch below). Aaron Bertrand has some examples that illustrate this.
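
    As a rough illustration (the type name here is made up), a memory-optimized table variable is declared through a memory-optimized table type:

    -- one-time setup: a memory-optimized table type (hypothetical name)
    CREATE TYPE dbo.IdList AS TABLE
    (
        Id int NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024)
    )
    WITH (MEMORY_OPTIMIZED = ON);
    GO

    -- table variables of this type live in memory and never touch tempdb
    DECLARE @ids dbo.IdList;
    INSERT @ids (Id) VALUES (1), (2), (3);
    SELECT Id FROM @ids;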

    By the way, the biggest limit is memory. You need to start with at least double, and probably go to about 3-5 times, the size of the data you intend to put into the in-memory tables. If you want to load 15 GB of data, Microsoft recommends you have at least 30 GB of memory available. That's on top of the memory you need for the OS and the regular functioning of SQL Server. Read through this document and this one.
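
    If you want to see how much memory your memory-optimized tables are actually consuming, the standard DMV for it gives you a quick sanity check:

    SELECT OBJECT_NAME(object_id) AS table_name,
           memory_allocated_for_table_kb,
           memory_used_by_table_kb,
           memory_allocated_for_indexes_kb
    FROM sys.dm_db_xtp_table_memory_stats;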

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Just out of curiosity, how many of you good folks have actually used it?  Not being contrary or challenging here.  It seems that there are a lot of people that don't use it and, if you folks don't actually use it, I'd be curious as to why not because I'm getting ready to give it a whirl myself (been too busy up 'til now).  I'm also not sure that it would provide any benefit over what we do now.  I'll also admit that I'm going to proceed very cautiously because I can't afford to lose any transactions even if the lights went down hard and the room sized UPS went with it.
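
    For what it's worth, the lose-no-transactions concern maps directly to the per-table DURABILITY setting; a sketch (the table and column names are made up):

    -- SCHEMA_AND_DATA (the default) logs and persists rows, so committed
    -- transactions survive a crash. SCHEMA_ONLY keeps only the table
    -- definition across a restart - all rows are lost.
    CREATE TABLE dbo.StagingEvents -- hypothetical staging table
    (
        EventId bigint IDENTITY(1,1) NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
        Payload varchar(8000) NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);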

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden - Monday, January 8, 2018 5:10 PM

    Just out of curiosity, how many of you good folks have actually used it?  Not being contrary or challenging here.  It seems that there are a lot of people that don't use it and, if you folks don't actually use it, I'd be curious as to why not because I'm getting ready to give it a whirl myself (been too busy up 'til now).  I'm also not sure that it would provide any benefit over what we do now.  I'll also admit that I'm going to proceed very cautiously because I can't afford to lose any transactions even if the lights went down hard and the room sized UPS went with it.

    I've tested the heck out of it. It doesn't do enough for me to get over the restrictions and the added hardware requirements. In all my travels, I've run across two people who are using it successfully. Both were in pretty much the same situation: lots of data collection at a very high volume, so it solved their principal bottleneck. However, almost no one else is even testing it. Not sure why.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant Fritchey - Monday, January 8, 2018 5:20 PM

    Jeff Moden - Monday, January 8, 2018 5:10 PM

    Just out of curiosity, how many of you good folks have actually used it?  Not being contrary or challenging here.  It seems that there are a lot of people that don't use it and, if you folks don't actually use it, I'd be curious as to why not because I'm getting ready to give it a whirl myself (been too busy up 'til now).  I'm also not sure that it would provide any benefit over what we do now.  I'll also admit that I'm going to proceed very cautiously because I can't afford to lose any transactions even if the lights went down hard and the room sized UPS went with it.

    I've tested the heck out of it. It doesn't do enough for me to get over the restrictions and the added hardware requirements. In all my travels, I've run across two people who are using it successfully. Both were in pretty much the same situation: lots of data collection at a very high volume, so it solved their principal bottleneck. However, almost no one else is even testing it. Not sure why.

    That's the conclusion that I reached when I read about it.  Just not enough computational ROI. Still, I'm going to give it a try just so I know for sure.  Thanks for the feedback, Grant.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden - Monday, January 8, 2018 5:10 PM

    Just out of curiosity, how many of you good folks have actually used it?  Not being contrary or challenging here.  It seems that there are a lot of people that don't use it and, if you folks don't actually use it, I'd be curious as to why not because I'm getting ready to give it a whirl myself (been too busy up 'til now).  I'm also not sure that it would provide any benefit over what we do now.  I'll also admit that I'm going to proceed very cautiously because I can't afford to lose any transactions even if the lights went down hard and the room sized UPS went with it.

    I have played around with memory-optimized tally tables. In the comment thread on your DelimitedSplit8K article, Paul White said he made DelimitedSplit8K faster, but the results have been mixed for me. I didn't like the fact that you could only get serial execution plans against memory-optimized tables in SQL Server 2014, and I haven't played around with them much in 2016. CTE tally tables are "memory-optimized" enough for me, and you don't need SQL Server 2014 😉
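
    For reference, here's the kind of zero-read CTE tally I mean - a minimal sketch, good for up to 10,000 numbers:

    -- cascading-CTE tally: no table, no IO
    with e1(n) as (select 1 from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t(n)),
         e2(n) as (select 1 from e1 a cross join e1 b), -- 100 rows
         e4(n) as (select 1 from e2 a cross join e2 b)  -- 10,000 rows
    select top (10000) n = row_number() over (order by (select null))
    from e4;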

    You might find this interesting. Inspired by this Itzik Ben-Gan article - What You Need to Know about the Batch Mode Window Aggregate Operator in SQL Server 2016: Part 1 - I created a memory-optimized columnstore tally table.

    CREATE TABLE dbo.cseTally
    (
        N int NOT NULL,
        INDEX cscle_tally CLUSTERED COLUMNSTORE WITH (COMPRESSION_DELAY = 0),
        PRIMARY KEY NONCLUSTERED (N ASC)
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
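
    The table still needs a one-time load; any row source works - something like this, assuming a million numbers is enough:

    insert dbo.cseTally (N)
    select top (1000000) row_number() over (order by (select null))
    from sys.all_columns a cross join sys.all_columns b;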

    Here's a "pattern extractor" - it's basically PatternSplitCM (Credit: Chris Morris) but only returns the unmatched rows. I tweaked the original logic to invoke batch mode processing using a window aggregate. In this example I'm extracting all sequences of numbers. 
    declare
        @string varchar(100) = '>>>abc5555xx99yyy123456zzzz321cba<<<',
        @delim  varchar(100) = '[^0-9]'; --'[<>/]';

    select
        itemNumber = row_number() over (order by min(p)), -- causes a sort; only use when required
        itemIndex  = min(p),
        item       = substring(@string, min(p), max(p) - min(p) + 1)
    from (values (@string, @delim)) c(s, d)  -- constants: string, delimiter
    cross apply
    (
        select sum(iif(i like d, 1, 0)) over (order by p), p, i
        from
        (
            select top (datalength(s)) n, substring(s, n, 1)
            from dbo.cseTally
            order by n
        ) ng(p, i)      -- ngrams: position, item
    ) split(g, p, i)    -- split: grouper, position, item
    where i not like d
    group by g;

    I attached the execution plan - all batch-mode processing. (In 2016, batch mode only kicks in when a columnstore index is involved, which is why the tally table above has one.) The performance is not as good when I run it against a regular rowstore sample table, because there I get row-mode processing. Fun little experiment.

    "I cant stress enough the importance of switching from a sequential files mindset to set-based thinking. After you make the switch, you can spend your time tuning and optimizing your queries instead of maintaining lengthy, poor-performing code."

    -- Itzik Ben-Gan 2001
