The Ins and outs of In-Memory OLTP

  • Comments posted to this topic are about the item The Ins and outs of In-Memory OLTP

    -Roy

  • You mention OLTP In-Memory tables help with CPU bottlenecks - do you have any further details/reference material on this?

  • Thanks for the excellent article. I am more interested in migrating a database that is not "In-Memory" to a database that is In-Memory. Is there a wizard, or does one have to manually convert all the objects to In-Memory objects, e.g. creating MEMORY_OPTIMIZED tables and importing data from non-MEMORY_OPTIMIZED to MEMORY_OPTIMIZED tables?

  • Hi Simthembile,

    You will have to manually migrate the objects. Please keep in mind that it is not recommended to turn all disk-based tables into memory-optimized objects; you might end up in an out-of-memory situation.

    You will have to look at which tables are accessed the most and then recreate those tables as memory-optimized objects, importing the data afterwards (see the sketch below).

    -Roy
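
    A minimal sketch of that manual migration. The table, column names, and bucket count here are only placeholders, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup:

    ```sql
    -- 1. Create the memory-optimized version of a hot disk-based table.
    CREATE TABLE dbo.OrderDetail_InMem
    (
        OrderDetailId INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        OrderId       INT NOT NULL,
        Quantity      INT NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    -- 2. Copy the data across from the existing disk-based table.
    INSERT INTO dbo.OrderDetail_InMem (OrderDetailId, OrderId, Quantity)
    SELECT OrderDetailId, OrderId, Quantity
    FROM dbo.OrderDetail;
    ```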

  • Hi Gabriel,

    In this version of SQL Server, executing T-SQL statements does not require as many CPU instructions. Also, if you create compiled stored procedures, they are created as DLLs and loaded into memory when the database is initialized. As you probably know, DLLs are faster since they are already compiled (see the sketch below).

    -Roy
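
    A minimal sketch of such a natively compiled procedure. The procedure name is invented for the example, and dbo.OrderDetail_InMem is assumed to be a memory-optimized table like the one sketched above:

    ```sql
    CREATE PROCEDURE dbo.usp_GetOrderDetail
        @OrderDetailId INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH
        (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

        -- Compiled to a DLL when the procedure is created, and reloaded
        -- into memory when the database starts up.
        SELECT OrderDetailId, OrderId, Quantity
        FROM dbo.OrderDetail_InMem
        WHERE OrderDetailId = @OrderDetailId;
    END;
    ```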

  • Roy Ernest (9/16/2014)


    Hi Gabriel,

    In this version of SQL Server, executing T-SQL statements does not require as many CPU instructions.

    So does this mean that cursors on regular disk based tables are going to run faster?

  • I strongly recommend not using cursors, whether on disk-based or memory-optimized tables. You still have to loop through each row. I am sure you can rewrite those as set-based operations (see the sketch below).

    -Roy
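
    For example, a row-by-row cursor update can usually be rewritten as a single set-based statement. The table and column names here are invented for illustration:

    ```sql
    -- Cursor version: processes one row at a time.
    DECLARE @OrderId INT;
    DECLARE order_cur CURSOR FOR
        SELECT OrderId FROM dbo.Orders WHERE Status = 'Pending';
    OPEN order_cur;
    FETCH NEXT FROM order_cur INTO @OrderId;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        UPDATE dbo.Orders SET Status = 'Processed' WHERE OrderId = @OrderId;
        FETCH NEXT FROM order_cur INTO @OrderId;
    END;
    CLOSE order_cur;
    DEALLOCATE order_cur;

    -- Set-based version: one statement does the same work.
    UPDATE dbo.Orders
    SET Status = 'Processed'
    WHERE Status = 'Pending';
    ```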

  • Roy Ernest (9/17/2014)


    I strongly recommend not using cursors, whether on disk-based or memory-optimized tables. You still have to loop through each row. I am sure you can rewrite those as set-based operations.

    I know the recommendations; I was posting in response to your post possibly implying that T-SQL had improved in this version, and was wondering whether cursor and procedural-style programming speed has improved. I know that memory-resident tables accessed through DLLs see improvements with compiled code, but I haven't tried 2014 yet and was wondering if the interpreted T-SQL picked up some improvements.

    I know about the advantages of set-based SQL, but was particularly interested in whether procedural, non-set-oriented coding saw significant improvements across the board with this release, or whether it was limited to the compiled DLLs accessing the memory-resident tables.

  • Roy ... I have heard a presentation on this previously. The question I had then, and now, involves the performance gain that is supposed to come with in-memory tables.

    Operating systems and the buffer pool manager use a "least recently used" algorithm for managing memory and buffer pool pages. If a table (or part of a table) is hit hard enough to warrant keeping it in memory, I would suspect that it already is in memory, simply because it is active enough for the algorithm not to swap it out.

    Since in-memory tables draw down on total memory, why not just let SQL Server manage all the memory in the buffer pool? It works at a much more granular page level. The pages that are not active in a mostly active table will stay out of the buffer pool (where I think they should be).

    The same thing goes for SSDs. You cannot get faster than actual memory (I realize there is L1/L2/L3 cache). Unless you are truly saving spinning-disk I/O, it doesn't make sense to me to use SSDs. If there is a need, I would probably put TempDB on them.

  • Hey Patrick,

    There is an improvement in performance, but interpreted T-SQL does not perform like the compiled stored procedures. In SQL 2014 there was also some major work done on the query optimizer's cardinality estimation, which has helped with performance (see the sketch below).

    I am not sure if they did anything specific to Cursors though.

    -Roy
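
    For what it's worth, a minimal sketch of how the new cardinality estimator comes into play. The database and table names are placeholders: the 2014 estimator is used under compatibility level 120, and trace flag 9481 can force an individual query back to the legacy estimator for comparison:

    ```sql
    -- The new cardinality estimator is used when the database runs
    -- under the SQL Server 2014 compatibility level (120).
    ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 120;

    -- A single query can be pushed back to the legacy (pre-2014)
    -- cardinality estimator via trace flag 9481, to compare plans.
    SELECT OrderId, SUM(Quantity) AS TotalQuantity
    FROM dbo.OrderDetail
    GROUP BY OrderId
    OPTION (QUERYTRACEON 9481);
    ```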

  • Hey Jim,

    You are right about the data being in memory for tables that are accessed regularly. But what about the write speed? Even when the data is persisted, the way changes are written to the T-log is slightly different for in-memory objects compared to disk-based tables; there are fewer write operations for in-memory objects.

    Regarding SSDs, I see a big gain in performance on write operations compared to disks. Our new server has spinning disks just for the OS; all the databases are on SSDs, and we have seen a very big improvement in IO. Our DB is not large, just 1.5 TB, but it is very active. At any given moment we have more than 6,000 batch requests per second.

    -Roy

  • Roy Ernest (9/18/2014)


    Hey Patrick,

    There is an improvement in performance, but interpreted T-SQL does not perform like the compiled stored procedures. In SQL 2014 there was also some major work done on the query optimizer's cardinality estimation, which has helped with performance.

    I am not sure if they did anything specific to Cursors though.

    OK, I doubt it helps procedural stuff. I was just curious because your item #1 read as though T-SQL itself might have been worked on for CPU contention, and I wondered whether it was just referring to the compiled DLLs.

  • Roy ... thanks for the response. I really mean this because I haven't received one previously on this subject.

    You're right -- T-log sequential writes can really suck. Probably the best thing you can do is avoid parity-based RAID. I am unfamiliar with the improvements in logging and locking for in-memory tables, but a question I would have is: why can't the same logging improvements be made to disk-based tables? And is there a problem if you revert back to a disk-based table without a full backup? I suppose that, like anything new, it is wise to run some rigorous tests to make sure it is right for your particular shop.

    Also to that point, in a perfect world I would put everything in memory, or at least on SSD. But the realities of life are that IT has historically been viewed by top management as overly expensive. Mind you, I don't agree with those twits. Nevertheless, IT is constantly defending and justifying costs with every new silver bullet that comes along. It might be open source (or open chaos as I like to refer to it), appliance databases, outsourcing, etc. Maybe in time this will work, but my experience is that "sexy" approaches like SSD have to be a solution to an existing performance problem, not a proactive, anticipatory solution. I could only wish that we all could work in companies where top management would view IT as a competitive advantage, rather than a necessary evil.

    Thanks again for the discussion.

    Jim

  • Hi Jim,

    There is a slight difference in how log writes are done for memory-optimized tables. For non-persisted data there are no writes to the T-log at all. For persisted data, the writes to the T-log are done in batches: if the data in one row is changing very frequently, those changes are not all written to the T-log individually; only one write is done with all the information needed (see the sketch below for how the two durability options are declared).

    Also, the changes are written as deltas, which further reduces the writes.

    I am lucky to work in a company that gives importance to performance and doesn't worry too much about the $$. In the long run they get their money's worth. They also encourage us to expand our knowledge.

    -Roy
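
    The persisted/non-persisted distinction Roy mentions is declared per table with the DURABILITY option; a minimal sketch, with table and column names invented for the example:

    ```sql
    -- SCHEMA_ONLY: data is never written to the transaction log and is
    -- lost on restart; useful for staging or session-state style data.
    CREATE TABLE dbo.SessionCache
    (
        SessionId INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
        Payload   VARBINARY(8000)
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

    -- SCHEMA_AND_DATA: data is durable, but the log records are smaller
    -- and batched compared to the logging for a disk-based table.
    CREATE TABLE dbo.OrderTotals
    (
        OrderId INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
        Total   MONEY NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
    ```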

  • Hey, good article Roy!
