Anyone Using or Planning to Use Memory-Optimized Tables?

  • I was just wondering if this feature will be used in the real world.

    Not being able to have identity columns and foreign key constraints looks like a major drawback, so I wonder if it will be used for anything besides generating impressive TPC tests for Microsoft.
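
    To make the drawback concrete, here's a rough sketch of what the SQL 2014 workarounds look like (all object names invented, and the database needs a MEMORY_OPTIMIZED_DATA filegroup first): a SEQUENCE stands in for the missing IDENTITY, and since FOREIGN KEY constraints aren't allowed, referential integrity has to move into procedure or application code.

        -- Illustrative sketch only; names are made up.
        CREATE SEQUENCE dbo.OrderIdSeq AS BIGINT START WITH 1;
        GO
        CREATE TABLE dbo.OrdersInMem
        (
            OrderId    BIGINT NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
            CustomerId INT NOT NULL,       -- no FK allowed: the caller must enforce integrity
            OrderDate  DATETIME2 NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
        GO
        -- The SEQUENCE substitutes for IDENTITY:
        INSERT INTO dbo.OrdersInMem (OrderId, CustomerId, OrderDate)
        VALUES (NEXT VALUE FOR dbo.OrderIdSeq, 42, SYSUTCDATETIME());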

  • I do have plans for implementation, but I'm still struggling with how to fit this into the architecture. There are certain benefits, but the question is how to fit them into the big picture with a tangible ROI. As it stands, I'm looking at at least a few months of parallel operation before I'll make any comments. As you mentioned, there are drawbacks, and from my standpoint they are significant.

    😎

  • Michael Valentine Jones (4/28/2014)


    I was just wondering if this feature will be used in the real world.

    Not being able to have identity columns and foreign key constraints looks like a major drawback, so I wonder if it will be used for anything besides generating impressive TPC tests for Microsoft.

    For the very reasons you mention and more, no... I have no plans to use Hekaton for at least the foreseeable future.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • The thing is, I think the technology is amazing, but it's really all about the ability to collect data and feed data out at high speed in an OLTP environment. Way too many people are going to think that it solves all sorts of problems that it doesn't. It's not a panacea. In fact, it's extremely focused in the problems it can solve, but I think it solves those problems well. Again, though, you have to have big enough machines to really make it work. We're still getting all sorts of questions from people running on an 8 GB server; they're not going to implement in-memory anything any time soon. It's a niche solution, albeit a pretty cool one. But no, no immediate plans, because we're nowhere near the target audience and don't have the kind of hardware necessary to make the whole thing work.
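
    For anyone wondering how close their hardware is, the actual footprint is easy to check; if I recall correctly, this SQL 2014 DMV reports per-table memory use in the current database:

        -- How much memory the memory-optimized tables are really consuming:
        SELECT OBJECT_NAME(object_id) AS table_name,
               memory_allocated_for_table_kb,
               memory_used_by_table_kb
        FROM sys.dm_db_xtp_table_memory_stats;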

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • No plans here so far. We looked into it; the problem is that there's no way to put a WHERE clause on a memory-optimized table, i.e. to keep only a slice of it in memory (hint: Microsoft). Our problem is that we'd be dealing with massive amounts of data, but we'd only be interested in, say, the most recent three months of transactions for a particular customer, as opposed to the customer's entire history.
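
    Lacking a built-in filter, the closest workaround I can picture is a hot/cold split, sketched below with invented names: a memory-optimized table holds the rolling three-month window, and a scheduled job sweeps anything older into a disk-based archive.

        -- Rough sketch only; table and column names are made up.
        DECLARE @cutoff DATETIME2 = DATEADD(MONTH, -3, SYSUTCDATETIME());

        BEGIN TRAN;
            -- Copy aged rows to the disk-based history table...
            INSERT INTO dbo.CustomerTranArchive
            SELECT * FROM dbo.CustomerTranHot WITH (SNAPSHOT)  -- hint needed on in-memory tables in explicit transactions
            WHERE TranDate < @cutoff;

            -- ...then remove them from the in-memory table.
            DELETE FROM dbo.CustomerTranHot WITH (SNAPSHOT)
            WHERE TranDate < @cutoff;
        COMMIT;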

  • No plans, but I have advised two clients to consider replacing SQL 2012 with SQL 2014 in their upgrade plans. Partially because, knowing these guys, they won't actually upgrade until around 2016, but also so that I can consider Hekaton for very specific, very carefully targeted portions of their systems.

    Not that they're in any way at the point where there are no other performance gains to be had. Hell, with most databases I see, it's a combo of 'Did a cat throw up on that stored proc, or did you write it that way?', 'Indexes? What are they?' and '2003 called and would like its hardware back' :hehe:

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Grant Fritchey (4/29/2014)


    The thing is, I think the technology is amazing, but it's really all about the ability to collect data and feed data out at high speed in an OLTP environment. Way too many people are going to think that it solves all sorts of problems that it doesn't. It's not a panacea. In fact, it's extremely focused in the problems it can solve, but I think it solves those problems well. Again, though, you have to have big enough machines to really make it work. We're still getting all sorts of questions from people running on an 8 GB server; they're not going to implement in-memory anything any time soon. It's a niche solution, albeit a pretty cool one. But no, no immediate plans, because we're nowhere near the target audience and don't have the kind of hardware necessary to make the whole thing work.

    I absolutely agree. It is both a niche and a nice solution for high-speed OLTP, and a lot of people are thinking of it more as a general-purpose RAM disk, which it isn't. It's very targeted and does well for what it was designed for. I do wish it had IDENTITY column capabilities, though.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • From what I understand, a clustered columnstore index gets its updatability through a similar staging idea, though not through in-memory OLTP itself: newly inserted rows land in a row-based delta store first, and once a delta rowgroup fills (around a million rows, I believe) a background tuple mover compresses it into columnstore segments. Hence it's "updatable".
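
    If anyone wants to watch that happen, something along these lines should show it (dbo.Sales is a made-up table):

        -- Illustrative only: rowgroup states distinguish the delta store from compressed segments.
        CREATE CLUSTERED COLUMNSTORE INDEX CCI_Sales ON dbo.Sales;

        SELECT row_group_id,
               state_description,  -- OPEN/CLOSED = delta store, COMPRESSED = columnstore
               total_rows
        FROM sys.column_store_row_groups
        WHERE object_id = OBJECT_ID('dbo.Sales');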

  • No. I did some testing with it and found the performance increase to be very nice. However, in order to get the tables involved to comply with the requirements, they basically had to be modified in many ways, which would mean the application(s) would have to be modified as well. None of that is going to happen anytime soon. There is also the question of memory for the server itself: in order to really load up, we would need to move to the Enterpri$e Edition, and the costs are just not something we are going to deal with at this point. Things like a SAN are way ahead of that.

    Cheers

  • Anyone think that a good use case for a Hekaton table would be the ASPSession database tables? Obviously there are alternatives to caching session data, but for those using SQL, it might be a good solution (assuming the FKs and identity columns aren't an issue; I'm not at my work computer at the moment to check for those in that DB).
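
    For what it's worth, here's roughly what such a table might look like (purely hypothetical names, not the actual ASPState schema). Session state is transient anyway, so a non-durable SCHEMA_ONLY table seems like a natural fit, with the caveats that SQL 2014 in-memory tables can't hold LOBs and string index keys need a BIN2 collation:

        -- Hypothetical sketch only.
        CREATE TABLE dbo.SessionStateInMem
        (
            SessionId   NVARCHAR(88) COLLATE Latin1_General_100_BIN2 NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 131072),
            SessionItem VARBINARY(7000) NOT NULL,  -- no LOB types in-memory, so capped
            Expires     DATETIME2 NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);  -- non-durable: contents vanish on restart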

  • So Gail, what does cat barf in a stored proc look like?

    ...Just giving you a hard time. That comment made me chuckle. Can't tell you how many times I've run across that.

    Either that, or the client had every file on the same set of disks. Slow disks. Along with cat barf to the nth degree: useless indexes that never get used. Zero reads.

    Write performance? Yeah, go commit, grab some coffee, watch the entire Lord of the Rings trilogy end-to-end. Unabridged version.
