• Koen Verbeeck (7/31/2014)


    ChrisM@Work (7/31/2014)


    Koen Verbeeck (7/31/2014)


    Lynn Pettis (7/31/2014)


    Lynn Pettis (7/31/2014)


    SQLRNNR (7/30/2014)


    Lynn Pettis (7/30/2014)


    Really?? Let's put a 500 million row table into an in-memory table.

    It could work - depending on what data is in that table and the data types. Of course, you'd have to have enough memory for it. I'm sure Microsoft is hoping for bigger than that at some point.

    Yes, probably. Unfortunately the OP didn't say how big the rows of data are or how much memory his server has, so one can only speculate that he doesn't understand the new in-memory tables. I don't either, but from what I've read there are restrictions and conditions one must understand before using them. Not something I would want to jump into without time to test and play in a sandbox first.

    Okay, he has the memory:

    Size of the table: 195 GB

    Number of Indexes: 90

    220 GB of memory is allocated to SQL Server alone, out of 260 GB total server memory

    So that table would eat up 195 GB of SQL Server's 220 GB, leaving only 25 GB for everything else. Curious what will happen when large queries are run against it.

    I wonder what the design of that database looks like...

    90 INDEXES! And the performance is still so poor that they're trialling in-memory tables? That smells to me like cr@p queries.

    Maybe they want to improve inserts. No locking and all with in-memory tables 😀

    That is possible. I think it is to prove they can do their nightly load into the warehouse faster.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events
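
For anyone tempted to try this, here's a minimal sketch of what moving a table in-memory looks like in SQL Server 2014. Database, table, and column names are made up for illustration. Note two of the restrictions mentioned above: the database needs a memory-optimized filegroup first, and a memory-optimized table supports at most 8 indexes (all defined inline at CREATE time) - so those 90 indexes would have to go:

```sql
-- Assumed database name; the filegroup/container must exist before any
-- memory-optimized table can be created.
ALTER DATABASE SalesDW
    ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDW
    ADD FILE (NAME = 'imoltp_data', FILENAME = 'C:\Data\imoltp_data')
    TO FILEGROUP imoltp_fg;

-- Hypothetical fact table. Hash indexes need a BUCKET_COUNT sized to the
-- expected number of unique keys (~500 million rows here), and
-- DURABILITY = SCHEMA_AND_DATA keeps the data across restarts.
CREATE TABLE dbo.FactSales_InMem
(
    SaleID    BIGINT        NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000000),
    SaleDate  DATETIME2     NOT NULL,
    Amount    DECIMAL(18,2) NOT NULL,
    INDEX ix_SaleDate NONCLUSTERED (SaleDate)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

And with SCHEMA_AND_DATA durability the whole table still has to fit in memory, so the 195 GB problem doesn't go away - it just moves.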