Evaluating use of InMemory tables... Why is it slower than temp tables?

  • I created a table like this
    CREATE TABLE blahblahblah(
      LeafLevelDatasetId  INT NOT NULL IDENTITY(1, 1),
      DateKey   INT NOT NULL,
      SafeRegionKey  SMALLINT NOT NULL,
      SafeCountryKey  SMALLINT NOT NULL,
      etc...
    ,SpidFilter SMALLINT NOT NULL DEFAULT (@@spid)
    ,CONSTRAINT CHK_UILeafLevelDatasetSessionTable_SpidFilter CHECK ( SpidFilter = @@spid )
    ,INDEX UNCIX_UILeafLevelDatasetSessionTable NONCLUSTERED (
    SpidFilter, DateKey, SafeRegionKey, SafeCountryKey
    , Attribute1ValueKey, Attribute2ValueKey, Attribute3ValueKey, Attribute4ValueKey, Attribute5ValueKey)
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
    GO
    CREATE SECURITY POLICY ui.UILeafLevelDatasetSessionTable_SpidFilter_Policy
    ADD FILTER PREDICATE ui.udfSpidFilter(SpidFilter)
    ON ui.UILeafLevelDatasetSessionTable
    WITH (STATE = ON);

    I tried to pump about 500,000 rows into it and into a similar temp table (with a unique clustered index), and it is always faster to insert into the temp table...
    The in-memory table takes about 3 seconds, while the temp table takes about 2...
    Am I doing something drastically wrong?

    Many thanks

  • Eric Mamet - Tuesday, August 8, 2017 11:04 AM

    I created a table like this
    ...
    Am I doing something drastically wrong?

    It was my impression that the purpose of in-memory tables was to make it faster to read the data, not to write it.  You should at least compare how long it takes to read from both types of tables.

    Drew

    J. Drew Allen
    Business Intelligence Analyst
    Philadelphia, PA

  • Fair point... so I tried to read back as well.
    Now, the insert into the in-memory table is sometimes faster, but the read is a lot slower!

    I attach my code just in case there is something flawed...
    The times I got (using 1,000,000 rows)
    Insert into In Memory table: 3.7s
    Insert into Tmp table: 7.5s
    Read from In Memory table: 0.5s
    Read from Tmp table: 0.06s
    Delete data from InMemory table: 2.1s

    The times vary between executions, but the trend remains the same.

    This is not really convincing...🙁
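
    For reference, a minimal sketch of the kind of timing harness involved (the table name, column list, and row source here are illustrative placeholders, not the original script):

    ```sql
    -- Sketch only: dbo.UILeafLevelDatasetSessionTable and its columns are placeholders.
    DECLARE @t0 DATETIME2(7) = SYSDATETIME();

    -- Generate ~1,000,000 rows from a system-catalog cross join
    INSERT INTO dbo.UILeafLevelDatasetSessionTable (DateKey, SafeRegionKey, SafeCountryKey)
    SELECT TOP (1000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),  -- stand-in for DateKey
           1, 1
    FROM sys.all_columns AS a
    CROSS JOIN sys.all_columns AS b;

    SELECT DATEDIFF_BIG(MILLISECOND, @t0, SYSDATETIME()) AS insert_ms;
    ```

    Running the same batch against each table type (and again for the reads) gives directly comparable elapsed times.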

  • Something is up.

    I don't have this in production on any of my servers; however, I test it and demo it regularly. Memory-optimized tables have almost always been radically faster than temporary tables (and regular tables, and table variables...). They actually are designed to be faster on inserts, since the engine eliminates latching and defers log writes; it's part of the design. They may not always be faster on reads, but on inserts they should blaze.

    That key for your index is pretty wide; however, that shouldn't be an issue. Try capturing the wait stats on the query using Extended Events.
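
    A minimal Extended Events session along these lines would do it (the session name, file path, and session_id value are placeholders to adjust):

    ```sql
    -- Placeholders: session name, target file path, and session_id (53) are examples.
    CREATE EVENT SESSION WaitStatsForSpid ON SERVER
    ADD EVENT sqlos.wait_info (
        ACTION (sqlserver.sql_text)
        WHERE sqlserver.session_id = 53
          AND duration > 0
    )
    ADD TARGET package0.event_file (SET filename = N'C:\Temp\WaitStatsForSpid.xel');
    GO

    ALTER EVENT SESSION WaitStatsForSpid ON SERVER STATE = START;
    ```

    Start the session, run the insert, stop the session, then look at which wait types dominate.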

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Ok, will do.

    It was my understanding too that inserts should be blisteringly fast... and I need that to improve some speed issues I have with existing stored procedures.

  • Grant Fritchey - Wednesday, August 9, 2017 6:41 AM

    Something is up.
    ...
    That key for your index is pretty wide. However that shouldn't be an issue. Try capturing the wait stats on the query using extended events.

    I am not familiar with Extended Events, so please let me know if I've missed something. I attach the Extended Events file (as well as a Profiler trace, in case it helps...).

    Otherwise, could it simply be that there is not much benefit unless we use Natively Compiled Stored Procedures?
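
    For illustration, a natively compiled procedure would look something like this (the procedure name, table name, and column list are placeholders, not the actual schema):

    ```sql
    -- Sketch only: object names and columns are illustrative.
    CREATE PROCEDURE dbo.usp_InsertLeafLevelRow
        @DateKey        INT,
        @SafeRegionKey  SMALLINT,
        @SafeCountryKey SMALLINT
    WITH NATIVE_COMPILATION, SCHEMABINDING
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        INSERT dbo.UILeafLevelDatasetSessionTable (DateKey, SafeRegionKey, SafeCountryKey)
        VALUES (@DateKey, @SafeRegionKey, @SafeCountryKey);
    END;
    ```

    Natively compiled procedures avoid the interpreted-execution overhead on each call, which is where much of the advertised In-Memory OLTP speedup comes from.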

  • I tried to open the extended events file but it didn't work. What were the wait statistics?


  • The Extended Events file gives me a lot of MEMORY_ALLOCATION_EXT and some SOS_SCHEDULER_YIELD waits.

    Is it possible to aggregate those numbers to get some idea?
    As you may guess, I have not made the move from Profiler to Extended Events yet... Maybe now is the time?

  • Eric Mamet - Thursday, August 10, 2017 2:08 AM

    The extended events file gives me a lot of MEMORY_ALLOCATION_EXT and some SOS_SCHEDULER_YIELD
    ...

    Yes. That, I can pretty easily answer. All new functionality is in Extended Events, and no new functionality is being added to Trace; it's been this way since 2012. The more new stuff you do and use, the less Trace has to offer. It's time to make the leap.

    As to aggregating, you can use the GUI (which I think is better than people give it credit for, although it is flawed). Right-click on the package (the screen at the bottom) to add columns to the screen at the top. When you're reading from a file, as you are, the aggregation button on the toolbar should be immediately visible. That'll do it. Aggregate initially by query to see what's going on. I wish I could see the file, but I can't get it to open correctly.
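
    If the GUI proves awkward, the same aggregation can be done in T-SQL by reading the .xel file directly (the file path is a placeholder; this assumes the session captured sqlos.wait_info events):

    ```sql
    -- Point the path at the actual .xel output of the wait-stats session.
    WITH waits AS (
        SELECT CAST(event_data AS XML) AS x
        FROM sys.fn_xe_file_target_read_file(N'C:\Temp\WaitStatsForSpid*.xel', NULL, NULL, NULL)
    )
    SELECT x.value('(event/data[@name="wait_type"]/text)[1]', 'nvarchar(60)') AS wait_type,
           COUNT(*)                                                          AS wait_count,
           SUM(x.value('(event/data[@name="duration"])[1]', 'bigint'))       AS total_duration_ms
    FROM waits
    GROUP BY x.value('(event/data[@name="wait_type"]/text)[1]', 'nvarchar(60)')
    ORDER BY total_duration_ms DESC;
    ```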

    How much memory is on the box, and how much are you attempting to load into memory? It's possible you're overloading the memory, causing it to spill a bunch of stuff to disk, which is slowing you down, whereas the temp table just automatically starts spilling to disk without forcing everything else out first.
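
    The memory actually consumed by the memory-optimized tables can be checked with a DMV, for example:

    ```sql
    -- Per-table memory consumption for memory-optimized tables in the current database
    SELECT OBJECT_NAME(tms.object_id)          AS table_name,
           tms.memory_allocated_for_table_kb,
           tms.memory_used_by_table_kb
    FROM sys.dm_db_xtp_table_memory_stats AS tms;
    ```

    Comparing that against the server's available memory will show whether you're anywhere near the point of squeezing out the buffer pool.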


  • Hi,

    Have you considered the drive and file allocations for the delta checkpoint files and log file?

    'To reduce any log waits, and improve overall transaction throughput, it's best to place the log file on a drive with fast random I/O, such as an SSD drive. As the use of memory-optimized tables allows for a much greater throughput, we'll start to see a lot of activity needing to be written to the transaction log although, as we'll see in Chapter 6, the overall efficiency of the logging process is much higher for in-memory tables than for disk-based tables.' - SQL Server In-Memory OLTP by Kalen Delaney.
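
    For reference, the checkpoint file containers live in a memory-optimized filegroup, which can be placed on fast storage along these lines (the database name and file paths are example placeholders; with SCHEMA_ONLY durability the filegroup is still required, even though little is written to it):

    ```sql
    -- Placeholders: database name and container path are examples.
    ALTER DATABASE MyDb
        ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;

    ALTER DATABASE MyDb
        ADD FILE (NAME = 'imoltp_container1',
                  FILENAME = 'D:\FastStorage\imoltp_container1')
        TO FILEGROUP imoltp_fg;
    ```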
