
Improve performance by replacing temp tables with memory optimized tables - Part 3


Welcome to the last part of this series. You can find the previous posts here (http://www.sqlservercentral.com/blogs/denniss-sql-blog-1/2017/02/01/improve-performance-by-replacing-temp-tables-with-memory-optimized-tables-part-1/) and here (http://www.sqlservercentral.com/blogs/denniss-sql-blog-1/2017/02/08/improve-performance-by-replacing-temp-tables-with-memory-optimized-tables-part-2/).

Previously we discussed using memory optimized tables to replace table variables and global temp tables; this time we will replace local temp tables and look for a performance gain. We all use local temp tables here and there, and it's pretty common in development to use them to hold temporary data during a transaction. With SQL Server 2016 we should be able to get a performance improvement with memory optimized tables.

Same as last time, let's assume a simple case: a local temp table is used within a stored procedure, and the procedure inserts 300 records into it. Because a local temp table's existence is scoped to its session, we need to modify our memory optimized table to fit that requirement: each session should only be allowed to read its own data and nothing else. Given the way memory optimized tables work, they behave just like normal user tables, so we need to add some constraints to limit their usage.

Firstly, let's start with the local temp table:

CREATE PROCEDURE sp_localtemp
AS
BEGIN
  SET NOCOUNT ON;

  -- recreate the temp table for this execution
  DROP TABLE IF EXISTS #localtemp;
  CREATE TABLE #localtemp
  (
    ColA INT NOT NULL ,
    ColB NVARCHAR(4000)
  );

  -- insert 300 rows inside a single transaction
  BEGIN TRAN;
  DECLARE @i INT = 0;
  WHILE @i < 300
  BEGIN
    INSERT #localtemp
    VALUES ( @i, N'xyz' );
    SET @i += 1;
  END;
  COMMIT;
END
GO

By using the ostress tool again, we simulate 100 connections, each executing the stored procedure 1,000 times - a total of 100,000 executions. It took a total of 27 seconds.
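If you want to reproduce the test, the run can be driven with a command along these lines (a minimal sketch; the server name and database name are placeholders for your own environment):

ostress -S. -E -dTempTest -Q"EXEC dbo.sp_localtemp" -n100 -r1000 -q

Here -n is the number of connections and -r is the number of iterations per connection.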

As discussed previously, the memory optimized table replacing a local temp table needs additional constraints to ensure that only its own session can view and modify its data. This is done by creating a filter function along with a security policy.

-- a single filter function can be used for all session-level temp tables
CREATE FUNCTION dbo.fn_SessionFilter(@session_id smallint)
  RETURNS TABLE
WITH SCHEMABINDING, NATIVE_COMPILATION
AS
  RETURN SELECT 1 as fn_SessionFilter WHERE @session_id=@@spid;
GO
DROP TABLE IF EXISTS dbo.memtemp1
GO
CREATE TABLE dbo.memtemp1
(
  ColA INT NOT NULL ,
  ColB NVARCHAR(4000) ,
  -- default the session_id column to the current session,
  -- and block inserts that spoof another session's id
  session_id SMALLINT NOT NULL DEFAULT ( @@spid ) ,
  INDEX IX_session_id ( session_id ) ,
  CONSTRAINT CHK_temp1_session_id CHECK ( session_id = @@spid )
)
WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY);
GO
-- add security policy to enable filtering on session_id, for each temp table
CREATE SECURITY POLICY dbo.secFilter
ADD FILTER PREDICATE dbo.fn_SessionFilter(session_id)
ON dbo.memtemp1
WITH (STATE = ON);
GO
DROP PROCEDURE IF EXISTS sp_memtemp1
GO
CREATE PROCEDURE sp_memtemp1
AS
BEGIN
  SET NOCOUNT ON;
  -- the security policy filters the delete to this session's rows only
  DELETE FROM dbo.memtemp1;
  BEGIN TRAN;
  DECLARE @i INT = 0;
  WHILE @i < 300
  BEGIN
    INSERT dbo.memtemp1 (ColA, ColB)
    VALUES ( @i, N'xyz' );
    SET @i += 1;
  END;
  COMMIT;
END
GO
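As a quick sanity check (a minimal sketch, separate from the benchmark), you can confirm the session filtering works by running the procedure and then querying the table from two different connections; each session should only ever see its own rows:

-- session 1
EXEC dbo.sp_memtemp1;
SELECT COUNT(*) FROM dbo.memtemp1; -- 300: this session's rows

-- session 2 (a separate connection)
SELECT COUNT(*) FROM dbo.memtemp1; -- 0: session 1's rows are filtered out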

With the same number of executions by 100 connections, it only took 7 seconds - close to a 4X performance increase. It's not as good as the previous use cases, but it is still a pretty good performance gain. Like everything else, there is a small code change involved, so please do test it in your own environment and make sure it works the way you expect.
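If you want to remove the demo objects after testing, one way to clean up is shown below; the security policy has to go first, since the table and the schemabound filter function cannot be dropped while the policy still references them:

DROP SECURITY POLICY IF EXISTS dbo.secFilter;
DROP PROCEDURE IF EXISTS sp_memtemp1;
DROP TABLE IF EXISTS dbo.memtemp1;
DROP FUNCTION IF EXISTS dbo.fn_SessionFilter;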

I hope this series helps everyone using SQL Server 2016 to improve performance in their own environment and push their workloads further.
