Natively Compiled Stored Procedures: What they are all about!

  • Ed Pollack (8/6/2014)


    Yes! You can definitely create an in memory table using a table type and pass it to a natively compiled stored proc. The only restriction is that the table type must be passed into the stored proc as READONLY. This is probably OK for whatever applications you have in mind, but the compiler will throw an error if you leave the keyword off of the parameter.

    I didn't mean a "permanent" in-memory table, but a table variable, which is in-memory. The advantage of the table variable is that it is kept within scope, so multiple parallel applications can work without interfering with one another. Once the application completes, the table variable is gone. A permanent in-memory table needs to be cleaned up and managed by each instance of the application by an application instance ID of some sort.

  • N_Muller (8/6/2014)


    Ed Pollack (8/6/2014)


    Yes! You can definitely create an in memory table using a table type and pass it to a natively compiled stored proc. The only restriction is that the table type must be passed into the stored proc as READONLY. This is probably OK for whatever applications you have in mind, but the compiler will throw an error if you leave the keyword off of the parameter.

    I didn't mean a "permanent" in-memory table, but a table variable, which is in-memory. The advantage of the table variable is that it is kept within scope, so multiple parallel applications can work without interfering with one another. Once the application completes, the table variable is gone. A permanent in-memory table needs to be cleaned up and managed by each instance of the application by an application instance ID of some sort.

    Understood, and unfortunately, you can't do that. If you try to pass a non-memory-optimized table type into a natively compiled stored procedure, you will get an error message that looks something like:

    Msg 41323, Level 16, State 1, Procedure PROCNAME, Line 13

    The table type 'dbo.TABLETYPENAME' is not a memory optimized table type and cannot be used in a natively compiled stored procedure.
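    For reference, here is a minimal sketch (all object names hypothetical) of what the compiler does accept: a table type created WITH (MEMORY_OPTIMIZED = ON), passed in with the mandatory READONLY keyword:

    ```sql
    -- Hypothetical names throughout; syntax per SQL Server 2014 In-Memory OLTP.
    CREATE TYPE dbo.OrderIdList AS TABLE
    (
        OrderId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024)
    )
    WITH (MEMORY_OPTIMIZED = ON);
    GO

    CREATE PROCEDURE dbo.GetOrdersById
        @OrderIds dbo.OrderIdList READONLY  -- READONLY is mandatory; omitting it is a compile error
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        -- dbo.OrdersInMemory must itself be a memory-optimized table.
        SELECT o.OrderId, o.OrderDate
        FROM dbo.OrdersInMemory o
        INNER JOIN @OrderIds ids ON ids.OrderId = o.OrderId;
    END;
    GO
    ```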

    As of now, the only flexibility you have is to create a standard stored proc that takes the table as a parameter and then uses that table and any other data you pass in to manipulate in-memory tables...which isn't particularly better than simply using your in-memory tables in standard stored procs with no additional modifications.
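    That interop workaround looks roughly like the following sketch (names hypothetical): a regular interpreted proc takes a conventional table type and pushes the rows into the memory-optimized table:

    ```sql
    -- Hypothetical names; a standard (interpreted) wrapper, not natively compiled.
    CREATE TYPE dbo.OrderIdListDisk AS TABLE
    (
        OrderId INT NOT NULL PRIMARY KEY
    );
    GO

    CREATE PROCEDURE dbo.LoadOrders_Interop
        @OrderIds dbo.OrderIdListDisk READONLY
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Interpreted T-SQL can reference on-disk and memory-optimized objects together.
        INSERT INTO dbo.OrdersInMemory (OrderId, OrderDate)
        SELECT OrderId, SYSDATETIME()
        FROM @OrderIds;
    END;
    GO
    ```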

    The limitations on mixing in-memory and non-in-memory data are fairly strict and all point in a single direction: you can pass in-memory data into non-in-memory objects without SQL Server complaining, but if you attempt to get any non-in-memory data into in-memory objects, you'll usually be out of luck.

    Variables are OK, but table variables are handled in the same fashion as a temp table or standard table, since they do involve TempDB storage, and that's a no-no for In-Memory OLTP at the moment.

  • The restrictions are there to make it easy to compile into straightforward (read "fast") code. When you're writing these, just imagine how you would write the same thing in a .NET language like C# or VB.NET. That makes a lot of the looping-oriented logic understandable. A lot of what we express as a set-based operation ends up happening iteratively under the hood. Perhaps we should call it Row By Optimized Row. Not the same ring as RBAR, sadly.

  • N_Muller (8/6/2014)

    I didn't mean a "permanent" in-memory table, but a table variable, which is in-memory. The advantage of the table variable is that it is kept within scope, so multiple parallel applications can work without interfering with one another. Once the application completes, the table variable is gone. A permanent in-memory table needs to be cleaned up and managed by each instance of the application by an application instance ID of some sort.

    Are you sure that a table variable is in-memory? Because that is not a 100% true statement.

    http://www.sqlservercentral.com/articles/Temporary+Tables/66720/

    And, great article Ed. Gives me something to play with on the upcoming rainy weekend!

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • Michael L John (5/20/2016)


    N_Muller (8/6/2014)

    I didn't mean a "permanent" in-memory table, but a table variable, which is in-memory. The advantage of the table variable is that it is kept within scope, so multiple parallel applications can work without interfering with one another. Once the application completes, the table variable is gone. A permanent in-memory table needs to be cleaned up and managed by each instance of the application by an application instance ID of some sort.

    Are you sure that a table variable is in-memory? Because that is not a 100% true statement.

    http://www.sqlservercentral.com/articles/Temporary+Tables/66720/

    And, great article Ed. Gives me something to play with on the upcoming rainy weekend!

    Thanks! Table variables are created in TempDB, since that step is necessary in order to process a temp table or table variable, but assuming the size is reasonable, all work on that table will end up residing in memory. While not 100% true, it holds for performance comparisons with anything that isn't going to be BIG.
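    For anyone who wants to see that TempDB involvement directly, a quick sketch (this reflects ordinary, on-disk table variables; memory-optimized table variables behave differently):

    ```sql
    -- Declaring a table variable creates a backing user object in tempdb;
    -- it shows up with a hex-style '#...' name while the variable is in scope.
    DECLARE @tv TABLE (i INT);

    SELECT TOP (5) name, create_date
    FROM tempdb.sys.objects
    WHERE type = 'U'
    ORDER BY create_date DESC;
    ```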

  • Ed Pollack (5/20/2016)


    Michael L John (5/20/2016)


    N_Muller (8/6/2014)

    I didn't mean a "permanent" in-memory table, but a table variable, which is in-memory. The advantage of the table variable is that it is kept within scope, so multiple parallel applications can work without interfering with one another. Once the application completes, the table variable is gone. A permanent in-memory table needs to be cleaned up and managed by each instance of the application by an application instance ID of some sort.

    Are you sure that a table variable is in-memory? Because that is not a 100% true statement.

    http://www.sqlservercentral.com/articles/Temporary+Tables/66720/

    And, great article Ed. Gives me something to play with on the upcoming rainy weekend!

    Thanks! Table variables are created in TempDB, since that step is necessary in order to process a temp table or table variable, but assuming the size is reasonable, all work on that table will end up residing in memory. While not 100% true, it holds for performance comparisons with anything that isn't going to be BIG.

    Exactly! Which is why I am re-writing a fairly large ETL implementation to eliminate the blind use of table variables!

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • I didn't even realize this existed. Thanks.

  • Ed, thanks for the research and sharing. Bummer about the limitations. Have any of these changed in the soon to be released SQL 2016?

    The more you are prepared, the less you need it.

  • Andrew..Peterson (5/23/2016)


    Ed, thanks for the research and sharing. Bummer about the limitations. Have any of these changed in the soon to be released SQL 2016?

    Yes! SQL Server 2016 removes most of the restrictions, both on In-Memory OLTP and on natively compiled stored procedures. In 2014, the exceptions were the rule: almost anything you wanted to do was unsupported. In 2016, exceptions are actual exceptions, and they are rare.

    The following doc is from CTP2, but is still accurate for the current release of 2016:

    http://download.microsoft.com/download/8/3/6/8360731A-A27C-4684-BC88-FC7B5849A133/SQL_Server_2016_In_Memory_OLTP_White_Paper.pdf

  • Hello Edward,

    In the stored procedures you have used as examples, you have two queries: one to retrieve all the records and one to retrieve a single record. The stored procedures have a parameter to specify a single record or, if omitted, to select all records. Both queries would typically require different execution plans (depending on available indexes, statistics, etc.). If I recall correctly, several advanced courses discuss the problem of sub-optimal execution plans, and splitting those stored procedures into separate ones would avoid the issue. This would of course require the IF statement to be moved to application code, as T-SQL does not support overloading of stored procedures.

    My question is why did you choose to use this specific stored procedure as an example? And not a simpler version which only queries the specific record? How would that be different in performance?

  • akrap - Monday, July 2, 2018 6:09 AM

    Hello Edward,

    In the stored procedures you have used as examples, you have two queries: one to retrieve all the records and one to retrieve a single record. The stored procedures have a parameter to specify a single record or, if omitted, to select all records. Both queries would typically require different execution plans (depending on available indexes, statistics, etc.). If I recall correctly, several advanced courses discuss the problem of sub-optimal execution plans, and splitting those stored procedures into separate ones would avoid the issue. This would of course require the IF statement to be moved to application code, as T-SQL does not support overloading of stored procedures.

    My question is why did you choose to use this specific stored procedure as an example? And not a simpler version which only queries the specific record? How would that be different in performance?

    Hi!
    The examples chosen for this article were based solely on demonstrating how natively compiled stored procedures work.  You are 100% correct in the assumption that a parameter that varies the result set from very narrow to very wide will have a higher chance of suffering the negative side of parameter sniffing.  You're also correct that splitting it into 2 procs (if allowable by the app using them) would be a great fix for this.
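    For readers following along, the split akrap describes looks roughly like this sketch (hypothetical names; compare to a single proc with an optional parameter):

    ```sql
    -- One proc per access pattern, so each gets its own cached plan
    -- and parameter sniffing can't pick the wrong one.
    CREATE PROCEDURE dbo.GetOrder_Single
        @OrderId INT
    AS
        SELECT OrderId, OrderDate
        FROM dbo.Orders
        WHERE OrderId = @OrderId;
    GO

    CREATE PROCEDURE dbo.GetOrder_All
    AS
        SELECT OrderId, OrderDate
        FROM dbo.Orders;
    GO
    -- The application decides which proc to call, replacing the IF in T-SQL.
    ```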

    These examples were intentional as they highlight performance differences between on-disk tables & procs and in-memory.  A table scan on disk vs. in-memory provides a far more dramatic performance improvement than comparing two narrow seeks.

    The goal of this article is to introduce natively compiled stored procs and how to use them.  I appreciate your eye on performance, and especially your presenting the correct solution to parameter sniffing!  Hopefully no one is copying these procs and pasting them into production environments as-is : )

    Thanks again for reading, and for your feedback!
    -Ed
