Use of Always Encrypted columns in temporary tables

  • I have implemented Always Encrypted for testing purposes. It works fine as long as I query/update physical tables, but inserting Always Encrypted data into a temporary table fails. Per the documentation, table variables cannot be used with Always Encrypted columns. The columns I am trying to encrypt are used in temporary tables in several places in our application.

    Is there any way to use temporary tables with Always Encrypted data?

  • This is going to be very complex. The issue is that SQL Server cannot decrypt the values, so it becomes problematic to use a temp table, where you are transferring data from a user database to tempdb.

    Don't forget, tempdb needs its own Always Encrypted CMK and CEK, which I was able to create. I can then create a temp table that uses them. However, any data movement to/from this table requires a .NET application to do it. You cannot do

    insert #mytemp

    select * from mycustomer

    That won't work.

  • Hi Steve,
    What would be your approach if you absolutely HAD to implement Always Encrypted columns in a SQL environment that uses many #temp tables?
    I have such a context: our app makes extensive use of #temp tables, and now we have to implement Always Encrypted columns. It is very unlikely that we can move the insert-into-temp-tables functionality to the app accessing the SQL Server - that would seriously reduce performance...
    Thank you for any pointers.

  • When you say you use lots of temp tables, are you querying/filtering on the columns that need AE? If I had

    create table Customer
    (CustomerName varchar(100),
     CustomerAddress varchar(200),
     CustomerCC varchar(50) encrypted by xxxx,
     Status int
    )

    for my data, and I did things like
    select *
    into #temptable
    from Customer
    where Status = 1

    That would likely not be an issue. If you're trying to somehow work with the data in the AE columns, you'll need to rearchitect the way your code works, and perhaps not be able to use AE on those columns, or not be able to process them as you want inside SQL Server.

  • Hi Steve - Thanks for the quick reply!
    I'm trying to store data into a #tmp table, and even this is not really working... Our first target is to be able to store a subset of data in a #tmp table and then use the #tmp table in a join, so no processing happens on the #tmp tables, for now.
    If I'm doing something really silly, please bear with me - we are just starting on this AE stuff 🙂
    Here is what I have:
    In database TEST_AE, table G_CONTACTS:

    CREATE TABLE G_CONTACTS (
    [ContactID] [int] NULL,
    [FirstName] [nvarchar](50) NOT NULL,
    [LastName] [nvarchar](50) NOT NULL,
    [PlaceOfBirth] [nvarchar](50)
      COLLATE Latin1_General_BIN2
      ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = [CEK_Auto1],
      ENCRYPTION_TYPE = Deterministic,
      ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NULL
    )
    For testing purposes, this table has about 10 rows.
    The only encrypted column is "PlaceOfBirth"
    I have created the CMK and CEK on the database, and everything looks ok for the table G_CONTACTS.
    Now I'm trying to create the temp table #tmp, so first I apply the same keys to the tempdb database.
    (Note: I have shortened the keys by partially replacing them with "...")

    USE tempdb
    CREATE COLUMN MASTER KEY [CMK_Auto1]
    WITH
    (
     KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
     KEY_PATH = N'CurrentUser/my/40...955'
    )
    GO
    USE [tempdb]
    CREATE COLUMN ENCRYPTION KEY [CEK_Auto1]
    WITH VALUES
    (
     COLUMN_MASTER_KEY = [CMK_Auto1],
     ALGORITHM = 'RSA_OAEP',
     ENCRYPTED_VALUE = 0x016E000001...48B5
    )
    GO

    Back in my database, I create the #tmp table:

    use TEST_AE
    if object_id('tempdb..#tmp') is not null
     drop table #tmp

    CREATE TABLE #tmp(
     [ContactID] [int] NULL,
     [FirstName] [nvarchar](50) NOT NULL,
     [LastName] [nvarchar](50) NOT NULL,
     [PlaceOfBirth] [nvarchar](50)
      COLLATE Latin1_General_BIN2
      ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = [CEK_Auto1],
      ENCRYPTION_TYPE = Deterministic,
      ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NULL
    )
     All good up to here.
    And finally I try to insert data from G_CONTACTS into #tmp, and this is where it fails...
    insert into #tmp
    select ContactID, FirstName, LastName, PlaceOfBirth
    from G_CONTACTS

    Error message is:
    Operand type clash:
    nvarchar(50) encrypted with (
    encryption_type = 'DETERMINISTIC',
    encryption_algorithm_name = 'AEAD_AES_256_CBC_HMAC_SHA_256',
    column_encryption_key_name = 'CEK_Auto1',
    column_encryption_key_database_name = 'TEST_AE')
    is incompatible with nvarchar(50) encrypted with (
    encryption_type = 'DETERMINISTIC',
    encryption_algorithm_name = 'AEAD_AES_256_CBC_HMAC_SHA_256',
    column_encryption_key_name = 'CEK_Auto1',
    column_encryption_key_database_name = 'tempdb')

    Thank you!

  • Are both encryption key files 100% identical?

  • Hi Steve (sgmunson),
    I created the two initial keys directly on the TEST_AE database through SSMS, then encrypted the PlaceOfBirth field in the G_CONTACTS table.
    Then I scripted out the two keys, CMK and CEK, and applied the scripts to the tempdb database.
    When creating the #tmp table, the encryption keys are available in the tempdb database...
    Was there anything I missed in this sequence? It seems so, since for some reason it complains that the fields are not compatible.
    Thank you!

  • OK, sorry, I haven't had a chance to test this, but the data can't be inserted into the temp table while encrypted. That makes sense, given the nature of this encryption: SQL Server doesn't have access to the keys that encrypt/decrypt the data; those exist on the client. That's one reason you get the incompatibility error. The instance/database can't move an encrypted value around, and it can't move it into a binary field either. Much of the restriction around AE exists because the server can't decrypt the data.

    The way to handle this is to do your processing with the non-encrypted columns. Why do you need to copy PlaceOfBirth over? You have the PK; work with that, then join back to the main table from the client to get the encrypted data.
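    A minimal sketch of that pattern, using the table from this thread (the WHERE filter is just an illustrative assumption):

    ```sql
    -- Stage only the non-encrypted columns; nothing here touches the
    -- encrypted data, so SQL Server can populate the temp table itself.
    select ContactID, FirstName, LastName
    into #tmp
    from G_CONTACTS
    where ContactID > 0;

    -- When the encrypted column is needed, join back to the base table
    -- on the key; the server only ships ciphertext, and the client
    -- driver (which holds the CMK) does the decryption.
    select t.ContactID, t.FirstName, t.LastName, c.PlaceOfBirth
    from #tmp t
    inner join G_CONTACTS c
        on c.ContactID = t.ContactID;
    ```

    The join itself never computes on PlaceOfBirth, so no decryption is required on the server side.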

  • Thank you, Steve!
    Is this a limitation that was introduced on purpose when copying encrypted data between a regular DB and tempdb (or any other database)?
    Because if I'm trying to move the data from the G_CONTACTS table into a regular table in the same DB - for example _tmp_CONTACTS - then everything works very smoothly...
    CREATE TABLE _tmp_CONTACTS (
    [ContactID] [int] NULL,
    [FirstName] [nvarchar](50) NOT NULL,
    [LastName] [nvarchar](50) NOT NULL,
    [PlaceOfBirth] [nvarchar](50)
    COLLATE Latin1_General_BIN2
    ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = [CEK_Auto1],
    ENCRYPTION_TYPE = Deterministic,
    ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NULL
    )

    insert into _tmp_CONTACTS
    select ContactID, FirstName, LastName, PlaceOfBirth
    from G_CONTACTS

    In this case, the data seems to be moved between the tables without an attempt to decrypt / re-encrypt.

    Your other suggestion would work in some of our scenarios... Most of the time, we store data in #temp tables after collecting it from different LARGE tables, so that we can then use the #temp table in joins. In such cases, we have to pull numeric and char fields into the #temp table too.
    I picked the example with G_CONTACTS and PlaceOfBirth just because it was a smaller table, with fewer records, and we needed to work out first a proof of concept.

    Thank you for taking the time to answer this!

  • andrei.toma - Monday, September 11, 2017 9:36 AM


    But did you stop and think about what Steve was trying to say? The point is that, with any table that has a primary key, you can always retrieve the encrypted data at a later point in the process: use only non-encrypted data in your temp tables, then use the retrieved primary-key values to select out any needed encrypted data. Of course, if you need to filter on a field that is encrypted, you're going to have to go about it in an entirely different manner.

  • It's a restriction because the keys are in different databases; I believe that, as a security precaution, you can't reference those keys from another DB. I hadn't thought about this before, but I'm not surprised by the issues. I wish we had tempdb space in each db to alleviate this, but oh well.

    However, as I mentioned, why do you need to move the encrypted data to tempdb?

  • We don't *need* the data to be stored in tempdb per se; it is just that we select into #tmp tables...
    We have many large tables in our database, and we noticed a significant improvement in performance if, instead of bringing them all together with joins and WHERE conditions, we extract partial information (column-wise and row-wise) into a #tmp table, with proper indexes, and then continue processing the data using the #tmp table instead of several large tables.
    Good point: this has helped us in many scenarios, and the practice was extended to the point where now (bad point) we can't redesign the processes without a significant impact on time/cost.
    There is also an alternate option - to simulate the #tmp tables with regular (permanent) tables - but that comes with other issues, such as managing them, granting proper rights, and cleaning up at the end of processing.
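    For what it's worth, that regular-table workaround might be scripted roughly like this, following the earlier _tmp_CONTACTS example (the explicit drop-and-recreate and manual cleanup are assumptions about how we'd manage it, not tested code):

    ```sql
    -- A regular table in the same database stands in for #tmp; because it
    -- uses the same CEK as G_CONTACTS, the encrypted column can be copied
    -- server-side, as the earlier _tmp_CONTACTS test showed.
    if object_id('dbo._tmp_CONTACTS') is not null
        drop table dbo._tmp_CONTACTS;

    CREATE TABLE dbo._tmp_CONTACTS (
        [ContactID] [int] NULL,
        [FirstName] [nvarchar](50) NOT NULL,
        [LastName] [nvarchar](50) NOT NULL,
        [PlaceOfBirth] [nvarchar](50)
            COLLATE Latin1_General_BIN2
            ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = [CEK_Auto1],
            ENCRYPTION_TYPE = Deterministic,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NULL
    );

    insert into dbo._tmp_CONTACTS
    select ContactID, FirstName, LastName, PlaceOfBirth
    from G_CONTACTS;

    -- ... use dbo._tmp_CONTACTS in joins, add indexes as needed ...

    -- Unlike a #tmp table, nothing drops this automatically:
    drop table dbo._tmp_CONTACTS;
    ```

    Concurrent sessions would still need distinct table names (or a key column per session), which is part of the management burden mentioned above.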
    Thank you.

