Temporal tables -- reasons not to use them?

  • scdecade

    Mr or Mrs. 500

    Points: 534

    In the project I'm working on we're about to implement 4 temporal tables in Azure SQL.  In the past I've worked with version tracking using guids and triggers.  The plan is to "tokenize" the datetime2 values using Convert(varchar(27), token_dt, 127) to capture the full precision of the datetime2.  Does anybody see anything wrong with this?  Any advice for working with temporal tables?

    The job spec says:  "When the token is decoded it will be matched against current data to see if the terms remain valid.   If the token isn't valid and depending on what's changed... [shortened] ...these are the different error messages..."  Then there's a list of error messages like "The delivery calendar is no longer valid" etc.

    Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können ("No one shall expel us from the paradise that Cantor has created for us")

  • Steve Jones - SSC Editor

    SSC Guru

    Points: 716562

    It seems like you're talking about two things here. Not sure what temporal tables have to do with the tokens. Usually you hash something to create this kind of token to detect changes. This is separate from temporal tables, so it's hard to know what you mean by the token aspect here. If you want to check for a change, you can query the temporal table and see if something has changed.

    I've seen a few people have issues with some archival or long term work with temporal tables, and there certainly can be challenges with updating schema here over time. Learn lots and practice dealing with the development aspects of altering the temporal table and history table. Also, develop some query patterns to help people understand what's different with the data over time, as I find querying temporal tables slightly non-intuitive.
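    A minimal sketch of the query patterns being described here, using a throwaway table (all names below are made up for illustration, not taken from the poster's schema):

    ```sql
    -- Hypothetical system-versioned table for illustration only
    create table dbo.demo_prices(
        id int primary key,
        price int not null,
        sys_start datetime2 generated always as row start not null,
        sys_end   datetime2 generated always as row end   not null,
        period for system_time (sys_start, sys_end))
    with (system_versioning = on (history_table = dbo.demo_prices_history));
    go

    -- Point-in-time: the row as it looked at a given instant
    select id, price
    from dbo.demo_prices
    for system_time as of '2024-01-15T00:00:00'
    where id = 1;

    -- Every version the row has ever had, current and historical
    select id, price, sys_start, sys_end
    from dbo.demo_prices
    for system_time all
    where id = 1
    order by sys_start;
    ```

    The FOR SYSTEM_TIME clause is what makes temporal queries read differently from ordinary SELECTs, which is where the "slightly non-intuitive" feeling tends to come from.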

  • scdecade

    Mr or Mrs. 500

    Points: 534

    Steve Jones - SSC Editor wrote:

    It seems like you're talking about two things here. Not sure what temporal tables have to do with the tokens. Usually you hash something to create this kind of token to detect changes. This is separate from temporal tables, so it's hard to know what you mean by the token aspect here. If you want to check for a change, you can query the temporal table and see if something has changed.

    Sorry to muddle things together.  In the past I've created tokens using a concatenation of guids to represent a collection of versioned attributes.  Afaik, with temporal tables guids aren't necessary for the versioning to function, though I guess a guid could still be a column in the table.  This time, instead of guids, I was thinking of using Convert(varchar(27), start_time, 127) to capture the point-in-time row versions.

    To get up to speed on the syntax, my plan is to create the initial DDL and then do a comparison against a different instance using Apex Diff (it's what's available to me).  Then I'll mess around with changes and hopefully the comparisons keep working.


  • Jeff Moden

    SSC Guru

    Points: 995116

    scdecade wrote:

    In the project I'm working on we're about to implement 4 temporal tables in Azure SQL.  In the past I've worked with version tracking using guids and triggers.  The plan is to "tokenize" the datetime2 values using Convert(varchar(27), token_dt, 127) to capture the full precision of the datetime2.  Does anybody see anything wrong with this?  Any advice for working with temporal tables?

    The job spec says:  "When the token is decoded it will be matched against current data to see if the terms remain valid.   If the token isn't valid and depending on what's changed... [shortened] ...these are the different error messages..."  Then there's a list of error messages like "The delivery calendar is no longer valid" etc.

    I'm pretty sure that the system times captured by temporal tables are in the DATETIME2(7) level of precision.  What else would you need?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
    "If you think its expensive to hire a professional to do the job, wait until you hire an amateur."--Red Adair
    "Change is inevitable... change for the better is not."
    When you put the right degree of spin on it, the number 3|8 is also a glyph that describes the nature of a DBAs job. 😉

    Helpful Links:
    How to post code problems
    Create a Tally Function (fnTally)

  • scdecade

    Mr or Mrs. 500

    Points: 534

    Jeff Moden wrote:

    I'm pretty sure that the system times captured by temporal tables are in the DATETIME2(7) level of precision.  What else would you need?

    Yeah, it's super precise, so it's a non-risky substitute for a guid.  I'm just trying to be safe and conservative.

    declare @dt datetime2 = '9999-12-31 23:59:59.9999999';
    select len(@dt) len_dt, convert(varchar(27), @dt, 127) s1;

    It's 7 digits to the right of the decimal.  That has to be plenty.
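    One quick way to build confidence in the string route is to check that style 127 round-trips losslessly back to the same datetime2 value; a sketch:

    ```sql
    declare @dt datetime2 = sysutcdatetime();
    declare @s varchar(27) = convert(varchar(27), @dt, 127);

    -- Style 127 (ISO 8601) keeps all 7 fractional digits, so converting
    -- the string back should yield an identical datetime2
    select @dt original,
           @s  tokenized,
           case when convert(datetime2, @s, 127) = @dt
                then 'round-trips' else 'lossy' end verdict;
    ```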


  • Jeff Moden

    SSC Guru

    Points: 995116

    scdecade wrote:

    Jeff Moden wrote:

    I'm pretty sure that the system times captured by temporal tables are in the DATETIME2(7) level of precision.  What else would you need?

    Yeah, it's super precise, so it's a non-risky substitute for a guid.  I'm just trying to be safe and conservative.

    declare @dt datetime2 = '9999-12-31 23:59:59.9999999';
    select len(@dt) len_dt, convert(varchar(27), @dt, 127) s1;

    It's 7 digits to the right of the decimal.  That has to be plenty.

    I don't see where you used a GUID either in the original post or here as part of the formula.  To be sure, what you did with the CONVERT on the date will actually slow things down comparatively a fair bit because 1) it is a conversion, 2) it's a conversion to character-based data instead of date/time data, and 3) like I said, it's already defined as DATETIME2(7) in the tables by default.  No need to rattle the rubble on this one.
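    To illustrate the point, comparing the captured datetime2 directly avoids the string conversion entirely; a sketch, with the table, column, and key value assumed for illustration (the poster's tables aren't system-versioned yet):

    ```sql
    -- @token_start is the row-start value captured when the token was issued
    declare @token_start datetime2 = '2024-06-01T10:15:30.1234567';

    -- Native datetime2 comparison: no CONVERT, no character data involved
    select case when q.sys_start = @token_start
                then 'token still valid'
                else 'row has changed since the token was issued' end as verdict
    from fc.quarters q   -- assumed system-versioned with a sys_start period column
    where q.qtr_id = 42;
    ```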


    --Jeff Moden



  • scdecade

    Mr or Mrs. 500

    Points: 534

    Jeff Moden wrote:

    I don't see where you used a GUID either in the original post or here as part of the formula.  To be sure, what you did with the CONVERT on the date will actually slow things down comparatively a fair bit because 1) it is a conversion, 2) it's a conversion to character-based data instead of date/time data, and 3) like I said, it's already defined as DATETIME2(7) in the tables by default.  No need to rattle the rubble on this one.

    Sorry again, it wasn't explained well.  Currently I maintain a bunch of databases and APIs for different clients.  The DBs are all SQL Server and the APIs are all .NET.  We're in the late stages of a long effort to convert all of the APIs from .NET Full Framework to .NET Core.  Several of these APIs have payment integrations using Stripe.NET.  Managing separate Stripe integrations is a PITA.  My current project is to replace the individual merchant accounts with a single platform account called a "Stripe Connect" account.

    One of the clients is a CRM system for Community Supported Agriculture (CSA) boxes of vegetables.  They sell quarterly subscriptions based on a flexible calendar of delivery dates.  A few months ago in the SSC forums, Jonathan AC Roberts rewrote one of the calendaring procedures around the dbo.daterange function.  Thank you again, Jonathan!  That code is part of a larger release which will get pushed when we convert to .NET Core.

    /*
    quarters
    */
    drop table if exists fc.quarters;
    go
    create table fc.quarters(
    qtr_id bigint identity(1,1) constraint pk_quarters_qtr_id primary key not null,
    area_id bigint not null references fc.areas(area_id),
    cal_year int not null,
    cal_qtr int not null check(cal_qtr between 1 and 4),
    delivery_dt date not null,
    delivery_num int not null check(delivery_num between 1 and 7),
    freq_days int not null check(freq_days in(7, 14)),
    enroll_closed bit not null,
    enroll_status nvarchar(96) null,
    omit_display bit not null,
    securitystamp uniqueidentifier unique not null,
    created_dt datetime2 not null,
    edited_dt datetime2 not null,
    constraint
    quarters_unq_area_year_qtr unique(area_id, cal_year, cal_qtr));
    go

    /*
    shares (created before subscriptions, which references it)
    */
    drop table if exists fc.shares;
    go
    create table fc.shares(
    shr_id bigint identity(1,1) constraint pk_shares_shr_id primary key not null,
    share nvarchar(96) unique not null,
    share_pct int not null);
    go

    /*
    shr_id  share      share_pct
    1       1/4 Share  25
    2       1/2 Share  50
    3       1 Share    100
    4       2 Shares   200
    5       3 Shares   300
    6       4 Shares   400
    7       5 Shares   500
    8       6 Shares   600
    9       7 Shares   700
    10      8 Shares   800
    11      9 Shares   900
    12      10 Shares  1000
    */

    /*
    subscriptions
    */
    drop table if exists fc.subscriptions;
    go
    create table fc.subscriptions(
    sub_id bigint identity(1,1) constraint pk_subscriptions_sub_id primary key not null,
    qtr_id bigint not null references fc.quarters(qtr_id),
    shr_id bigint not null references fc.shares(shr_id),
    price_pre_pay int not null,
    price_per_dlv int not null,
    enroll_max int not null,
    enroll_closed bit not null,
    enroll_status nvarchar(96) null,
    omit_display bit not null,
    securitystamp uniqueidentifier unique not null,
    created_dt datetime2 not null,
    edited_dt datetime2 not null,
    constraint
    subscriptions_unq_qtr_shr unique(qtr_id, shr_id));
    go

    /*
    users
    */
    drop table if exists fc.users;
    go
    create table fc.users(
    id bigint identity(1,1) constraint pk_users_id primary key not null,
    username nvarchar(140) not null,
    email nvarchar(256) unique not null,
    securitystamp uniqueidentifier unique not null,
    created_dt datetime2 not null,
    edited_dt datetime2 not null);
    go

    /*
    shareholders
    */
    drop table if exists fc.shareholders;
    go
    create table fc.shareholders(
    s_id bigint identity(1,1) constraint pk_shareholders_s_id primary key not null,
    qtr_id bigint not null references fc.quarters(qtr_id),
    u_id bigint not null references fc.users(id),
    sub_id bigint not null references fc.subscriptions(sub_id),
    s_status_id bigint not null references fc.shareholder_statuses(s_status_id),
    securitystamp uniqueidentifier unique not null,
    created_dt datetime2 not null,
    edited_dt datetime2 not null,
    constraint
    shareholders_unq_qtr_u unique(qtr_id, u_id));
    go

    /*
    shareholder_deliveries
    */
    drop table if exists fc.shareholder_deliveries;
    go
    create table fc.shareholder_deliveries(
    sd_id bigint identity(1,1) constraint pk_shareholder_deliveries_sd_id primary key not null,
    s_id bigint not null references fc.shareholders(s_id),
    delivery_dt date not null,
    cancelled bit not null default 0,
    created_dt datetime2 not null,
    edited_dt datetime2 not null,
    constraint
    shareholder_deliveries_unq_s_deliv unique(s_id, delivery_dt));
    go

    When a user requests to purchase a subscription, they become a 'shareholder'.  In the shareholders table, the status column s_status_id is controlled by what happens with payment transactions to/from Stripe.

    There are '%_history' tables for each of these tables, maintained by AFTER UPDATE and AFTER DELETE triggers on the base tables.  The uniqueidentifier columns (all called securitystamp) represent the row version.  If anything changes, the triggers make sure there's a new guid and the history is maintained.
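    A stripped-down sketch of that trigger pattern on fc.users (the fc.users_history layout here is an assumption, mirroring fc.users plus an archived_dt column; the real triggers may differ):

    ```sql
    create trigger fc.trg_users_update
    on fc.users
    after update
    as
    begin
        set nocount on;

        -- archive the pre-update row versions into the history table
        insert fc.users_history (id, username, email, securitystamp,
                                 created_dt, edited_dt, archived_dt)
        select d.id, d.username, d.email, d.securitystamp,
               d.created_dt, d.edited_dt, sysutcdatetime()
        from deleted d;

        -- stamp the surviving rows with a fresh row-version guid
        update u
        set u.securitystamp = newid(),
            u.edited_dt = sysutcdatetime()
        from fc.users u
        join inserted i on i.id = u.id;
    end
    go
    ```

    Note the UPDATE inside the AFTER UPDATE trigger relies on recursive triggers being off (the server default); with RECURSIVE_TRIGGERS on, this pattern would need a re-entry guard.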

    The person who runs the CSA program is always changing the calendar, prices, and subscription attributes based on holidays, his wife's work schedule, availability of blueberries in August, yada yada yada...  Right now it's just this one territory, but the plan is to enroll other areas.  So when the price subtotal is sent to the user, it's accompanied by a token that contains the exact versions of the calendar, the subscription, and the delivery dates, respectively.  The token could sit in someone's browser for a week or two and things could've changed.  If it's possible to fulfill the transaction, it happens; otherwise we return a 400 with a message.

    /* generate a user bearer token */
    drop function if exists fc.bearer_token;
    go
    create function fc.bearer_token(
    @quarter uniqueidentifier,
    @subscription uniqueidentifier,
    @shareholding uniqueidentifier,
    @user uniqueidentifier,
    @check uniqueidentifier,
    @expire_offset_days int) /* dateadd requires int; bigint raises an error */
    returns nvarchar(max)
    as
    begin
    declare
    @token varchar(max),
    @json_token varchar(max);

    select @token=
    cast(@quarter as char(36)) + /* quarter */
    cast(@subscription as char(36)) + /* subscription */
    cast(@shareholding as char(36)) + /* shareholder deliveries */
    cast(@user as char(36)) + /* user */
    cast(@check as char(36)) + /* check */
    convert(char(19), dateadd(dd, @expire_offset_days, getutcdate()), 127); /* expiration */

    select @json_token=(select cast(@token as varbinary(max)) for xml path(''), binary base64);
    return @json_token;
    end
    go
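    For reference, a token built this way can be unpacked with the reverse of the same base64 trick; a sketch (the fixed offsets assume five 36-character guids followed by a 19-character expiration, per the function above; the input literal is a placeholder):

    ```sql
    declare @json_token nvarchar(max) = N'...'; /* output of fc.bearer_token */
    declare @raw varchar(max);

    -- base64 -> varbinary -> the original concatenated string
    select @raw = cast(cast(@json_token as xml).value('.', 'varbinary(max)')
                       as varchar(max));

    -- slice the fixed-width segments back out
    select
        cast(substring(@raw,   1, 36) as uniqueidentifier) quarter_stamp,
        cast(substring(@raw,  37, 36) as uniqueidentifier) subscription_stamp,
        cast(substring(@raw,  73, 36) as uniqueidentifier) shareholding_stamp,
        cast(substring(@raw, 109, 36) as uniqueidentifier) user_stamp,
        cast(substring(@raw, 145, 36) as uniqueidentifier) check_stamp,
        convert(datetime2, substring(@raw, 181, 19), 127)  expiration;
    ```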

    The plan is to replace the version handling with temporal tables.  Instead of guids in the token, it would use the primary keys (serialized in json) plus the row start_time (datetime2) converted with convert(varchar(27), @dt, 127) to represent the version.  Or I guess we could keep the guids.  I don't know which is the better way right now.
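    If the temporal-table version goes ahead, the json-serialized token might be sketched something like this (the sys_start column name and the system-versioning of fc.quarters and fc.subscriptions are assumptions about the future schema, not the current one):

    ```sql
    -- Build a JSON version token from primary keys plus temporal row-start times
    declare @token nvarchar(max) = (
        select
            q.qtr_id,
            convert(varchar(27), q.sys_start, 127) as qtr_ver,
            s.sub_id,
            convert(varchar(27), s.sys_start, 127) as sub_ver,
            convert(varchar(19), dateadd(dd, 14, getutcdate()), 127) as expires
        from fc.quarters q
        join fc.subscriptions s on s.qtr_id = q.qtr_id
        where s.sub_id = 42               -- hypothetical subscription key
        for json path, without_array_wrapper);

    select @token;
    ```

    Validation would then be a join of the deserialized keys back to the base tables, comparing the stored sys_start against the version carried in the token.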

    Any comments or suggestions would be greatly appreciated.  Thank you for looking at this.


