Temporal tables -- reasons not to use them?

• In the project I'm working on we're about to implement 4 temporal tables in Azure SQL. In the past I've worked with version tracking using guids and triggers. The plan is to "tokenize" the datetime2 values using the Convert(varchar(27), token_dt, 127) format to capture the full precision of the datetime2. Does anybody see anything wrong with this? Any advice for working with temporal tables?

    The job spec says:  "When the token is decoded it will be matched against current data to see if the terms remain valid.   If the token isn't valid and depending on what's changed... [shortened] ...these are the different error messages..."  Then there's a list of error messages like "The delivery calendar is no longer valid" etc.
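
For anyone unfamiliar, the general shape of what we'd be creating is something like this (a minimal sketch with illustrative names, not our actual DDL):

/* minimal sketch of a system-versioned (temporal) table; names are illustrative */
create table fc.calendar_demo(
    cal_id      bigint identity(1,1) constraint pk_calendar_demo primary key not null,
    delivery_dt date not null,
    start_time  datetime2 generated always as row start not null,
    end_time    datetime2 generated always as row end not null,
    period for system_time(start_time, end_time))
with (system_versioning = on (history_table = fc.calendar_demo_history));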

From the paradise that Cantor created for us, no one shall be able to expel us.

• It seems like you're talking about two things here, and I'm not sure what temporal tables have to do with the tokens. Usually you hash something to create a token like this so you can detect changes; that's separate from temporal tables. It's hard to know what you mean by the token aspect here. If you want, you can query the temporal table and see whether something has changed.

I've seen a few people have issues with archival or long-term work with temporal tables, and there can certainly be challenges with updating the schema over time. Learn lots and practice the development aspects of altering the temporal table and its history table. Also, develop some query patterns to help people understand what's different with the data over time, as I find querying temporal tables slightly non-intuitive.
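
For example, something along these lines (a sketch; the table and period column names are illustrative):

/* state of the table as it was at a point in time */
select * from fc.quarters for system_time as of '20210115';

/* full version history of one row, oldest first */
select * from fc.quarters for system_time all
where qtr_id = 1
order by start_time;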

  • Steve Jones - SSC Editor wrote:

It seems like you're talking about two things here, and I'm not sure what temporal tables have to do with the tokens. Usually you hash something to create a token like this so you can detect changes; that's separate from temporal tables. It's hard to know what you mean by the token aspect here. If you want, you can query the temporal table and see whether something has changed.

Sorry to muddle things together. In the past I've created tokens using a concatenation of guids to represent a collection of versioned attributes. As far as I know, guids aren't necessary for the versioning to function with temporal tables, though I guess a guid could still be a column in the table. This time, instead of guids, I was thinking of using Convert(varchar(27), start_time, 127) to capture the point-in-time row versions.

To get up to speed on the syntax, my plan is to create the initial DDL and then do a comparison against a different instance using ApexSQL Diff (it's what's available to me). Then I'll mess around with changes and hopefully the comparisons keep working.
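
For the schema-change practice, I'm expecting to work through patterns like this (a sketch, assuming fc.quarters has been made temporal with a history table fc.quarters_history):

/* sketch: a schema change that has to go around system-versioning */
begin transaction;
alter table fc.quarters set (system_versioning = off);
alter table fc.quarters drop column omit_display;
alter table fc.quarters_history drop column omit_display;
alter table fc.quarters set (system_versioning = on
    (history_table = fc.quarters_history, data_consistency_check = on));
commit;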

From the paradise that Cantor created for us, no one shall be able to expel us.

  • Steve Collins wrote:

In the project I'm working on we're about to implement 4 temporal tables in Azure SQL. In the past I've worked with version tracking using guids and triggers. The plan is to "tokenize" the datetime2 values using the Convert(varchar(27), token_dt, 127) format to capture the full precision of the datetime2. Does anybody see anything wrong with this? Any advice for working with temporal tables?

    The job spec says:  "When the token is decoded it will be matched against current data to see if the terms remain valid.   If the token isn't valid and depending on what's changed... [shortened] ...these are the different error messages..."  Then there's a list of error messages like "The delivery calendar is no longer valid" etc.

    I'm pretty sure that the system times captured by temporal tables are in the DATETIME2(7) level of precision.  What else would you need?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden wrote:

    I'm pretty sure that the system times captured by temporal tables are in the DATETIME2(7) level of precision.  What else would you need?

Yes, right: it's super precise, so it's a non-risky substitute for a guid. I'm just trying to be safe and conservative.

declare @dt datetime2 = '9999-12-31 23:59:59.9999999';  /* max datetime2(7) value */
select len(@dt) as len_dt, convert(varchar(27), @dt, 127) as s1;

    It's 7 digits to the right of the decimal.  That has to be plenty.

From the paradise that Cantor created for us, no one shall be able to expel us.

  • Steve Collins wrote:

    Jeff Moden wrote:

    I'm pretty sure that the system times captured by temporal tables are in the DATETIME2(7) level of precision.  What else would you need?

Yes, right: it's super precise, so it's a non-risky substitute for a guid. I'm just trying to be safe and conservative.

declare @dt datetime2 = '9999-12-31 23:59:59.9999999';  /* max datetime2(7) value */
select len(@dt) as len_dt, convert(varchar(27), @dt, 127) as s1;

    It's 7 digits to the right of the decimal.  That has to be plenty.

I don't see where you used a GUID, either in the original post or here, as part of the formula. To be sure, what you did with the CONVERT on the date will actually slow things down comparatively a fair bit because 1) it is a conversion, 2) it's a conversion to character-based data instead of date/time data, and 3) like I said, it's already defined as DATETIME2(7) in the tables by default. No need to rattle the rubble on this one.
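
To make that concrete, you can compare the period column natively, something like this (a sketch, assuming the ROW START column is named start_time):

/* compare the period column directly instead of round-tripping through varchar */
declare @token_version datetime2(7) = '2021-01-15 10:30:00.1234567';

select case when q.start_time = @token_version then 'valid' else 'stale' end as token_state
from fc.quarters q
where q.qtr_id = 1;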


    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden wrote:

I don't see where you used a GUID, either in the original post or here, as part of the formula. To be sure, what you did with the CONVERT on the date will actually slow things down comparatively a fair bit because 1) it is a conversion, 2) it's a conversion to character-based data instead of date/time data, and 3) like I said, it's already defined as DATETIME2(7) in the tables by default. No need to rattle the rubble on this one.

Sorry again that it wasn't explained well. Currently I maintain a bunch of databases and APIs for different clients. The databases are all SQL Server and the APIs are all .NET. We're in the late stages of a long effort to convert all of the APIs from the full .NET Framework to .NET Core. Several of these APIs have payment integrations using Stripe.NET. Managing separate Stripe integrations is a PITA. My current project is to replace the individual merchant accounts with a single platform account called a "Stripe Connect" account.

One of the clients is a CRM system for Community Supported Agriculture (CSA) boxes of vegetables. They sell quarterly subscriptions based on a flexible calendar of delivery dates. Jonathan AC Roberts rewrote one of the calendaring procedures, the dbo.daterange function, a few months ago here on the SSC forum. Thank you again, Jonathan! That code is part of a larger release which will get pushed when we convert to .NET Core.

/*
quarters
*/
drop table if exists fc.quarters;
go
create table fc.quarters(
    qtr_id          bigint identity(1,1) constraint pk_quarters_qtr_id primary key not null,
    area_id         bigint not null references fc.areas(area_id),
    cal_year        int not null,
    cal_qtr         int not null check(cal_qtr between 1 and 4),
    delivery_dt     date not null,
    delivery_num    int not null check(delivery_num between 1 and 7),
    freq_days       int not null check(freq_days in(7, 14)),
    enroll_closed   bit not null,
    enroll_status   nvarchar(96) null,
    omit_display    bit not null,
    securitystamp   uniqueidentifier unique not null,
    created_dt      datetime2 not null,
    edited_dt       datetime2 not null,
    constraint quarters_unq_area_year_qtr unique(area_id, cal_year, cal_qtr));
go

/*
subscriptions
*/
drop table if exists fc.subscriptions;
go
create table fc.subscriptions(
    sub_id          bigint identity(1,1) constraint pk_subscriptions_sub_id primary key not null,
    qtr_id          bigint not null references fc.quarters(qtr_id),
    shr_id          bigint not null references fc.shares(shr_id),
    price_pre_pay   int not null,
    price_per_dlv   int not null,
    enroll_max      int not null,
    enroll_closed   bit not null,
    enroll_status   nvarchar(96) null,
    omit_display    bit not null,
    securitystamp   uniqueidentifier unique not null,
    created_dt      datetime2 not null,
    edited_dt       datetime2 not null,
    constraint subscriptions_unq_qtr_shr unique(qtr_id, shr_id));
go

/*
shares
*/
drop table if exists fc.shares;
go
create table fc.shares(
    shr_id      bigint identity(1,1) constraint pk_shares_shr_id primary key not null,
    share       nvarchar(96) unique not null,
    share_pct   int not null);
go

/*
shr_id  share       share_pct
1       1/4 Share   25
2       1/2 Share   50
3       1 Share     100
4       2 Shares    200
5       3 Shares    300
6       4 Shares    400
7       5 Shares    500
8       6 Shares    600
9       7 Shares    700
10      8 Shares    800
11      9 Shares    900
12      10 Shares   1000
*/

/*
users
*/
drop table if exists fc.users;
go
create table fc.users(
    id              bigint identity(1,1) constraint pk_users_id primary key not null,
    username        nvarchar(140) not null,
    email           nvarchar(256) unique not null,
    securitystamp   uniqueidentifier unique not null,
    created_dt      datetime2 not null,
    edited_dt       datetime2 not null);
go

/*
shareholders
*/
drop table if exists fc.shareholders;
go
create table fc.shareholders(
    s_id            bigint identity(1,1) constraint pk_shareholders_s_id primary key not null,
    qtr_id          bigint not null references fc.quarters(qtr_id),
    u_id            bigint not null references fc.users(id),
    sub_id          bigint not null references fc.subscriptions(sub_id),
    s_status_id     bigint not null references fc.shareholder_statuses(s_status_id),
    securitystamp   uniqueidentifier unique not null,
    created_dt      datetime2 not null,
    edited_dt       datetime2 not null,
    constraint shareholders_unq_qtr_u unique(qtr_id, u_id));
go

/*
shareholder_deliveries
*/
drop table if exists fc.shareholder_deliveries;
go
create table fc.shareholder_deliveries(
    sd_id           bigint identity(1,1) constraint pk_shareholder_deliveries_sd_id primary key not null,
    s_id            bigint not null references fc.shareholders(s_id),
    delivery_dt     date not null,
    cancelled       bit not null default 0,
    created_dt      datetime2 not null,
    edited_dt       datetime2 not null,
    constraint shareholder_deliveries_unq_s_deliv unique(s_id, delivery_dt));
go

When a user requests to purchase a subscription, they become a 'shareholder'. In the shareholders table, the status column s_status_id is controlled by what happens with payment transactions to/from Stripe.

There are '%_history' tables for each of these tables, which are maintained by AFTER UPDATE and AFTER DELETE triggers on the base tables. The uniqueidentifier columns (all called SECURITYSTAMP) represent the row version. If anything changes, the triggers make sure there's a new guid and the history is maintained.
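
Roughly, the triggers follow this pattern (a simplified sketch of what I just described, not the production code; it assumes fc.quarters_history mirrors fc.quarters):

create trigger fc.trg_quarters_version on fc.quarters
after update, delete
as
begin
    set nocount on;

    /* archive the prior version of every affected row */
    insert fc.quarters_history(qtr_id, area_id, cal_year, cal_qtr, delivery_dt, delivery_num,
        freq_days, enroll_closed, enroll_status, omit_display, securitystamp, created_dt, edited_dt)
    select qtr_id, area_id, cal_year, cal_qtr, delivery_dt, delivery_num,
        freq_days, enroll_closed, enroll_status, omit_display, securitystamp, created_dt, edited_dt
    from deleted;

    /* stamp surviving rows with a fresh version guid (no-op for deletes; recursive triggers assumed off) */
    update q
    set q.securitystamp = newid(), q.edited_dt = sysutcdatetime()
    from fc.quarters q
    join inserted i on i.qtr_id = q.qtr_id;
end
go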

The person who runs the CSA program is always changing the calendar and prices and subscription attributes based on holidays, his wife's work schedule, availability of blueberries in August, yada yada yada... Right now it's just this one territory, but the plan is to enroll other areas. So when the price subtotal is sent to the user, it's accompanied by a token that contains the exact versions of the calendar, the subscription, and the delivery dates, respectively. The token could sit in someone's browser for a week or two, and things could've changed. If it's possible to fulfill the transaction, it happens; otherwise we return a 400 with a message.

/* generate a user bearer token */
drop function if exists fc.bearer_token;
go
create function fc.bearer_token(
    @quarter            uniqueidentifier,
    @subscription       uniqueidentifier,
    @shareholding       uniqueidentifier,
    @user               uniqueidentifier,
    @check              uniqueidentifier,
    @expire_offset_days bigint)
returns nvarchar(max)
as
begin
    declare
        @token      varchar(max),
        @json_token varchar(max);

    select @token =
        cast(@quarter as char(36)) +        /* quarter */
        cast(@subscription as char(36)) +   /* subscription */
        cast(@shareholding as char(36)) +   /* shareholder deliveries */
        cast(@user as char(36)) +           /* user */
        cast(@check as char(36)) +          /* check */
        (select convert(char(19), dateadd(dd, @expire_offset_days, getutcdate()), 127)); /* expiration */

    select @json_token = (select cast(@token as varbinary(max)) for xml path(''), binary base64);
    return @json_token;
end
go
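
For testing on the SQL side (the real decoding happens in the API), something like this round-trips the token; the offsets follow the 5 x char(36) + char(19) layout above:

/* sketch: decode the base64 token back into its fixed-width segments */
declare @json_token nvarchar(max) =
    fc.bearer_token(newid(), newid(), newid(), newid(), newid(), 14);

declare @decoded varchar(max) =
    cast(cast(@json_token as xml).value('.', 'varbinary(max)') as varchar(max));

select substring(@decoded,   1, 36) as quarter_guid,
       substring(@decoded,  37, 36) as subscription_guid,
       substring(@decoded,  73, 36) as shareholding_guid,
       substring(@decoded, 109, 36) as user_guid,
       substring(@decoded, 145, 36) as check_guid,
       substring(@decoded, 181, 19) as expires_utc;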

The plan is to replace the version handling with temporal tables. Instead of guids in the token, it will use (serialized in JSON) primary keys plus the row start_time (datetime2) converted with convert(varchar(27), @dt, 127) to represent the version. Or I guess we could keep the guids. I don't know which is the better way right now.
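
Something like this is what I have in mind for the temporal version (a sketch; it assumes the ROW START columns will be named start_time once the tables are made temporal, and @sub_id stands in for the subscription being priced):

/* sketch: primary keys plus row start times serialized as json */
declare @sub_id bigint = 1;

declare @json_token nvarchar(max) = (
    select q.qtr_id,
           convert(varchar(27), q.start_time, 127) as qtr_version,
           s.sub_id,
           convert(varchar(27), s.start_time, 127) as sub_version
    from fc.quarters q
    join fc.subscriptions s
        on s.qtr_id = q.qtr_id
    where s.sub_id = @sub_id
    for json path, without_array_wrapper);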

    Any comments or suggestions would be greatly appreciated.  Thank you for looking at this.


From the paradise that Cantor created for us, no one shall be able to expel us.

