JSON data with PIVOT

  • Hi All,

I need some assistance and am not sure how to achieve the expected output.

    I have a JSON input parameter, an NVARCHAR(MAX), passed from my application as per my sample code below.

    I've started to break the JSON up into a table output, but I'm a bit stuck on getting it to where I expect it to be.

    -- Test Json Passed via the Application
    DECLARE @JsonContent NVARCHAR(MAX) =
'[
  {
    "Id": "DateIdentified001",
    "ColumnName": "DateIdentified",
    "DataType": "3",
    "ColumnWidth": 10,
    "Formula": "NA",
    "Value": "2023-11-01"
  },
  {
    "Id": "Description001",
    "ColumnName": "Description",
    "DataType": "2",
    "ColumnWidth": 20,
    "Formula": "NA",
    "Value": "Desc-Number-1"
  },
  {
    "Id": "RectificationSteps001",
    "ColumnName": "RectificationSteps",
    "DataType": "2",
    "ColumnWidth": 20,
    "Formula": "NA",
    "Value": "Rec-Number-1"
  },
  {
    "Id": "ReviewDate001",
    "ColumnName": "ReviewDate",
    "DataType": "3",
    "ColumnWidth": 10,
    "Formula": "DATEADD(DAY,10,DateIdentified)",
    "Value": ""
  },
  {
    "Id": "DateIdentified002",
    "ColumnName": "DateIdentified",
    "DataType": "3",
    "ColumnWidth": 10,
    "Formula": "NA",
    "Value": "2023-12-01"
  },
  {
    "Id": "Description002",
    "ColumnName": "Description",
    "DataType": "2",
    "ColumnWidth": 20,
    "Formula": "NA",
    "Value": "Desc-Number-2"
  },
  {
    "Id": "RectificationSteps002",
    "ColumnName": "RectificationSteps",
    "DataType": "2",
    "ColumnWidth": 20,
    "Formula": "NA",
    "Value": "Rec-Number-2"
  },
  {
    "Id": "ReviewDate002",
    "ColumnName": "ReviewDate",
    "DataType": "3",
    "ColumnWidth": 10,
    "Formula": "DATEADD(DAY,10,DateIdentified)",
    "Value": ""
  }
]'

    -- Query to break up the JSON
SELECT
    ColumnName,
    Formula,
    [Value]
FROM OPENJSON(@JsonContent)
WITH (
    ColumnName NVARCHAR(255) '$.ColumnName',
    Formula NVARCHAR(255) '$.Formula',
    [Value] NVARCHAR(255) '$.Value'
);

My output is as per the image "OriginalOutput.png", or simply the result of running the sample code.

    But the expected output is like the file "ExpectedOutput.png".

    The ReviewDate would be the actual formulated value, but I'm not worried about that at this stage because I can apply it after I get the expected output, which I think would be easy.

    This is a very cut-down version because I've brought back 4 columns, but it could be dynamic up to 10. The output would need to be built dynamically, but as above, I can give that a go once I stop getting stuck on the same problem.

    I'm struggling to achieve this. Would a PIVOT be the way to get what I expect?

    EDIT:

I'm open to making changes to the application's JSON output if needed, for example having a static value like C1 -> C10 for every column to pivot on if needed.

     

  • This is not an easy format to achieve what you want, because the array items are not linked to each other.

    But if you were to add a key to the initial JSON (corresponding with the Id column in your expected output) it will make things a lot easier.
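For example (purely illustrative; the repeated `Id` here is just an assumption about what such a key might look like), each array item could carry the record key:

```json
[
  { "Id": "001", "ColumnName": "DateIdentified", "Value": "2023-11-01" },
  { "Id": "001", "ColumnName": "Description",    "Value": "Desc-Number-1" },
  { "Id": "002", "ColumnName": "DateIdentified", "Value": "2023-12-01" }
]
```

Grouping on that Id would then give you one output row per record.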

    The absence of evidence is not evidence of absence
    - Martin Rees
    The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
    - Phil Parkin

100% agree with Phil. Fix the JSON please 🙂 To save myself from toil I used ChatGPT-4, and here's the result:

    You're right, embedding SQL code in JSON is not a good practice. It's better to keep data and logic separate.

    Including the DataType and ColumnWidth in the JSON data is not necessary if the target is SQL Server. The database schema should already define these properties. Including them in the JSON could lead to confusion or errors if there's a discrepancy between the JSON and the database schema.

    The JSON should ideally only include the data that needs to be inserted or updated in the database. Here's a revised version of the JSON without the DataType and ColumnWidth:

DECLARE @JsonContent NVARCHAR(MAX) = N'
{
  "Records": [
    {
      "Id": "001",
      "Attributes": [
        {
          "ColumnName": "DateIdentified",
          "Value": "2023-11-01"
        },
        {
          "ColumnName": "Description",
          "Value": "Desc-Number-1"
        },
        {
          "ColumnName": "RectificationSteps",
          "Value": "Rec-Number-1"
        },
        {
          "ColumnName": "ReviewDate",
          "DaysToAdd": 10,
          "Value": ""
        }
      ]
    },
    {
      "Id": "002",
      "Attributes": [
        {
          "ColumnName": "DateIdentified",
          "Value": "2023-12-01"
        },
        {
          "ColumnName": "Description",
          "Value": "Desc-Number-2"
        },
        {
          "ColumnName": "RectificationSteps",
          "Value": "Rec-Number-2"
        },
        {
          "ColumnName": "ReviewDate",
          "DaysToAdd": 10,
          "Value": ""
        }
      ]
    }
  ]
}'

    select isjson(@JsonContent);


From the paradise that Cantor created for us, no one shall be able to expel us

Thanks Phil & Steve, that's making life a bit easier.

    So based on your code, Steve, I've now changed it to the JSON below:

    DECLARE @JsonContent NVARCHAR(MAX) = 
    N'{
      "Records": [
        {
          "Id": "001",
          "Attributes": [
            {
              "ColumnName": "DateIdentified",
              "Value": "2023-11-01"
            },
            {
              "ColumnName": "Description",
              "Value": "Desc-Number-1"
            },
            {
              "ColumnName": "RectificationSteps",
              "Value": "Rec-Number-1"
            },
            {
              "ColumnName": "ReviewDate",
              "DaysToAdd": 10,
              "Value": ""
            }
          ]
        },
        {
          "Id": "002",
          "Attributes": [
            {
              "ColumnName": "DateIdentified",
              "Value": "2023-12-01"
            },
            {
              "ColumnName": "Description",
              "Value": "Desc-Number-2"
            },
            {
              "ColumnName": "RectificationSteps",
              "Value": "Rec-Number-2"
            },
            {
              "ColumnName": "ReviewDate",
              "DaysToAdd": 10,
              "Value": ""
            }
          ]
        }
      ]
    }'


    -- NEW CODE BASED ON NEW SAMPLE JSON
SELECT t1.Id, t2.ColumnName, t2.DaysToAdd, t2.[Value]
FROM OPENJSON(@JsonContent, '$.Records')
WITH (
    Id NVARCHAR(255) '$.Id',
    Attributes NVARCHAR(MAX) '$.Attributes' AS JSON
) t1
CROSS APPLY OPENJSON(t1.Attributes)
WITH (
    ColumnName NVARCHAR(255) '$.ColumnName',
    DaysToAdd NVARCHAR(255) '$.DaysToAdd',
    [Value] NVARCHAR(255) '$.Value'
) t2;

So based on my output, which is now cleaner, would I now look at doing a PIVOT to achieve the results? Or am I still on the wrong track with the JSON code I've used?

     

OK, I think I'm closer; it's just the final step of getting "DaysToAdd" on the same line and making it a single entry.

     

    DECLARE @JsonContent NVARCHAR(MAX) = 
    N'{
      "Records": [
        {
          "Id": "001",
          "Attributes": [
            {
              "ColumnName": "DateIdentified",
              "Value": "2023-11-01"
            },
            {
              "ColumnName": "Description",
              "Value": "Desc-Number-1"
            },
            {
              "ColumnName": "RectificationSteps",
              "Value": "Rec-Number-1"
            },
            {
              "ColumnName": "ReviewDate",
              "DaysToAdd": 10,
              "Value": ""
            }
          ]
        },
        {
          "Id": "002",
          "Attributes": [
            {
              "ColumnName": "DateIdentified",
              "Value": "2023-12-01"
            },
            {
              "ColumnName": "Description",
              "Value": "Desc-Number-2"
            },
            {
              "ColumnName": "RectificationSteps",
              "Value": "Rec-Number-2"
            },
            {
              "ColumnName": "ReviewDate",
              "DaysToAdd": 10,
              "Value": ""
            }
          ]
        }
      ]
    }'


SELECT *
FROM (
    SELECT t1.Id, t2.ColumnName, t2.DaysToAdd, t2.[Value]
    FROM OPENJSON(@JsonContent, '$.Records')
    WITH (
        Id NVARCHAR(255) '$.Id',
        Attributes NVARCHAR(MAX) '$.Attributes' AS JSON
    ) t1
    CROSS APPLY OPENJSON(t1.Attributes)
    WITH (
        ColumnName NVARCHAR(255) '$.ColumnName',
        DaysToAdd NVARCHAR(255) '$.DaysToAdd',
        [Value] NVARCHAR(255) '$.Value'
    ) t2
) result
PIVOT (
    MAX([Value])
    FOR ColumnName IN (
        [DateIdentified],
        [Description],
        [RectificationSteps],
        [ReviewDate]
    )
) AS PivotTable;

    Uploaded new output

This mostly borrows your FROM clause, except that I changed DaysToAdd to 'int', which the JSON supports (it's not in quotation marks). Instead of PIVOT I always use conditional aggregation. It's just as efficient to run, and it's a more flexible syntax imo.

select t1.Id,
       max(iif(t2.ColumnName = N'ReviewDate', t2.DaysToAdd, null)) DaysToAdd,
       max(iif(t2.ColumnName = N'DateIdentified', t2.[Value], null)) DateIdentified,
       max(iif(t2.ColumnName = N'Description', t2.[Value], null)) [Description],
       max(iif(t2.ColumnName = N'RectificationSteps', t2.[Value], null)) RectificationSteps,
       dateadd(day, max(iif(t2.ColumnName = N'ReviewDate', t2.DaysToAdd, null)),
               max(iif(t2.ColumnName = N'DateIdentified', t2.[Value], null))) ReviewDate
from openjson(@JsonContent, '$.Records')
with (Id NVARCHAR(255) '$.Id',
      Attributes NVARCHAR(max) '$.Attributes' AS JSON) t1
cross apply openjson(t1.Attributes)
with (ColumnName nvarchar(255) '$.ColumnName',
      DaysToAdd int '$.DaysToAdd',
      [Value] nvarchar(255) '$.Value') t2
group by t1.Id;

     


  • Steve Collins wrote:

    This mostly borrows your FROM clause except I changed DaysToAdd to be 'int' which the JSON supports (it's not in quotations).  Instead of PIVOT I always use conditional aggregation.  It's just as efficient to run and it's a more flexible syntax imo


That is excellent! It's working as expected. I really appreciate your help, especially the removal of PIVOT.

     

I just need to work out how to do this all dynamically, as the column names and number of columns can vary in the JSON.

     

     

  • Tava wrote:

I just need to work out how to do this all dynamically, as the column names and number of columns can vary in the JSON.

    Then you will need to use dynamic SQL. That will be a challenge, to say the least.

    Best of luck.
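For whatever it's worth, here's one rough sketch of that (assumptions: the revised Records/Attributes JSON shape from earlier in this thread, and SQL Server 2017+ for STRING_AGG). It builds the conditional-aggregation select list from the distinct ColumnName values and runs it with sp_executesql:

```sql
-- Sketch only: build the conditional-aggregation column list dynamically.
DECLARE @Cols NVARCHAR(MAX), @Sql NVARCHAR(MAX);

SELECT @Cols = STRING_AGG(
    N'max(iif(t2.ColumnName = N''' + ColumnName + N''', t2.[Value], null)) '
        + QUOTENAME(ColumnName), N', ')
FROM (SELECT DISTINCT j2.ColumnName
      FROM OPENJSON(@JsonContent, '$.Records')
           WITH (Attributes NVARCHAR(MAX) '$.Attributes' AS JSON) j1
      CROSS APPLY OPENJSON(j1.Attributes)
           WITH (ColumnName NVARCHAR(255) '$.ColumnName') j2) d;

SET @Sql = N'SELECT t1.Id, ' + @Cols + N'
FROM OPENJSON(@JsonContent, ''$.Records'')
WITH (Id NVARCHAR(255) ''$.Id'',
      Attributes NVARCHAR(MAX) ''$.Attributes'' AS JSON) t1
CROSS APPLY OPENJSON(t1.Attributes)
WITH (ColumnName NVARCHAR(255) ''$.ColumnName'',
      [Value] NVARCHAR(255) ''$.Value'') t2
GROUP BY t1.Id;';

EXEC sys.sp_executesql @Sql, N'@JsonContent NVARCHAR(MAX)', @JsonContent;
```

Note that anything injected into dynamic SQL from the JSON should go through QUOTENAME, as above, to limit injection risk.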


  • Phil Parkin wrote:

    Tava wrote:

I just need to work out how to do this all dynamically, as the column names and number of columns can vary in the JSON.

    Then you will need to use dynamic SQL. That will be a challenge, to say the least.

    Best of luck.

To help there a bit, and especially since this boils down to the "Black Art" of "Cross Tabs", the following old article may help a whole lot in the area of making things "dynamic".

    https://www.sqlservercentral.com/articles/cross-tabs-and-pivots-part-2-dynamic-cross-tabs

     

     

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

Thanks Phil & Jeff, definitely not as easy as I thought.

    That link you shared, Jeff, is intense; I need to read it and understand it more.

    Is there any way you can think of that changing the JSON would make life easier?

    For example, even though the number of columns and the column names are dynamic, we can cap the number of columns at 10.

    So I could have something like Column1 = C1, Column2 = C2 ... Column10 = C10 and preset 10 columns even if only 4 are used.

    And then add a field value for the column name under the attribute. Would that make it easier to group?
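For what it's worth, a rough sketch of that idea (the `C1`/`FieldName` attribute shape here is hypothetical, not anything from the posts above): with a fixed, capped set of keys, the cross tab could stay static, along the lines of:

```sql
-- Sketch only: assumes each attribute were reshaped as e.g.
-- { "ColumnName": "C1", "FieldName": "DateIdentified", "Value": "2023-11-01" }
SELECT t1.Id,
       MAX(IIF(t2.ColumnName = N'C1',  t2.[Value], NULL)) AS C1,
       MAX(IIF(t2.ColumnName = N'C2',  t2.[Value], NULL)) AS C2,
       -- ... C3 through C9 follow the same pattern ...
       MAX(IIF(t2.ColumnName = N'C10', t2.[Value], NULL)) AS C10
FROM OPENJSON(@JsonContent, '$.Records')
     WITH (Id NVARCHAR(255) '$.Id',
           Attributes NVARCHAR(MAX) '$.Attributes' AS JSON) t1
CROSS APPLY OPENJSON(t1.Attributes)
     WITH (ColumnName NVARCHAR(255) '$.ColumnName',
           [Value] NVARCHAR(255) '$.Value') t2
GROUP BY t1.Id;
```

Unused slots would just come back NULL, so only the presentation layer would need the FieldName-to-header mapping.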

     

     

     

     

  • Tava wrote:

    Thanks Phil & Jeff, definitely not as easy as I thought.

    That link you shared Jeff is intense, I need to read and understand it more.

    Is there anyway, you can think of by changing the JSON will make life easier?

    for example, even though the amount of Columns and Column names are dynamic, We can cap the amount of Columns to 10.

    So If I had something like Column1 = C1, Column2 = C2 .... Column10 = C10 and preset 10 Columns even if only 4 are used.

    And then Add a Field Value for Column name under the attribute would that group easier

    Will the JSON data IN YOUR ORIGINAL POST ALWAYS have the same name/value pairs as below?

Id
    ColumnName
    DataType
    ColumnWidth
    Formula
    Value

    --Jeff Moden



The JSON will always have the same name/value pairs, as that part is static.

    The pairing is still not 100% final, as I'm taking the suggestions provided by Phil and Steve to help improve the JSON and passing them on to the .NET dev I work with.

    But it's safe to say that whatever we decide on will not change.

     

     

     

     

  • Tava wrote:

    The JSON will always have the same values/pairs as that part is static.

    That pairing is still not 100% as I'm taking suggestions provided by Phil/Steves to help improve the JSON and passing onto the .net dev I work with.

    But its safe to say, that what we decide with, will not change.

If they're going to make a change, they should make a change to the value stored in the ID column. Instead of the column name and a 3-digit number, it should just be the ID number. Other than that, I'd leave the original design of the JSON alone because of the incredible flexibility it offers (although, if they have all that, what on earth is preventing them from making a good ol' fashioned TSV (Tab Separated Value) file that would take virtually no effort to import into a table?).

    Other than that, and with the confirmation you just gave that the key/value pairs for each root.key will stay the same, I should be able to finish the "pivot" for your first post of example data tonight.

    Disclaimer... I'm not a big, smart JSON user, but I've been able to do a nice simple parse of it to put all the parts into an EAV (Entity, Attribute, Value) table, currently using the 3-digit number stripped out of the ID column (which is why it would be better as just a number) and the root.key to keep the column sets together. Like I said, I should be able to have the unpivot ready tonight.

    And, yep... it'll all be dynamic, although (like I said above) it would be MUCH better if the source were a TSV instead of JSON.
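(As a minimal sketch of that kind of EAV shred, assuming the original flat array and OPENJSON's default key/value mode; the column names here are illustrative, not Jeff's actual code:)

```sql
-- One row per (entity, attribute): the entity number is derived from the
-- trailing 3 digits of Id; attribute/value come from each name/value pair.
SELECT CAST(RIGHT(JSON_VALUE(arr.[value], '$.Id'), 3) AS INT) AS Entity,
       kv.[key]   AS Attribute,
       kv.[value] AS [Value]
FROM OPENJSON(@JsonContent) arr         -- one row per array element
CROSS APPLY OPENJSON(arr.[value]) kv;   -- one row per name/value pair
```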

     

    --Jeff Moden



  • Jeff Moden wrote:

    If they're going to make a change, they should make a change to the value stored in the ID column.  Instead of the column name and a 3 digit number, it should just be the ID number.  Other than that, I'd leave the original design of the JSON alone because of the incredible flexibility it offers (although if they have all that, what on earth is preventing them from making a good ol' fashion TSV (Tab Separated Value) file that would take virtually no effort to import into a table).


Hi Jeff,

    I can go back and make that suggestion to use a TSV if that will simplify the process. JSON was the suggestion used, and I don't think it was anticipated how hard this would be.

    Just confirming, when you say the pairing/values will remain the same: I mean the red highlighted text will always stay the same, but the black text will vary depending on what is created.

{
      "Id": "Description002",
      "ColumnName": "Description",
      "DataType": "2",
      "ColumnWidth": 20,
      "Formula": "NA",
      "Value": "Desc-Number-2"
    }

However, once it's created it cannot be changed (i.e. the column name "Description" cannot change; if it does need to, then a new report would need to be created).

     

     

     

  • Tava wrote:

    The JSON will always have the same values/pairs as that part is static.

    That pairing is still not 100% as I'm taking suggestions provided by Phil/Steves to help improve the JSON and passing onto the .net dev I work with.

    But its safe to say, that what we decide with, will not change. 

On this particular topic I disagree with Jeff, and I think Phil and I have the right approach. It would be a big PITA for the .NET dev to create the original JSON because the hierarchies in the data are not object modeled. Said another way: to create that JSON from nested CLR objects would require writing loops, afaik and imo. You could pass them these POCO classes, which serialize into the suggested JSON.

public class Rootobject
    {
        public Record[] Records { get; set; }
    }

    public class Record
    {
        public string Id { get; set; }
        public Attribute[] Attributes { get; set; }
    }

    public class Attribute
    {
        public string ColumnName { get; set; }
        public string Value { get; set; }
        public int DaysToAdd { get; set; }
    }

Then to serialize/deserialize:

//using System.Text.Json;

    // ...

    // Deserialize JSON string to POCO
    string jsonString = "{ \"Records\": [ { \"Id\": \"1\", \"Attributes\": [ { \"ColumnName\": \"Test\", \"Value\": \"Value\", \"DaysToAdd\": 5 } ] } ] }";
    Rootobject rootObject = JsonSerializer.Deserialize<Rootobject>(jsonString);

    // Serialize the POCO back to a JSON string
    string serialized = JsonSerializer.Serialize(rootObject);

    One of the big advantages of JSON as medium of exchange vs TSV (Tab Separated Value) or any other flat file is the ability to nest subarrays and to represent hierarchies.  Another advantage is the handling of escaping/delimiting within strings is pretty much solved.

After a bunch of prompts, ChatGPT-4 produced the following:

    The original JSON structure can be represented by the following POCO classes in C#:

public class Rootobject
    {
        public Item[] Items { get; set; }
    }

    public class Item
    {
        public string Id { get; set; }
        public string ColumnName { get; set; }
        public string DataType { get; set; }
        public int ColumnWidth { get; set; }
        public string Formula { get; set; }
        public string Value { get; set; }
    }

    However, this structure does not represent the evident hierarchies in the data. The JSON structure suggests that there are multiple records, each with multiple attributes. But the POCO structure above treats each attribute as a separate record. To properly represent the hierarchies, you would need a more complex object model, like the one you proposed in your previous response.


Viewing 15 posts - 1 through 15 (of 29 total)
