Using row values as dynamic columns to summarize data

  • I have a table with the following fields taken from an Excel file (sample data is included in the code):

    - client

    - type of transaction

    - amount (not using this at the moment)

    I need to come up with an output that summarizes it per client and per type of transaction (counting the occurrences of each transaction type). The [type of transaction] field may contain several values, and since it's coming from an Excel file those values may vary from time to time, so I can't do pivots with static columns (AFAIK).

    I can do this using a cursor, EXEC, and a temporary table, but I'm looking for a more elegant way of doing it.

    DECLARE @typeOfTransaction table (fieldname varchar(50))
    DECLARE @rec table ([client] varchar(50), [type of transaction] varchar(50), [amount] varchar(50))

    -- data
    insert into @rec
    /*
    SELECT *
    FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
        'Excel 8.0;Database=d:\temp\sample.xls', [Sheet1$])
    */
    select 'acme' as [client], 'ammendment' as [type of transaction], '100' as [amount]
    union
    select 'company', 'coverage date', '200'
    union
    select 'somewhere', 'ammendment', '300'
    union
    select 'acme', 'coverage date', '400'
    union
    select 'company', 'coverage date', '500'
    union
    select 'company', 'whatever', '600'

    -- header
    insert into @typeOfTransaction
    SELECT distinct [type of transaction]
    FROM @rec

    -- columns
    Declare @col as varchar(MAX)
    select @col = ISNULL(@col + '; ', '')
        + 'ALTER TABLE #summary ADD [' + fieldname + '] int default 0 not null'
    from @typeOfTransaction

    -- create table
    create table #summary ([client] varchar(50))
    exec(@col)

    -- per client
    declare @client varchar(50)
    declare cur cursor forward_only for
        select distinct [client]
        from @rec
    open cur
    fetch next from cur into @client
    while @@FETCH_STATUS = 0
    begin
        exec('insert into #summary ([client]) values (''' + @client + ''')')

        declare @subtype varchar(50)
        declare subcur cursor forward_only for
            select distinct [type of transaction]
            from @rec
            where [client] = @client
        open subcur
        fetch next from subcur into @subtype
        while @@FETCH_STATUS = 0
        begin
            declare @cnt int
            select @cnt = COUNT(*)
            from @rec
            where [client] = @client
            and [type of transaction] = @subtype

            declare @subSQL nvarchar(MAX)
            set @subSQL = 'update #summary set [' + @subtype + '] = ' + CAST(@cnt as varchar) + ' where [client] = ''' + @client + ''';'
            exec(@subSQL)

            fetch next from subcur into @subtype
        end
        close subcur
        deallocate subcur

        fetch next from cur into @client
    end
    close cur
    deallocate cur

    select * from #summary
    drop table #summary


    slow down when you need to hurry, stop when you need to move on,
    look back when you need to forget, or you might slip and leave sanity

  • Try this:

    -- Note: Cannot do this with a table variable
    CREATE TABLE #rec ([client] varchar(50), [type of transaction] varchar(50), [amount] varchar(50))

    insert into #rec
    select 'acme' as [client], 'ammendment' as [type of transaction], '100' as [amount]
    union all select 'company', 'coverage date', '200'
    union all select 'somewhere', 'ammendment', '300'
    union all select 'acme', 'coverage date', '400'
    union all select 'company', 'coverage date', '500'
    union all select 'company', 'whatever', '600'

    -- Start by setting up a working query based on known [type of transaction]
    SELECT client
        ,[ammendment]=COUNT(CASE [type of transaction] WHEN 'ammendment' THEN 1 ELSE NULL END)
        ,[coverage date]=COUNT(CASE [type of transaction] WHEN 'coverage date' THEN 1 ELSE NULL END)
        ,[whatever]=COUNT(CASE [type of transaction] WHEN 'whatever' THEN 1 ELSE NULL END)
    FROM #rec
    GROUP BY client

    -- Code above will be discarded once you get the dynamic SQL below working
    DECLARE @SQL NVARCHAR(MAX) = 'SELECT client ' -- Preamble of working query

    SELECT @SQL = @SQL + -- Add the repeating SQL (CASE stmts)
        (
        SELECT ',[' + t + ']=COUNT(CASE [type of transaction] WHEN ''' + t + ''' THEN 1 ELSE NULL END) '
        FROM (
            SELECT t=[type of transaction]
            FROM #rec
            GROUP BY [type of transaction]) y
        FOR XML PATH(''), TYPE).value('.', 'VARCHAR(MAX)') -- value() must be lowercase
        + ' FROM #rec GROUP BY client' -- Final SQL

    -- SELECT out the string, copy from results and run until you've got it right
    --SELECT @SQL

    -- Then comment out the above and EXEC your dynamic SQL
    EXEC (@SQL)

    DROP TABLE #rec
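    A variant of the same idea (not from the original post): SQL Server's native PIVOT operator can consume the same dynamically built column list, and QUOTENAME() brackets whatever values turn up in [type of transaction] so stray brackets or quotes in the Excel data can't break the generated SQL. A sketch, assuming #rec exists as created above:

    ```sql
    -- Hypothetical alternative: dynamic PIVOT over whatever distinct
    -- [type of transaction] values exist. QUOTENAME() safely brackets them.
    DECLARE @cols NVARCHAR(MAX), @pivotSQL NVARCHAR(MAX)

    SELECT @cols = ISNULL(@cols + ',', '') + QUOTENAME([type of transaction])
    FROM (SELECT DISTINCT [type of transaction] FROM #rec) d

    -- PIVOT cannot aggregate the same column it pivots on, so the source
    -- query exposes a second copy (tt) of [type of transaction] to COUNT.
    SET @pivotSQL = N'
    SELECT client, ' + @cols + N'
    FROM (SELECT client, [type of transaction], tt = [type of transaction] FROM #rec) src
    PIVOT (COUNT(tt) FOR [type of transaction] IN (' + @cols + N')) p'

    EXEC (@pivotSQL)
    ```

    Same result shape as the CASE-expression version; which reads better is mostly a matter of taste.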

    If the comments included don't explain it well enough, let me know and I'd be happy to answer your questions.

    The TYPE/value() stuff simply makes sure that if your data contains ampersands (or other characters that are special to XML) they're handled properly. I recommend this if your source is someone updating an Excel spreadsheet.
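    As an aside not covered in the thread: on SQL Server 2017 and later, STRING_AGG can build the same repeating CASE list without the FOR XML PATH concatenation trick at all. A hedged sketch, assuming the same #rec table:

    ```sql
    -- Hypothetical modern variant (requires SQL Server 2017+).
    -- STRING_AGG concatenates one CASE expression per distinct type;
    -- REPLACE doubles any embedded single quotes in the data values.
    DECLARE @SQL NVARCHAR(MAX)

    SELECT @SQL = N'SELECT client, '
        + STRING_AGG(
            QUOTENAME(t) + N'=COUNT(CASE [type of transaction] WHEN '''
            + REPLACE(t, '''', '''''') + N''' THEN 1 ELSE NULL END)', N', ')
        + N' FROM #rec GROUP BY client'
    FROM (SELECT DISTINCT t = [type of transaction] FROM #rec) y

    EXEC (@SQL)
    ```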


    My mantra: No loops! No CURSORs! No RBAR! Hoo-uh!

    My thought question: Have you ever been told that your query runs too fast?

    My advice:
    INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
    The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.

    Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
    Since random numbers are too important to be left to chance, let's generate some!
    Learn to understand recursive CTEs by example.

  • This helps not only with the performance of my application but also improves my knowledge of this.

    Thank you dwain.c



  • You're most welcome.

    BTW, my edits were caused by the fact that 'ammendment' is misspelled in your sample data. When I constructed the initial query I spelled it correctly, and after I posted I noticed some data discrepancies. It turned out that the dynamic SQL version was producing the right results while the one I set up based on the three known columns was wrong (the structure was right, but the counts were wrong)!

    I mention this because you'll need to figure out how to deal with misspelled column names coming from Excel.
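    One way to attack that (a sketch, not from the thread): keep a small lookup table of known variants and normalize [type of transaction] before building the dynamic columns. The table name and spellings below are invented for illustration:

    ```sql
    -- Hypothetical cleanup step: map known misspellings to a canonical
    -- value before the dynamic pivot runs, so each spelling variant
    -- doesn't become its own output column.
    CREATE TABLE #spellingFix (variant varchar(50), canonical varchar(50))
    INSERT INTO #spellingFix VALUES ('ammendment', 'amendment')

    UPDATE r
    SET r.[type of transaction] = f.canonical
    FROM #rec r
    JOIN #spellingFix f ON r.[type of transaction] = f.variant
    ```

    Values with no entry in the lookup table pass through unchanged, so new legitimate transaction types still get their own column.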


