Using Variables as Column Names in a SELECT Statement

  • Hi sql_ppl,

    I am creating a trigger, and in the trigger I call an SP with params. The SP should update data in another database that has exactly the same table structures (i.e. multi company, same app). I am going to create a trigger per table that is being updated (as the tables being updated are known to me).

    I also already know which column is being updated. To get the pre and post values of the column I use the Inserted and Deleted tables (which are only available while the trigger is running).

    The column being updated is passed to a variable and I want to pass the pre and post values to variables as well.

    To get my values I use the following:

    SET @execSQL = 'DECLARE @BeforeValue sql_variant, @AfterValue sql_variant ' +
        'SELECT @BeforeValue = D.[' + @ColumnName + '], ' +
        '       @AfterValue = I.[' + @ColumnName + '] ' +
        'FROM Inserted I ' +
        'INNER JOIN Deleted D ON I.[' + @KeyCol + '] = D.[' + @KeyCol + ']'

    EXEC(@execSQL)

    Where @ColumnName = column being updated

    @KeyCol = Key column that is designated in each trigger

    As you can see, I can't do this, because the code run by the exec() statement executes in a separate scope where the Inserted and Deleted tables are not visible, so it fails.

    My question (at long last) is: is there a way to pass the @ColumnName / @KeyCol vars as actual columns in the select clause, i.e.

    DECLARE @BeforeValue sql_variant, @AfterValue sql_variant

    SELECT @BeforeValue = D.[@ColumnName],
           @AfterValue = I.[@ColumnName]
    FROM Inserted I
    INNER JOIN Deleted D ON I.[@KeyCol] = D.[@KeyCol]

    Your help would be much appreciated.

    thx all
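    [Editor's note: a sketch of one common workaround, not taken from the thread; it reuses the variable names from the post above and assumes @ColumnName, @KeyCol, @BeforeValue and @AfterValue are already declared in the trigger. Local temp tables created inside the trigger, unlike Inserted/Deleted, remain visible to dynamic SQL executed in the same session, so the pseudo-tables can be snapshotted first:]

    ```sql
    -- Sketch only. Snapshot the pseudo-tables into local temp tables;
    -- #temp tables created here ARE visible to dynamic SQL, whereas
    -- Inserted/Deleted are not.
    SELECT * INTO #ins FROM Inserted;
    SELECT * INTO #del FROM Deleted;

    DECLARE @execSQL nvarchar(4000);
    SET @execSQL = N'SELECT @Before = D.' + QUOTENAME(@ColumnName) + N', ' +
                   N'@After = I.' + QUOTENAME(@ColumnName) + N' ' +
                   N'FROM #ins I INNER JOIN #del D ON I.' + QUOTENAME(@KeyCol) +
                   N' = D.' + QUOTENAME(@KeyCol);

    -- sp_executesql can hand the values back out through OUTPUT
    -- parameters, which plain EXEC() cannot do.
    EXEC sp_executesql @execSQL,
         N'@Before sql_variant OUTPUT, @After sql_variant OUTPUT',
         @Before = @BeforeValue OUTPUT, @After = @AfterValue OUTPUT;
    ```

    QUOTENAME() is used instead of hand-built brackets so an unexpected column name cannot break out of the identifier and inject SQL.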

    SET @execSQL = 'DECLARE @BeforeValue sql_variant, @AfterValue sql_variant ' +
        'SELECT @BeforeValue = D.[' + @ColumnName + '], ' +
        '       @AfterValue = I.[' + @ColumnName + '] ' +
        'FROM Inserted I ' +
        'INNER JOIN Deleted D ON I.[' + @KeyCol + '] = D.[' + @KeyCol + ']'

    EXEC(@execSQL)

    Where @ColumnName = column being updated

    @KeyCol = Key column that is designated in each trigger

    I see a couple of things:

    Isn't it true that either the INSERTED table or the DELETED table will always be empty in the trigger, so if you inner join the two, you'd always get no rows?

    Also, since you are assigning to a variable, the trigger would fail if the INSERTED table had more than one row for its event.

    I think you should copy whatever the stored proc was doing into the trigger, instead of trying to call the stored proc.

    And of course make sure the whole thing remains set-based, instead of assuming only one row would fire the trigger.

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and INSERT INTO... statement into that table to give the volunteers here representative data. with your description of the problem, we can provide a tested, verifiable solution to your question! asking the question the right way gets you a tested answer the fastest way possible!

  • MSDN ->

    The relational engine may need to build a worktable to perform a logical operation specified in an SQL statement. Worktables are typically generated for certain GROUP BY, ORDER BY, or UNION queries. For example, if an ORDER BY clause references columns not covered by any indexes, the relational engine may need to generate a worktable to sort the result set into the order requested.

    Worktables are built in tempdb and are dropped automatically at the end of the statement.

  • Ignore my above reply

    It's wrongly posted here

    😎

    Lowell, as far as I know, the inserted and deleted tables will have the details of your update in them while the trigger is firing. Thus the deleted table will have the "previous" value and the inserted table will have the "posted" value. This will work if you are updating, say, tables used exclusively for lookups; thus, in theory 😉, you should be able to get back a single row from the inner join.

    The reason I did not want to put the SP's code in the trigger is maintenance: if something should change, I would have to change all the triggers instead of, say, one SP.

    thx for the input.

    Thanks Theo; I really need to test that myself, as I clearly don't make enough triggers; I should have known that... I assumed that the tables would only be populated based on the event that fired, so inserted would have data for the insert/update events, and deleted would only have data in the delete event.

    Lowell



  • Hello Theo,

    as far as I know it isn't possible, and even if it was, I don't think it would be advisable. A trigger calling an SP also isn't the best solution; a trigger should contain short pieces of code that can be executed quickly, because triggers tend to be fired pretty often. Dynamic SQL in a trigger is something I wouldn't recommend.

    Also, I would be VERY careful with triggers, especially if you are updating another database. If a trigger fails, everything will be rolled back... if I understand it correctly, it means that as soon as the other database is offline for any reason, or is experiencing blocking problems etc., you won't be able to do anything in your database (that is, not on tables with such triggers).

    You didn't describe in detail the reason for setting up the triggers. Maybe replication, or some other approach, would accomplish what you need? If you really want to use triggers, the best way out seems to be to use the trigger to insert just the ID of the changed row into some "ToDo" table, and then use a job that updates the rows in the other database (based on current values in your DB) and deletes the ID from the ToDo table.
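    [Editor's note: a minimal sketch of this queue approach; the table, trigger, and column names below are invented for illustration:]

    ```sql
    -- The trigger only records which rows changed; a scheduled job does
    -- the cross-database work later, so the trigger stays short and
    -- cannot be blocked by the other database being offline.
    CREATE TABLE dbo.SyncToDo (
        TableName sysname     NOT NULL,
        KeyValue  sql_variant NOT NULL,
        QueuedAt  datetime    NOT NULL DEFAULT GETDATE()
    );
    GO
    CREATE TRIGGER trg_Customers_Sync ON dbo.Customers
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- set-based: handles multi-row updates correctly
        INSERT INTO dbo.SyncToDo (TableName, KeyValue)
        SELECT 'Customers', I.CustomerID
        FROM Inserted I;
    END
    ```

    The job then reads dbo.SyncToDo, pushes the current values across, and deletes the processed rows.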

  • Hi Vladan,

    thx for the input. I discussed my proposed solution with the client and my supervisor, and the triggers will only be used on 2 tables, as these tables carry counters that are updated in code; thus only the columns that changed must be updated if the record exists in the other database. The rest of the lookup tables will be repopulated with the master database records by a DTS package that calls an SP.

    I created an update and an insert trigger for the 1st table and tested; it works absolutely brilliantly. The only reason I wanted to call alias.@column was to make the scripts generic; as there are only 2 tables that are going to use the triggers, I decided to make unique triggers for each. 4 triggers to maintain isn't that big of a deal for me, so I am happy with the result.

    PS: I also considered using an audit table as you suggested, but my supervisor did not agree, so what the boss wants the boss gets, I suppose.

    thx again for the input.

  • Lowell (11/7/2007)


    Thanks Theo; I really need to test that myself , as I clearly don't make enough triggers; I should have known that...i assumed that the tables would only be populated based on the event it was doing, so inserted would have data for insert/update, and deleted would only have the data in the delete event.

    deleted has data in the update/delete event.

    _____________
    Code for TallyGenerator
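    [Editor's note: a throwaway test, with illustrative names, that confirms which pseudo-tables are populated per event:]

    ```sql
    -- During an UPDATE, Deleted holds the old values and Inserted the new
    -- ones, so the inner join in the original post returns one row per
    -- updated row.
    CREATE TABLE dbo.Demo (ID int PRIMARY KEY, Val int);
    GO
    CREATE TRIGGER trg_Demo ON dbo.Demo
    AFTER INSERT, UPDATE, DELETE
    AS
        SELECT (SELECT COUNT(*) FROM Inserted) AS InsertedRows,
               (SELECT COUNT(*) FROM Deleted)  AS DeletedRows;
    GO
    INSERT dbo.Demo VALUES (1, 10);   -- Inserted: 1, Deleted: 0
    UPDATE dbo.Demo SET Val = 20;     -- Inserted: 1, Deleted: 1
    DELETE dbo.Demo;                  -- Inserted: 0, Deleted: 1
    ```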
