Dumb Question

  • Hi all,

    I have something like the below:

    declare @ss varchar(30)

    set @ss='ProductID'

    SELECT @SS from Product

    If I execute the above, the string 'ProductID' is returned, but I want to retrieve the value of ProductID. How can I get the value of the ProductID column instead of the literal string 'ProductID'?

    I know this is probably the dumbest question ever 😉

    Thanks for your time.

    I know it can be done with dynamic SQL, but is there any way to do it without dynamic SQL?

    Thanks,

    kayal

    You are explicitly setting the value of @ss to the string 'ProductID'.

    Try something like the following

    Set @ss = (SELECT ProductID FROM SomeTable)

    Note that you may have to add a TOP 1 or a WHERE clause to ensure only one result is returned.
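    A minimal sketch of that idea, reusing the Product table from the original post (the ORDER BY is only there to make the result deterministic):

    -- Hypothetical example: force the subquery down to a single value
    SET @ss = (SELECT TOP 1 ProductID FROM Product ORDER BY ProductID)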

    Quite a tricky question; I am not aware of a way it can be done without using dynamic SQL.

    I am interested to see if anyone has a solution.

    Thanks, Raleigh. I don't want to assign the result of the SELECT to a variable; I want to use the variable (which holds the field name) in the select list to get the value of that field.

    The reason I need this is that in the audit table trigger, I get the field name of the primary key:

    SELECT @primarykey = COLUMN_NAME
    from INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk,
         INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
    where pk.TABLE_NAME = @table_name
      and CONSTRAINT_TYPE = 'PRIMARY KEY'
      and c.TABLE_NAME = pk.TABLE_NAME
      and c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME

    I want to use @primarykey in my SELECT to get the primary key value. I also use the COLUMNS_UPDATED() function to find out which fields were updated, and I need to use those field names in the select list to get the values of those fields.

    Thank you. I appreciate your reply.

  • You cannot do this without dynamic SQL.
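    For reference, a minimal sketch of the dynamic SQL approach (the table name and the varchar(100) cast here are just illustrative assumptions):

    DECLARE @colname sysname, @value varchar(100), @sql nvarchar(400)
    SET @colname = 'ProductID'
    -- Build the statement, then let sp_executesql hand the result back through an OUTPUT parameter
    SET @sql = N'SELECT TOP (1) @v = CAST(' + QUOTENAME(@colname) + N' AS varchar(100)) FROM dbo.Product'
    EXEC sp_executesql @sql, N'@v varchar(100) OUTPUT', @v = @value OUTPUT
    SELECT @value AS ColumnValue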

  • Since triggers have to be created on each table that you're going to log, why not just put the primary key in the trigger directly? This seems a little too generic.
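    To illustrate the hard-coded approach, a rough sketch (the table, key column, and log table names here are made up for the example):

    CREATE TRIGGER Product_Audit ON dbo.Product
    AFTER UPDATE, DELETE
    AS
    SET NOCOUNT ON
    -- ProductID is hard-coded; since the trigger is written per table anyway, no lookup is needed
    INSERT INTO dbo.Product_Audit_Log (ProductID, AuditDate, AuditBy)
    SELECT d.ProductID, GETDATE(), SYSTEM_USER
    FROM deleted d;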

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

    Thanks, Michael and GSquared, for the replies. I really appreciate it.

    GSquared,

    You are right, but the maintenance would be really difficult, right? Whenever the table changes, everyone forgets to update the trigger. So I am trying to make it as generic as possible. Do you think that will hurt performance badly? That's why I am trying to avoid dynamic SQL; performance is of course an important criterion for me. But my manager doesn't want me to hard-code things either. I don't know what the best solution is here.

    Any suggestion is greatly appreciated.

  • I wrote a couple of articles on logging and auditing a little while back. They might help. They're at:

    http://www.sqlservercentral.com/articles/Auditing/63247/

    http://www.sqlservercentral.com/articles/Auditing/63248/

    The discussions have a lot of good data in them as well.

    Check those out, and if they don't answer what you need, please let me know and I'll see if I can help clarify it.

    - Gus "GSquared"

    Hi GSquared,

    That was a quick reply. I read your articles on auditing a month back. Those two articles were awesome, and they are what got me started. Now I am stuck in a bit of confusion again. The articles have quite a bit of information; I have to revisit them and read them again. That will resolve my problem, I guess 🙂

    Thanks Again,

    Kayal

  • If they don't resolve the problem, let me know, and I'll gladly clarify them for you.

    - Gus "GSquared"

  • Awesome articles, Gus... dunno how I missed the second one, but I just read it. Really nice job.

    My only suggestion to everyone is to NOT audit inserts. The data already lives in the main table. The only thing audit triggers should do is log changes to the original row... that, of course, includes deletes. The reason I'm so adamant about NOT auditing inserts is that, depending on the audit method you choose, it will at least double the size of your database and in some cases will increase the size by a factor of 4 (sometimes more for full EAV audit tables).

    Of course, maybe you would rather have a 2-4 terabyte database to back up instead of a 1 terabyte database 😀

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (7/21/2008)


    Awesome articles, Gus... dunno how I missed the second one, but I just read it. Really nice job.

    My only suggestion to everyone is to NOT audit inserts. The data already lives in the main table. The only thing audit triggers should do is log changes to the original row... that, of course, includes deletes. The reason I'm so adamant about NOT auditing inserts is that, depending on the audit method you choose, it will at least double the size of your database and in some cases will increase the size by a factor of 4 (sometimes more for full EAV audit tables).

    Of course, maybe you would rather have a 2-4 terabyte database to back up instead of a 1 terabyte database 😀

    Thanks for the compliment.

    On auditing inserts: if you audit before-and-after, there's no need to audit inserts. If you just audit changes (the inserted table), you need to log the insert, but you don't need to log deletions beyond the fact that they happened.

    Let's say you insert a row that's exactly 1000 bytes wide, and you change it 10 times, then delete it. It stays 1000 bytes during each update.

    Before-and-After Auditing, without Insert, with Delete: 21,000 bytes of log data (10 updates x 2,000 bytes for the before and after images, plus 1,000 bytes for the deleted row)

    After-only Auditing, with Insert, Delete timestamp-only: 11,008 bytes of log data (1,000 bytes for the insert, 10 updates x 1,000 bytes, plus an 8-byte datetime for the delete)

    If you use the XML logging I suggested to someone yesterday, which only logs the columns that actually change, you can end up with much smaller log tables.

    For example, this:

    set nocount on
    --drop table loggingtest
    --drop table loggingtest_log
    go

    create table LoggingTest (
        ID int identity primary key,
        Col1 char(100),
        Col2 char(100),
        Col3 char(100),
        Col4 char(100),
        Col5 char(100),
        Col6 char(100),
        Col7 char(100),
        Col8 char(100),
        Col9 char(100),
        Col10 char(100))
    go

    create table LoggingTest_Log (
        XActionID int identity primary key,
        LogDate datetime not null default (getdate()),
        LogBy varchar(100) not null default(system_user),
        Data XML)
    go

    create trigger LoggingTest_LogXML on dbo.LoggingTest
    after insert, update, delete
    as
    set nocount on
    /*
    NullIf used on inserted and deleted, because it reduces the size of the
    resulting XML data. XML, by default, ignores null values and doesn't
    include the column.
    */
    declare @XML xml
    select @XML =
        (select isnull(i.ID, d.ID) as ID,
            rtrim(nullif(cast(d.Col1 as varchar(100)), cast(i.Col1 as varchar(100)))) as Col1_From,
            rtrim(nullif(cast(i.Col1 as varchar(100)), cast(d.Col1 as varchar(100)))) as Col1_To,
            rtrim(nullif(cast(d.Col2 as varchar(100)), cast(i.Col2 as varchar(100)))) as Col2_From,
            rtrim(nullif(cast(i.Col2 as varchar(100)), cast(d.Col2 as varchar(100)))) as Col2_To,
            rtrim(nullif(cast(d.Col3 as varchar(100)), cast(i.Col3 as varchar(100)))) as Col3_From,
            rtrim(nullif(cast(i.Col3 as varchar(100)), cast(d.Col3 as varchar(100)))) as Col3_To,
            rtrim(nullif(cast(d.Col4 as varchar(100)), cast(i.Col4 as varchar(100)))) as Col4_From,
            rtrim(nullif(cast(i.Col4 as varchar(100)), cast(d.Col4 as varchar(100)))) as Col4_To,
            rtrim(nullif(cast(d.Col5 as varchar(100)), cast(i.Col5 as varchar(100)))) as Col5_From,
            rtrim(nullif(cast(i.Col5 as varchar(100)), cast(d.Col5 as varchar(100)))) as Col5_To,
            rtrim(nullif(cast(d.Col6 as varchar(100)), cast(i.Col6 as varchar(100)))) as Col6_From,
            rtrim(nullif(cast(i.Col6 as varchar(100)), cast(d.Col6 as varchar(100)))) as Col6_To,
            rtrim(nullif(cast(d.Col7 as varchar(100)), cast(i.Col7 as varchar(100)))) as Col7_From,
            rtrim(nullif(cast(i.Col7 as varchar(100)), cast(d.Col7 as varchar(100)))) as Col7_To,
            rtrim(nullif(cast(d.Col8 as varchar(100)), cast(i.Col8 as varchar(100)))) as Col8_From,
            rtrim(nullif(cast(i.Col8 as varchar(100)), cast(d.Col8 as varchar(100)))) as Col8_To,
            rtrim(nullif(cast(d.Col9 as varchar(100)), cast(i.Col9 as varchar(100)))) as Col9_From,
            rtrim(nullif(cast(i.Col9 as varchar(100)), cast(d.Col9 as varchar(100)))) as Col9_To,
            rtrim(nullif(cast(d.Col10 as varchar(100)), cast(i.Col10 as varchar(100)))) as Col10_From,
            rtrim(nullif(cast(i.Col10 as varchar(100)), cast(d.Col10 as varchar(100)))) as Col10_To,
            case
                when i.ID is not null and d.ID is not null then 'U'
                when i.ID is null then 'D'
                else 'I'
            end as Act
        from inserted i
        full outer join deleted d
            on i.id = d.id
        for xml auto)
    insert into dbo.LoggingTest_Log (Data) -- Rest of log columns are filled by defaults
    select @xml;
    go

    insert into dbo.loggingtest (col1, col2, col3, col4, col5, col6, col7, col8, col9, col10)
    select 'a','a','a','a','a','a','a','a','a','a'

    update dbo.loggingtest set col1 = '1'
    update dbo.loggingtest set col1 = '2'
    update dbo.loggingtest set col1 = '3'
    update dbo.loggingtest set col1 = '4'
    update dbo.loggingtest set col1 = '5'
    update dbo.loggingtest set col1 = '6'
    update dbo.loggingtest set col1 = '7'
    update dbo.loggingtest set col1 = '8'
    update dbo.loggingtest set col1 = '9'
    update dbo.loggingtest set col1 = '10'

    delete from dbo.loggingtest

    The primary table shows as .008 Meg of data, and the log table ends up at .008 Meg too, even after a logged insert, 10 logged updates, and a logged delete. That's kind of an artificial case, because of the minimal size of the updates, but it does illustrate the point.

    So, depending on your method of logging, logging inserts may or may not matter much. With this XML log, it does matter, and can/should be omitted. With an "after-only" log, it's needed.

    (If you want to modify that XML trigger to not log inserts, just change it from a Full Outer Join to a Right Outer Join, and modify the Case statement.)
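    Roughly, the tail of that trigger's SELECT would change to something like this (untested sketch; the column expressions above it stay the same):

    -- (Col1_From / Col1_To ... Col10_From / Col10_To expressions unchanged)
    case
        when i.ID is not null then 'U'   -- row exists before and after: update
        else 'D'                         -- no matching inserted row: delete
    end as Act
    from inserted i
    right outer join deleted d           -- right outer join drops pure inserts from the log
        on i.id = d.id
    for xml auto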

    - Gus "GSquared"

  • I guess I don't understand why you would need to log the insert... if no changes occur at all, the original data is still available in the original table.

    --Jeff Moden

  • It depends on the logging options you use. If, for example, you just do a heap-dump (all-column-insert) from "inserted" in triggers, you get really fast, really simple logging. About the smallest performance impact you can get from trigger-based logging.

    If you do it that way, you don't have "before-and-after" logging, so you end up with what stuff was changed to, but not what it was changed from. In that case, you need the original data to actually have the full trail on it.

    It can actually take less disk space, if data is updated more often than it's inserted. If you add 100 rows per day, and do 10,000 updates per day, "after-only" logging, including the inserted data, will be much smaller on the disk than "before-and-after" logging. This is because the data logged is half the size for updates in "after-only" as it is in "before-and-after".

    It's easy enough to test. Add two logging triggers to a test table, and two log tables (1 for each trigger). Make one trigger be the usual "before (from 'deleted') and after (from 'inserted')", without logging inserts. Make the other trigger log everything in the "inserted" table, but nothing from "deleted". (Make it fair; if one trigger loads only the columns that were changed, make the other trigger do the same, etc.) Load up the main table with a bunch of rows, 1000 or so. Then do 100-thousand random updates on the table, including various columns. Check the size of the two log tables.

    Here's an example:

    create table LogTest2 (
        ID int identity primary key,
        Col1 varchar(100),
        Col2 varchar(100))
    go

    create table LogTest2_Log1 (
        LogID int identity primary key,
        LogDate datetime not null default(getdate()),
        ID int,
        Col1_From varchar(100),
        Col1_To varchar(100),
        Col2_From varchar(100),
        Col2_To varchar(100),
        Act char(1))
    go

    create table LogTest2_Log2 (
        LogID int identity primary key,
        LogDate datetime not null default(getdate()),
        ID int,
        Col1 varchar(100),
        Col2 varchar(100),
        Act char(1))
    go

    create trigger LogTest2_L1 on dbo.LogTest2
    after update, delete
    as
    insert into dbo.logtest2_log1 (id, col1_from, col1_to, col2_from, col2_to, act)
    select isnull(i.id, d.id), d.col1, i.col1, d.col2, i.col2,
        case
            when i.id is null then 'D'
            else 'U'
        end
    from inserted i
    right outer join deleted d
        on i.id = d.id;
    go

    create trigger LogTest2_L2 on dbo.LogTest2
    after insert, update, delete
    as
    insert into dbo.logtest2_log2 (id, col1, col2, act)
    select i.id, i.col1, i.col2,
        case
            when i.id is not null and d.id is not null then 'U'
            else 'I'
        end
    from inserted i
    left outer join deleted d
        on i.id = d.id
    insert into dbo.logtest2_log2 (id, act)
    select d.id, 'D'
    from deleted d
    left outer join inserted i
        on d.id = i.id
    where i.id is null;
    go

    set nocount on
    insert into dbo.logtest2 (col1, col2)
    select number, number
    from dbo.numbers
    where number between 1 and 1000
    go

    update dbo.logtest2
    set col1 = checksum(newid()), col2 = checksum(newid())
    where id = abs(checksum(newid()))%1000 + 1
    go 10000

    Log1 ended up at .727 Meg, Log2 at .547 Meg, after 10,000 updates.

    With "after-only", you need the original data in the log, or you don't know what it was before it was updated, which can matter.

    If you insert more often than you update, then before-and-after logging, without the insert, is often more efficient.

    - Gus "GSquared"
