• It depends on the logging options you use. If, for example, you just do a heap-dump (all-column-insert) from "inserted" in triggers, you get really fast, really simple logging. That's about the smallest performance impact you can get from trigger-based logging.

    If you do it that way, you don't have "before-and-after" logging, so you end up with what the data was changed to, but not what it was changed from. In that case, you need the originally inserted data in the log to have the full trail.

    It can actually take less disk space if data is updated more often than it's inserted. If you add 100 rows per day and do 10,000 updates per day, "after-only" logging, including the inserted data, will be much smaller on disk than "before-and-after" logging, because each update logs half as much data in "after-only" as it does in "before-and-after".
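
    To put rough numbers on that: if one copy of a row's data costs about W bytes in the log, after-only logging writes roughly (10,000 + 100) × W per day, while before-and-after logging (skipping inserts) writes 10,000 × 2W = 20,000 × W. The fixed per-row overhead (LogID, LogDate, ID, Act) narrows the gap some in practice, but after-only still comes out well ahead.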

    It's easy enough to test. Add two logging triggers to a test table, and two log tables (one for each trigger). Make one trigger the usual "before (from 'deleted') and after (from 'inserted')" kind, without logging inserts. Make the other trigger log everything in the "inserted" table, but nothing from "deleted". (Keep it fair: if one trigger logs only the columns that changed, make the other do the same, and so on.) Load the main table with a bunch of rows, 1,000 or so. Then run a large batch of random updates on the table, say 10,000 or more, hitting various columns. Then check the size of the two log tables.

    Here's an example:

    create table LogTest2 (
        ID int identity primary key,
        Col1 varchar(100),
        Col2 varchar(100))
    go

    -- Log table for before-and-after logging: one row per update/delete,
    -- with both the old and new values.
    create table LogTest2_Log1 (
        LogID int identity primary key,
        LogDate datetime not null default(getdate()),
        ID int,
        Col1_From varchar(100),
        Col1_To varchar(100),
        Col2_From varchar(100),
        Col2_To varchar(100),
        Act char(1))
    go

    -- Log table for after-only logging: one row per insert/update/delete,
    -- with just the new values.
    create table LogTest2_Log2 (
        LogID int identity primary key,
        LogDate datetime not null default(getdate()),
        ID int,
        Col1 varchar(100),
        Col2 varchar(100),
        Act char(1))
    go

    -- Trigger 1: before-and-after logging. Fires on update and delete only,
    -- so inserts are not logged. The right outer join keeps deleted rows
    -- that have no match in "inserted" (i.e., deletes).
    create trigger LogTest2_L1 on dbo.LogTest2
    after update, delete
    as
    insert into dbo.logtest2_log1 (id, col1_from, col1_to, col2_from, col2_to, act)
    select isnull(i.id, d.id), d.col1, i.col1, d.col2, i.col2,
        case
            when i.id is null then 'D' -- nothing in "inserted" means a delete
            else 'U'
        end
    from inserted i
    right outer join deleted d
        on i.id = d.id;
    go

    -- Trigger 2: after-only logging. Logs every row in "inserted" (covers
    -- inserts and updates), then logs a marker row for each delete.
    create trigger LogTest2_L2 on dbo.LogTest2
    after insert, update, delete
    as
    insert into dbo.logtest2_log2 (id, col1, col2, act)
    select i.id, i.col1, i.col2,
        case
            when i.id is not null and d.id is not null then 'U'
            else 'I' -- in "inserted" but not "deleted" means an insert
        end
    from inserted i
    left outer join deleted d
        on i.id = d.id;
    -- Deletes have no row in "inserted", so log those separately.
    insert into dbo.logtest2_log2 (id, act)
    select d.id, 'D'
    from deleted d
    left outer join inserted i
        on d.id = i.id
    where i.id is null;
    go

    -- Load 1,000 rows, then run 10,000 single-row random updates.
    -- dbo.numbers is a standard numbers/tally table (1, 2, 3, ...).
    set nocount on
    insert into dbo.logtest2 (col1, col2)
    select number, number
    from dbo.numbers
    where number between 1 and 1000
    go
    update dbo.logtest2
    set col1 = checksum(newid()), col2 = checksum(newid())
    where id = abs(checksum(newid()))%1000 + 1
    go 10000 -- "go N" repeats the preceding batch N times in SSMS/sqlcmd
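
    To compare the results, sp_spaceused on each log table is one easy way to check the sizes:

    exec sp_spaceused 'dbo.LogTest2_Log1';
    exec sp_spaceused 'dbo.LogTest2_Log2';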

    Log1 ended up at 0.727 MB and Log2 at 0.547 MB after the 10,000 updates.

    With "after-only", you need the original data in the log, or you don't know what it was before it was updated, which can matter.

    If you insert more often than you update, then before-and-after logging, without logging inserts, is often more efficient.
