SQLServerCentral is supported by Redgate

Dumb Question
GSquared
SSC-Insane (23K reputation)
Group: General Forum Members
Points: 23329 Visits: 9730
If they don't resolve the problem, let me know, and I'll gladly clarify them for you.

- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread

"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
Jeff Moden
SSC Guru (85K reputation)
Group: General Forum Members
Points: 85188 Visits: 41077
Awesome articles, Gus... dunno how I missed the second one, but I just read it. Really nice job.

My only suggestion to everyone is to NOT audit inserts. The data already lives in the main table. The only thing audit triggers should do is record changes to the original row... that, of course, includes deletes. The reason I'm so adamant about NOT auditing inserts is that, depending on the audit method you choose, it will at least double the size of your database and, in some cases, will increase the size by a factor of 4 (sometimes more for full EAV audit tables).

Of course, maybe you would rather have a 2-4 terabyte database to back up instead of a 1 terabyte database BigGrin

--Jeff Moden

RBAR is pronounced ree-bar and is a Modenism for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
If you think it's expensive to hire a professional to do the job, wait until you hire an amateur. -- Red Adair

Helpful Links:
How to post code problems
How to post performance problems
Forum FAQs
GSquared
Jeff Moden (7/21/2008)
Awesome articles, Gus... dunno how I missed the second one, but I just read it. Really nice job.

My only suggestion to everyone is to NOT audit inserts. The data already lives in the main table. The only thing audit triggers should do is record changes to the original row... that, of course, includes deletes. The reason I'm so adamant about NOT auditing inserts is that, depending on the audit method you choose, it will at least double the size of your database and, in some cases, will increase the size by a factor of 4 (sometimes more for full EAV audit tables).

Of course, maybe you would rather have a 2-4 terabyte database to back up instead of a 1 terabyte database BigGrin


Thanks for the compliment.

On auditing inserts, if you audit before-and-after, there's no need to audit inserts. If you just audit changes (inserted table), you need to log the insert, but don't need to log deletions other than that they happened.

Let's say you insert a row that's exactly 1000 bytes wide, and you change it 10 times, then delete it. It stays 1000 bytes during each update.

Before-and-After Auditing, without Insert, with Delete: 21,000 bytes of log data
After-only Auditing, with Insert, Delete timestamp-only: 11,008 bytes of log data
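The arithmetic behind those two figures can be sketched as follows (assuming, per the example, a 1000-byte row image per logged version and an 8-byte datetime for the delete timestamp; per-row storage overhead is ignored):

```python
ROW = 1000       # bytes per logged row image (the 1000-byte row in the example)
UPDATES = 10

# Before-and-after: each update logs both images; the delete logs the before image
before_and_after = UPDATES * 2 * ROW + ROW

# After-only: the insert and each update log one image; the delete logs a timestamp
after_only = ROW + UPDATES * ROW + 8

print(before_and_after, after_only)  # 21000 11008
```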

If you use the XML logging I suggested to someone yesterday, which only logs the columns that actually change, you can end up with much smaller log tables.

For example, this:


set nocount on
--drop table loggingtest
--drop table loggingtest_log
go
create table LoggingTest (
    ID int identity primary key,
    Col1 char(100),
    Col2 char(100),
    Col3 char(100),
    Col4 char(100),
    Col5 char(100),
    Col6 char(100),
    Col7 char(100),
    Col8 char(100),
    Col9 char(100),
    Col10 char(100))
go
create table LoggingTest_Log (
    XActionID int identity primary key,
    LogDate datetime not null default (getdate()),
    LogBy varchar(100) not null default(system_user),
    Data XML)
go
create trigger LoggingTest_LogXML on dbo.LoggingTest
after insert, update, delete
as
set nocount on
/*
NullIf used on inserted and deleted, because it reduces the size of the
resulting XML data. XML, by default, ignores null values and doesn't
include the column.
*/

declare @XML xml

select @XML =
(select isnull(i.ID, d.ID) as ID,
rtrim(nullif(cast(d.Col1 as varchar(100)), cast(i.Col1 as varchar(100)))) as Col1_From,
rtrim(nullif(cast(i.Col1 as varchar(100)), cast(d.Col1 as varchar(100)))) as Col1_To,

rtrim(nullif(cast(d.Col2 as varchar(100)), cast(i.Col2 as varchar(100)))) as Col2_From,
rtrim(nullif(cast(i.Col2 as varchar(100)), cast(d.Col2 as varchar(100)))) as Col2_To,

rtrim(nullif(cast(d.Col3 as varchar(100)), cast(i.Col3 as varchar(100)))) as Col3_From,
rtrim(nullif(cast(i.Col3 as varchar(100)), cast(d.Col3 as varchar(100)))) as Col3_To,

rtrim(nullif(cast(d.Col4 as varchar(100)), cast(i.Col4 as varchar(100)))) as Col4_From,
rtrim(nullif(cast(i.Col4 as varchar(100)), cast(d.Col4 as varchar(100)))) as Col4_To,

rtrim(nullif(cast(d.Col5 as varchar(100)), cast(i.Col5 as varchar(100)))) as Col5_From,
rtrim(nullif(cast(i.Col5 as varchar(100)), cast(d.Col5 as varchar(100)))) as Col5_To,

rtrim(nullif(cast(d.Col6 as varchar(100)), cast(i.Col6 as varchar(100)))) as Col6_From,
rtrim(nullif(cast(i.Col6 as varchar(100)), cast(d.Col6 as varchar(100)))) as Col6_To,

rtrim(nullif(cast(d.Col7 as varchar(100)), cast(i.Col7 as varchar(100)))) as Col7_From,
rtrim(nullif(cast(i.Col7 as varchar(100)), cast(d.Col7 as varchar(100)))) as Col7_To,

rtrim(nullif(cast(d.Col8 as varchar(100)), cast(i.Col8 as varchar(100)))) as Col8_From,
rtrim(nullif(cast(i.Col8 as varchar(100)), cast(d.Col8 as varchar(100)))) as Col8_To,

rtrim(nullif(cast(d.Col9 as varchar(100)), cast(i.Col9 as varchar(100)))) as Col9_From,
rtrim(nullif(cast(i.Col9 as varchar(100)), cast(d.Col9 as varchar(100)))) as Col9_To,

rtrim(nullif(cast(d.Col10 as varchar(100)), cast(i.Col10 as varchar(100)))) as Col10_From,
rtrim(nullif(cast(i.Col10 as varchar(100)), cast(d.Col10 as varchar(100)))) as Col10_To,

case
when i.ID is not null and d.ID is not null then 'U'
when i.ID is null then 'D'
else 'I'
end as Act

from inserted i
full outer join deleted d
on i.id = d.id
for xml auto)

insert into dbo.LoggingTest_Log (Data) -- Rest of log columns are filled by defaults
select @xml;
go
insert into dbo.loggingtest (col1, col2, col3, col4, col5, col6, col7, col8,
col9, col10)
select 'a','a','a','a','a','a','a','a','a','a'

update dbo.loggingtest
set col1 = '1'

update dbo.loggingtest
set col1 = '2'

update dbo.loggingtest
set col1 = '3'

update dbo.loggingtest
set col1 = '4'

update dbo.loggingtest
set col1 = '5'

update dbo.loggingtest
set col1 = '6'

update dbo.loggingtest
set col1 = '7'

update dbo.loggingtest
set col1 = '8'

update dbo.loggingtest
set col1 = '9'

update dbo.loggingtest
set col1 = '10'

delete from dbo.loggingtest



The primary table shows as .008 Meg of data, and the log table ends up at .008 Meg too, even after a logged insert, 10 logged updates, and a logged delete. That's kind of an artificial case, because of the minimal size of the updates, but it does illustrate the point.

So, depending on your method of logging, logging inserts may or may not matter much. With this XML log, it does matter, and can/should be omitted. With an "after-only" log, it's needed.

(If you want to modify that XML trigger to not log inserts, just change the Full Outer Join to a Right Outer Join, and adjust the CASE expression accordingly.)

Jeff Moden
I guess I don't understand why you would need to log the insert... if no changes occur at all, the original data is still available in the original table.

GSquared
It depends on the logging options you use. If, for example, you just do a heap-dump (all-column-insert) from "inserted" in triggers, you get really fast, really simple logging. About the smallest performance impact you can get from trigger-based logging.

If you do it that way, you don't have "before-and-after" logging, so you end up with what stuff was changed to, but not what it was changed from. In that case, you need the original data to actually have the full trail on it.

It can actually take less disk space, if data is updated more often than it's inserted. If you add 100 rows per day, and do 10,000 updates per day, "after-only" logging, including the inserted data, will be much smaller on the disk than "before-and-after" logging. This is because the data logged is half the size for updates in "after-only" as it is in "before-and-after".
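A rough sketch of that comparison for the 100-inserts / 10,000-updates day described above (assuming a 1000-byte row image per logged version, and ignoring per-row overhead and deletes):

```python
ROW = 1000                 # assumed bytes per logged row image
INSERTS, UPDATES = 100, 10_000

# Before-and-after: inserts are not logged, but each update logs two images
before_and_after = UPDATES * 2 * ROW

# After-only: inserts are logged, and each update logs a single image
after_only = (INSERTS + UPDATES) * ROW

print(before_and_after, after_only)  # 20000000 10100000
```

With updates dominating, the after-only log is roughly half the size even though it includes every insert.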

It's easy enough to test. Add two logging triggers to a test table, and two log tables (1 for each trigger). Make one trigger be the usual "before (from 'deleted') and after (from 'inserted')", without logging inserts. Make the other trigger log everything in the "inserted" table, but nothing from "deleted". (Make it fair; if one trigger loads only the columns that were changed, make the other trigger do the same, etc.) Load up the main table with a bunch of rows, 1000 or so. Then do 100-thousand random updates on the table, including various columns. Check the size of the two log tables.

Here's an example:


create table LogTest2 (
    ID int identity primary key,
    Col1 varchar(100),
    Col2 varchar(100))
go
create table LogTest2_Log1 (
    LogID int identity primary key,
    LogDate datetime not null default(getdate()),
    ID int,
    Col1_From varchar(100),
    Col1_To varchar(100),
    Col2_From varchar(100),
    Col2_To varchar(100),
    Act char(1))
go
create table LogTest2_Log2 (
    LogID int identity primary key,
    LogDate datetime not null default(getdate()),
    ID int,
    Col1 varchar(100),
    Col2 varchar(100),
    Act char(1))
go
create trigger LogTest2_L1 on dbo.LogTest2
after update, delete
as
insert into dbo.logtest2_log1 (id, col1_from, col1_to, col2_from, col2_to, act)
select isnull(i.id, d.id), d.col1, i.col1, d.col2, i.col2,
case
when i.id is null then 'D'
else 'U'
end
from inserted i
right outer join deleted d
on i.id = d.id;
go
create trigger LogTest2_L2 on dbo.LogTest2
after insert, update, delete
as
insert into dbo.logtest2_log2 (id, col1, col2, act)
select i.id, i.col1, i.col2,
case
when i.id is not null and d.id is not null then 'U'
else 'I'
end
from inserted i
left outer join deleted d
on i.id = d.id

insert into dbo.logtest2_log2 (id, act)
select d.id, 'D'
from deleted d
left outer join inserted i
on d.id = i.id
where i.id is null;
go
set nocount on

-- assumes a dbo.Numbers table containing integers 1 through 1000
insert into dbo.logtest2 (col1, col2)
select number, number
from dbo.numbers
where number between 1 and 1000
go
update dbo.logtest2
set col1 = checksum(newid()), col2 = checksum(newid())
where id = abs(checksum(newid()))%1000 + 1
go 10000 -- "GO n" repeats the batch n times



Log1 ended up at .727 Meg, Log2 at .547 Meg, after 10,000 updates.

With "after-only", you need the original data in the log, or you don't know what it was before it was updated, which can matter.

If you insert more often than you update, then before-and-after logging, without the insert, is often more efficient.

Jeff Moden
Lots of good stuff there, Gus... I just don't think any of it needs to audit the original insert... Smile

GSquared
If you just do the "after-only" logging, to save space in the log for a table that gets far more updates than inserts, you need the original data, or you can't track what was changed in the first update on a column.

If you don't need to know what was changed, you don't need to log the insert.
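A toy sketch of that point (Python just for illustration; the list stands in for an after-only audit table holding successive values of one column):

```python
# Hypothetical after-only audit trail for one column of one row:
# each entry is the value written by the INSERT or an UPDATE.
log = ["a", "1", "2"]   # insert 'a', then two updates

# The "from" value of any change is simply the previous log entry...
changes = list(zip(log, log[1:]))            # [('a', '1'), ('1', '2')]

# ...but drop the insert entry and the first update's "from" value is lost:
log_no_insert = log[1:]
changes_no_insert = list(zip(log_no_insert, log_no_insert[1:]))  # [('1', '2')]

print(changes, changes_no_insert)
```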

Jeff Moden
GSquared (7/23/2008)
you need the original data, or you can't track what was changed in the first update on a column.


??? I'm not sure why you say that! We all know what a join is... Smile

--Jeff Moden

RBAR is pronounced ree-bar and is a Modenism for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
If you think its expensive to hire a professional to do the job, wait until you hire an amateur. -- Red Adair

Helpful Links:
How to post code problems
How to post performance problems
Forum FAQs
GSquared
GSquared
SSC-Insane
SSC-Insane (23K reputation)SSC-Insane (23K reputation)SSC-Insane (23K reputation)SSC-Insane (23K reputation)SSC-Insane (23K reputation)SSC-Insane (23K reputation)SSC-Insane (23K reputation)SSC-Insane (23K reputation)

Group: General Forum Members
Points: 23329 Visits: 9730
I don't follow you.

Here's the scenario I'm talking about:

Table1 has a logging trigger on it that follows the "after-only" model I wrote about, because its usual use is 10-20 updates per insert, and this results in a smaller log table.

You insert data into it. This is NOT logged.

I update the data in it. This is logged, but only what I changed it to, not what it was changed from.

Management needs to know what it was changed from and to, as part of a report.

How does a join get that information in this scenario?

Lynn Pettis
SSC-Dedicated (39K reputation)
Group: General Forum Members
Points: 39049 Visits: 38518
I can see GSquared's point, just don't seem to see another way to explain it at the moment. I'll keep thinking about it and if I come up with an idea, I'll post back.

Cool
Lynn Pettis

For better assistance in answering your questions, click here
For tips to get better help with Performance Problems, click here
For Running Totals and its variations, click here, or here when working with partitioned tables
For more about Tally Tables, click here
For more about Cross Tabs and Pivots, click here and here
Managing Transaction Logs

SQL Musings from the Desert
Fountain Valley SQL (My Mirror Blog)