Audit Trails and Logging Part II
Jack Corbett (SSChampion)
DCPeterson (6/10/2008)

Now, tracking which queries were sent, which tables/columns were queried, and who sent them makes sense and is easily accomplished with a trace.


This is why I did not mention server-side traces. Even running constant server-side traces and storing the output in a table for querying will lead to a very large database quite quickly.



Jack Corbett

Applications Developer

Don't let the good be the enemy of the best. -- Paul Fleming
At best you can say that one job may be more secure than another, but total job security is an illusion. -- Rod at work

Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
How to Post Performance Problems
Crosstabs and Pivots or How to turn rows into columns Part 1
Crosstabs and Pivots or How to turn rows into columns Part 2
DCPeterson (Ten Centuries)
Jack Corbett (6/10/2008)
DCPeterson (6/10/2008)

Now, tracking which queries were sent, which tables/columns were queried, and who sent them makes sense and is easily accomplished with a trace.


This is why I did not mention server-side traces. Even running constant server-side traces and storing the output in a table for querying will lead to a very large database quite quickly.


True, but an audit on a busy system is likely to become large regardless of the method used to create the audit. I have yet to see a system where the audit requirement stated that every time any user accesses any data, we have to track it. There is almost always some data that is sensitive and some that is not. Usually, you want to audit any access to the sensitive data and then audit changes to certain data. Traces can be targeted (filtered) to capture data for only those events/objects/users that you are interested in.

The most common problem I've seen when it comes to audit logs is that the requirements aren't well defined so you get something like this: "well, we don't know what we might want to see, so just capture everything..." Sure, I can do that, but then you will be buried under a mountain of useless audit data.

/*****************

If most people are not willing to see the difficulty, this is mainly because, consciously or unconsciously, they assume that it will be they who will settle these questions for the others, and because they are convinced of their own capacity to do this. -Friedrich August von Hayek



*****************/
GSquared (SSCoach)
Lee Forst (6/10/2008)
I would like to get a feel from the author and others about a different type of logging. One requirement I have seen is that because of privacy issues, data loss protection issues, and sometimes legal requirements, systems must log who has "seen" the data. Rather than logging INSERTs, UPDATEs, and DELETEs, this is targeted at logging SELECTs. For example, let's say you have a web site that allows end users to search for patients. The end users have a few different search criteria to search by. And for example, let's say a user searches for a last name of Smith. The requirement says we must log who ran the search, when the search was done, what the criteria were, and the results returned to the application. In this case assume 10 fields were returned and 500 patients matched the last name of Smith. Therefore our log tables would need to record that user John Doe on 06/10/2008 at 8:45am did a search of LastName = Smith, that 500 records were returned by the search, and what data was returned (10x500).

I think this type of requirement is very difficult to implement by using custom code and not to mention the storage required to do this. But in some cases it is law. Thoughts on the subject?


It is possible. There are a couple of ways to do it. One would be to set up something in the data connection layer that tracks that kind of thing. Another would be to add logging to your select procs, and then make sure all data access goes through those procs.

A select proc could use either SYSTEM_USER or an input parameter to log who selected what, could certainly insert the input params and run time into a log, and could, probably using FOR XML, log the results. It would take some work to build such a log, but not a lot, with FOR XML and an XML data type for the results log column. It would be much more complex to build if you logged each column as its correct data type, or something like that.
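As a rough illustration of the pattern described above, here is a minimal sketch of a logging select proc. All table, column, and procedure names here are invented for the example; a real implementation would match your own schema.

```sql
-- Hypothetical log table: who searched, when, with what, and what came back.
CREATE TABLE dbo.SelectAuditLog (
    LogID      int IDENTITY(1,1) PRIMARY KEY,
    LoginName  sysname      NOT NULL,
    RunTime    datetime     NOT NULL DEFAULT GETDATE(),
    SearchTerm varchar(100) NOT NULL,
    Results    xml          NULL
);
GO

CREATE PROCEDURE dbo.SearchPatients
    @LastName varchar(100)
AS
BEGIN
    SET NOCOUNT ON;

    -- Serialize the rows being returned so the audit shows what was seen.
    DECLARE @ResultXml xml;
    SET @ResultXml =
        (SELECT PatientID, LastName, FirstName
           FROM dbo.Patients
          WHERE LastName = @LastName
            FOR XML PATH('Patient'), ROOT('Results'));

    INSERT INTO dbo.SelectAuditLog (LoginName, SearchTerm, Results)
    VALUES (SYSTEM_USER, @LastName, @ResultXml);

    -- Return the same rows to the caller.
    SELECT PatientID, LastName, FirstName
      FROM dbo.Patients
     WHERE LastName = @LastName;
END;
```

As noted below, this only captures access that actually goes through the proc; it does nothing about someone querying the table directly.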

The main thing, though, is that this would not log if a DBA bypassed the procs and ran a direct select against the database, through the command line or Management Studio or some such. You would have to make sure nobody had that kind of access to the production database.

I haven't had to set up anything like that kind of security, but I think it could be done.

- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread

"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
GSquared (SSCoach)
DCPeterson, on the subject of a "server side trace". I'm not clear on what you mean by that. Are you talking about something outside the database, or some feature of SQL 2005/2000 that I'm unaware of? (I've read that SQL 2008 will offer more built-in auditing features, but haven't had a chance to play with those yet.) Or do you just mean running Profiler constantly?

Just running Profiler constantly will give you a log, but the files will quickly fill up a lot of disk space, and can't be queried the way log tables can (at least, not easily). If you have Profiler inserting into tables, the performance hit is just about the same as using triggers, as is the space used.

Could be done. Like any of the other methods, it has advantages and disadvantages. I can't think of an easy way to query such data to readily get the complete history of a particular row in a particular table, to run undo commands, or to track workflow data for management reports. On the other hand, you could get performance data right in your audit log, which could be useful for other things.

Jack Corbett (SSChampion)
DCPeterson (6/10/2008)

The most common problem I've seen when it comes to audit logs is that the requirements aren't well defined so you get something like this: "well, we don't know what we might want to see, so just capture everything..." Sure, I can do that, but then you will be buried under a mountain of useless audit data.


This is the problem. Most audit trails that get created audit everything and then expect you to sort it out.

Now I am not advocating this as the solution, but theoretically you could run your server-side trace to log the desired (well-filtered) selects to a table and then, using a trigger, Service Broker, or another custom service, re-run the query to capture the results, using an account that is either ignored by the trace or logged but flagged as the application re-running queries. This takes care of missing ad hoc queries. Of course, now you have doubled the read load on your database.

Needless to say, this is an interesting problem to consider, especially since I don't know how you could deny rights to the DBA. Then you would also have to protect the logged data, and would you need to log access to the logged data? Seems like an endless loop!



DCPeterson (Ten Centuries)
Jack Corbett (6/10/2008)
DCPeterson (6/10/2008)

The most common problem I've seen when it comes to audit logs is that the requirements aren't well defined so you get something like this: "well, we don't know what we might want to see, so just capture everything..." Sure, I can do that, but then you will be buried under a mountain of useless audit data.


This is the problem. Most audit trails that get created audit everything and then expect you to sort it out.

Now I am not advocating this as the solution, but theoretically you could run your server-side trace to log the desired (well-filtered) selects to a table and then, using a trigger, Service Broker, or another custom service, re-run the query to capture the results, using an account that is either ignored by the trace or logged but flagged as the application re-running queries. This takes care of missing ad hoc queries. Of course, now you have doubled the read load on your database.

Needless to say, this is an interesting problem to consider, especially since I don't know how you could deny rights to the DBA. Then you would also have to protect the logged data, and would you need to log access to the logged data? Seems like an endless loop!


Yeah, this seems like a requirement out of a DBA nightmare! I don't think there is a GOOD way to do something like this. It's just plain ugly, no matter how you approach it.

DCPeterson (Ten Centuries)
GSquared (6/10/2008)
DCPeterson, on the subject of a "server side trace". I'm not clear on what you mean by that. Are you talking about something outside the database, or some feature of SQL 2005/2000 that I'm unaware of? (I've read that SQL 2008 will offer more built-in auditing features, but haven't had a chance to play with those yet.) Or do you just mean running Profiler constantly?

Just running Profiler constantly will give you a log, but the files will quickly fill up a lot of disk space, and can't be queried the way log tables can (at least, not easily). If you have Profiler inserting into tables, the performance hit is just about the same as using triggers, as is the space used.

Could be done. Like any of the other methods, it has advantages and disadvantages. I can't think of an easy way to query such data to readily get the complete history of a particular row in a particular table, to run undo commands, or to track workflow data for management reports. On the other hand, you could get performance data right in your audit log, which could be useful for other things.


First off, I think it helps to get terminology correct. A trace refers to the built in capability of SQL Server to capture defined data about defined events into a file. Profiler is a GUI that can be used to define a trace and view the results. When someone speaks about a "server side trace" that means that the trace is running on the server independently of Profiler. Profiler is a terrific tool but it does impose some additional overhead and it is not appropriate to run Profiler continuously on a production server.

Many DBAs don't realize that, by default, there is a trace running all the time in SQL 2005. Just like in Profiler, a trace can be configured to capture as much or as little data as you want; also, some of the events you could capture are going to impose much more overhead than others, so it pays to do your homework and define your trace as narrowly as possible, but not narrower...

Trace files are actually very easy to query with the fn_trace_gettable function. Part of the trace definition allows you to control how many trace files will be generated, how big they will be and whether they will be overwritten etc... so you can control the amount of disk space used.
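To make that concrete, the default trace mentioned above can be located and read with fn_trace_gettable. This is a minimal sketch; the file path is an example and will vary by instance.

```sql
-- Find the default trace file on this instance (SQL 2005+).
SELECT path
FROM sys.traces
WHERE is_default = 1;

-- Read the trace file back as a table (path below is an example only).
SELECT TextData, LoginName, ApplicationName, StartTime
FROM fn_trace_gettable(
    N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\log.trc',
    DEFAULT);
```

Passing DEFAULT as the second argument tells the function to follow any rollover files that belong to the same trace.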

Tore Bostrup-382308 (SSC-Enthusiastic)
Lee, I presume you're working with a healthcare system or somewhere outside the US.

There may be some healthcare specific products out there.

Logically, capturing the "pipe" to (and from) the database on the server (similar to an ODBC trace) would probably be the "safest". However, the ODBC trace itself is client-side (and is configured on the client), so it is not "safe" for audit purposes. Several years ago, I did hear of applications that captured the IP traffic for this purpose. It would probably have to be captured on the SQL Server.

If the requirement is application specific and the application is to be developed, the easiest way would probably be to architect your application with a layer that allows you to perform additional actions like capturing the results. However, the overriding issue may be how to ensure that all paths to the data are covered (other apps, ad hoc queries).

Consider using security settings limited to a service account known only to your company's security administration group, and require all access to the database to go through a separate service running under this account. That service could then perform any logging required. You would have to notify the service who the actual user is, so that this can be logged.

If this is not an option, it *may* be possible to implement a proxy database ("log and forward") or alternatively, maybe a SQL trace file approach could be combined with a trace scanner that would repeat the SELECT queries to capture the results (against a replicated DB?). This would have some challenges of its own, I suspect.


Regards,

Tore Bostrup
Thomas Keller (SSC Journeyman)
I have had to satisfy a similar select-logging requirement, in an environment where management was worried about field sales reps selecting the entire database and taking it with them if they left the company. Of course we had a stored procedure to do the select, which would stop at 500 rows, but in addition I used the same mechanism as my trigger-based transaction logging, which selects the INSERTED and DELETED tables into a temp table and then loops through their columns. Since the select sproc was using a temp table to gather the results before returning them, I just passed that temp table to the same logging sproc.
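The trigger-based transaction logging referred to above can be sketched roughly as follows. This is a simplified illustration (the table and trigger names are invented, and it logs only the key column rather than looping through all columns as described):

```sql
-- Hypothetical audit table for changes to dbo.Patients.
CREATE TABLE dbo.PatientAudit (
    AuditID   int IDENTITY(1,1) PRIMARY KEY,
    Action    char(1)  NOT NULL,  -- 'I', 'U', or 'D'
    PatientID int      NOT NULL,
    ChangedBy sysname  NOT NULL DEFAULT SYSTEM_USER,
    ChangedAt datetime NOT NULL DEFAULT GETDATE()
);
GO

CREATE TRIGGER trg_Patients_Audit
ON dbo.Patients
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Rows only in INSERTED are inserts, rows only in DELETED are
    -- deletes, and rows in both are updates.
    INSERT INTO dbo.PatientAudit (Action, PatientID)
    SELECT CASE WHEN d.PatientID IS NULL THEN 'I'
                WHEN i.PatientID IS NULL THEN 'D'
                ELSE 'U'
           END,
           COALESCE(i.PatientID, d.PatientID)
    FROM inserted i
    FULL OUTER JOIN deleted d ON i.PatientID = d.PatientID;
END;
```

Because the INSERTED and DELETED pseudo-tables are set-based, this handles multi-row statements correctly, which per-row logging code often gets wrong.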
Eric Stimpson (Valued Member)
SQL tracing in general, and server-side tracing in particular (simply because of its much lower overhead), is one of my favorite tools for a wide variety of tasks, ranging from auditing, performance tuning, and lock and deadlock investigations to looking under the hood at how a particular application accesses a system, or even how some of the SQL tools perform their tasks. Creating SQL traces using the sp_trace_create proc can be a little tricky for a first-timer, but creating your own scripts or procedures gives you much more control over how a trace works than using Profiler, so I highly recommend you give it a shot.

Here's an example script that I frequently use as a starting point:


declare @ProcReturn int,
        @TraceID int

-- Create the trace. Option 2 = TRACE_FILE_ROLLOVER; sp_trace_create
-- appends .trc to the file name automatically.
exec @ProcReturn = sp_trace_create
    @TraceID = @TraceID output,
    @Options = 2,
    @TraceFile = N'e:\TraceData\LockingIssues',
    @MaxFileSize = null

select @ProcReturn, @TraceID

-- Turn on every column (1-44) for each event of interest.
declare @Column int,
        @On bit
set @Column = 1
set @On = 1

while @Column <= 44
begin
    exec sp_trace_setevent @TraceID = @TraceID, @EventID = 10, @ColumnID = @Column, @On = @On  -- RPC:Completed
    exec sp_trace_setevent @TraceID = @TraceID, @EventID = 12, @ColumnID = @Column, @On = @On  -- SQL:BatchCompleted
    exec sp_trace_setevent @TraceID = @TraceID, @EventID = 41, @ColumnID = @Column, @On = @On  -- SQL:StmtCompleted
    exec sp_trace_setevent @TraceID = @TraceID, @EventID = 43, @ColumnID = @Column, @On = @On  -- SP:Completed
    exec sp_trace_setevent @TraceID = @TraceID, @EventID = 50, @ColumnID = @Column, @On = @On  -- SQLTransaction
    exec sp_trace_setevent @TraceID = @TraceID, @EventID = 59, @ColumnID = @Column, @On = @On  -- Lock:Deadlock Chain

    set @Column = @Column + 1
end

-- Start the trace (status 1 = start).
exec @ProcReturn = sp_trace_setstatus
    @TraceID = @TraceID,
    @Status = 1

select @ProcReturn, @TraceID



This example captures all of the SQL 2000 trace columns for these events:
RPC:Completed, SQL:BatchCompleted, SQL:StmtCompleted, SP:Completed, SQLTransaction, and Lock:Deadlock Chain.

You can examine the contents of the trace after you've stopped it by using this select:

select *
from ::fn_trace_gettable('e:\TraceData\LockingIssues.trc', default)
where EndTime between '2007-11-07 07:56:00.000' and '2007-11-07 08:05:00.000'

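If you want to narrow a trace like the one above to particular logins or applications, sp_trace_setfilter can be called after sp_trace_setevent but before the trace is started. A small sketch (the login name is invented for the example; column 10 = ApplicationName, column 11 = LoginName, comparison 6 = LIKE, 7 = NOT LIKE):

```sql
-- Exclude Profiler's own traffic, and keep only one application login.
exec sp_trace_setfilter @TraceID, 10, 0, 7, N'SQL Server Profiler%'
exec sp_trace_setfilter @TraceID, 11, 0, 6, N'WebAppLogin'
```

Filtering at the server cuts both the overhead of the trace and the volume of data you have to wade through afterwards.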


Regards,
Eric