I use trace logs and trigger-based auditing. (I even wrote articles about it for this site last summer.) I've found those to be quite adequate for my needs.
I have a proc that takes a database and table name, a "main search field" (usually the PK), and a couple of other input parameters, and it does all the work for me: creating a log table (in my DBALog database) for that database, creating the logging trigger (based on a sparse XML structure that only stores columns that have changed), and creating search and undo procs for any logged transaction, customized to the columns in the table being logged. It takes about 2 seconds to add logging to any table and is pretty much fire-and-forget. Of course, sometimes I'll modify the trigger so that it deviates from the default, but that's uncommon.
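To give an idea of what the generated trigger looks like, here's a minimal sketch of that style of audit trigger. The table, column, and log-table names (dbo.MyTable, Col1/Col2, DBALog.dbo.MyTable_Log) are all hypothetical stand-ins, not the actual generated code; the point is the sparse-XML trick, where unchanged columns come back NULL and FOR XML PATH simply omits them:

```sql
-- Hypothetical sketch: audit trigger storing only changed columns as sparse XML.
-- Assumes a log table DBALog.dbo.MyTable_Log (PKValue, LogDate, LogBy, ChangedData XML).
CREATE TRIGGER dbo.MyTable_Audit
ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO DBALog.dbo.MyTable_Log (PKValue, LogDate, LogBy, ChangedData)
    SELECT
        d.MyTableID,
        GETDATE(),
        SUSER_SNAME(),
        (SELECT
            CASE WHEN i.Col1 <> d.Col1 THEN d.Col1 END AS Col1,
            CASE WHEN i.Col2 <> d.Col2 THEN d.Col2 END AS Col2
         FOR XML PATH('row'), TYPE)  -- NULLs (unchanged columns) are dropped, keeping the XML sparse
    FROM deleted AS d
    INNER JOIN inserted AS i
        ON i.MyTableID = d.MyTableID;
END;
```

The generator proc would build the CASE list from the actual column metadata of the target table, which is also what lets it emit search and undo procs customized to those columns.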
I also have the default trace running, and two custom traces running on the databases that need it the most. All are set to restart if the SQL Server service restarts (from a reboot or whatever). Together they generally keep about 3-4 days of data.
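One common way to make a server-side trace survive a service restart (I'm not claiming it's the exact mechanism used here) is to wrap the trace-creation calls in a stored procedure in master and mark it as a startup proc. The proc name below is a hypothetical placeholder:

```sql
-- Hypothetical sketch: auto-restarting a custom trace with the service.
-- Assumes dbo.StartAuditTrace in master wraps the sp_trace_create / sp_trace_setevent /
-- sp_trace_setstatus calls that define and start the trace.
USE master;
GO
EXEC sp_procoption
    @ProcName    = N'dbo.StartAuditTrace',
    @OptionName  = N'startup',
    @OptionValue = N'on';  -- runs automatically each time the SQL Server service starts
```

The default trace, by contrast, restarts on its own as long as the 'default trace enabled' server configuration option is left on.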
And I have a DDL trigger in every production database and in "model" that logs schema/code changes, including who made the change, when, and the script used. I've had to add a few filters to it, because maintenance plan index rebuilds would otherwise junk up the log, but beyond that it's been quite handy.
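A minimal sketch of that kind of DDL audit trigger, built on EVENTDATA(). The log table name (DBALog.dbo.DDLLog) and the specific event types filtered out are assumptions for illustration; the actual filters would depend on what the maintenance plans run:

```sql
-- Hypothetical sketch: database-level DDL trigger logging who/when/what for schema changes.
-- Assumes a log table DBALog.dbo.DDLLog (EventDate, LoginName, EventType, ObjectName, SqlCommand).
CREATE TRIGGER DDL_AuditLog
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @ed XML = EVENTDATA();
    DECLARE @EventType sysname =
        @ed.value('(/EVENT_INSTANCE/EventType)[1]', 'sysname');

    -- Filter out maintenance-plan noise (example events; tune to your environment)
    IF @EventType IN (N'ALTER_INDEX', N'UPDATE_STATISTICS')
        RETURN;

    INSERT INTO DBALog.dbo.DDLLog (EventDate, LoginName, EventType, ObjectName, SqlCommand)
    SELECT
        GETDATE(),
        @ed.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname'),
        @EventType,
        @ed.value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname'),
        @ed.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');
END;
```

Putting the same trigger in "model" means every newly created database inherits it automatically.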
Those are what I use for auditing. Some of it may be overkill, but performance hasn't suffered enough for any user to tell the difference, and it has come in handy quite a few times.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon