SQL Stored Procedure to Log Updates, Independent of Database Structure
Posted Wednesday, December 20, 2006 11:41 AM
Forum Newbie


Group: General Forum Members
Last Login: Wednesday, October 12, 2011 9:34 PM
Points: 3, Visits: 19
Comments posted here are about the content posted at http://www.sqlservercentral.com/columnists/kramot/2773.asp
Post #331882
Posted Monday, February 12, 2007 1:53 AM


Mr or Mrs. 500


Group: General Forum Members
Last Login: Yesterday @ 12:24 PM
Points: 579, Visits: 2,518
Nigel Rivett's technique is worth taking a look at: http://www.simple-talk.com/sql/database-administration/pop-rivetts-sql-server-faq-no.5-pop-on-the-audit-trail/


Best wishes,

Phil Factor
Simple Talk
Post #344096
Posted Monday, February 12, 2007 3:26 AM
SSCrazy


Group: General Forum Members
Last Login: Monday, June 9, 2014 6:02 AM
Points: 2,674, Visits: 697

We had a long discussion about cursors vs. loops; a WHILE loop is still effectively a cursor, and I doubt the impact within a single trigger would be much different.

My view: if you want to destroy your database's performance, add triggers. Note that update triggers force all updates to be deferred, so there are no in-place updates (I'm not sure whether that is still so in SQL 2005); this can cause problems with transactional replication, for instance.



The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
Post #344115
Posted Tuesday, February 13, 2007 7:57 AM
SSCertifiable


Group: General Forum Members
Last Login: Tuesday, May 6, 2014 5:51 AM
Points: 6,266, Visits: 2,028
I have to say that, like Colin, I think this sort of operation has to be considered *very* carefully before you jump the gun. High-performance databases don't take this sort of thing easily; you have to *work* to make it happen. (Been there, done that.)

For a small database that is not too heavily used, you could use whatever makes you happy.



* Noel
Post #344541
Posted Tuesday, February 13, 2007 5:41 PM
Forum Newbie


Group: General Forum Members
Last Login: Wednesday, October 12, 2011 9:34 PM
Points: 3, Visits: 19
Noel,
I agree it's not for everyone. I may not have stressed that enough, but I thought titling it "quick and dirty" implied that. This sort of solution would be required only if the architecture of the database is flawed to begin with, and if your DB is high-traffic and badly designed it is likely to fail anyway. However, I cannot agree with Colin's statement, because my CPU tests have shown a huge difference between the cursor and the WHILE loop: the loop performed far better than the cursor.
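
For anyone who wants to reproduce the comparison, here is a minimal sketch of the two ways of walking column names inside a trigger. The temp table #cols(colid, name) and the variable names are illustrative assumptions, not the article's actual code:

-- WHILE-loop version: step through column ids with plain SELECTs
DECLARE @colid int, @name sysname
SELECT @colid = MIN(colid) FROM #cols
WHILE @colid IS NOT NULL
BEGIN
    SELECT @name = name FROM #cols WHERE colid = @colid
    -- ... per-column comparison of inserted/deleted values goes here ...
    SELECT @colid = MIN(colid) FROM #cols WHERE colid > @colid
END

-- Cursor version of the same walk
DECLARE col_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT colid, name FROM #cols ORDER BY colid
OPEN col_cur
FETCH NEXT FROM col_cur INTO @colid, @name
WHILE @@FETCH_STATUS = 0
BEGIN
    -- ... same per-column work ...
    FETCH NEXT FROM col_cur INTO @colid, @name
END
CLOSE col_cur
DEALLOCATE col_cur

Timing either variant over a wide table should show whatever difference exists on your hardware; results will vary with cursor options (FAST_FORWARD vs. the default).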
Thanks for reading
K
Post #344716
Posted Wednesday, February 14, 2007 1:09 AM
SSCrazy


Group: General Forum Members
Last Login: Monday, June 9, 2014 6:02 AM
Points: 2,674, Visits: 697

To be honest, cursor/loop = row-by-agonising-row operations within triggers would be my worst nightmare for a production database. The overhead of deferred updates is also critical, as this will cause fragmentation of table/index structures, degrading performance further. We did the threads on loops vs. cursors, so I'm not going to comment further. Itzik Ben-Gan has some good examples of where a cursor works faster and better; however, that's not really part of this thread.

I agree auditing is important (SOX, for instance) and tricky, and there are not too many alternatives, but triggers have, in my experience, not been the best solution.
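
For contrast with the row-by-row approach, a set-based audit trigger for a single known table avoids looping entirely. This is a sketch of my own, not the article's generic solution; the table dbo.Customers, its columns, and the tbl_log column list are all assumed for illustration:

-- Hypothetical table dbo.Customers(CustomerID int PK, Email varchar(100))
CREATE TRIGGER trg_Customers_AuditEmail ON dbo.Customers
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON
    IF UPDATE(Email)
        INSERT INTO dbo.tbl_log (TableName, FieldName, OldValue, NewValue, UpdateDate)
        SELECT 'Customers', 'Email', d.Email, i.Email, GETDATE()
        FROM inserted i
        JOIN deleted d ON d.CustomerID = i.CustomerID
        WHERE ISNULL(d.Email, '') <> ISNULL(i.Email, '')
END

One INSERT handles any number of updated rows, which is exactly what a per-row loop cannot do cheaply; the trade-off is that you need one such block (or trigger) per audited column.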



The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
Post #344745
Posted Wednesday, February 14, 2007 2:20 AM


Mr or Mrs. 500


Group: General Forum Members
Last Login: Yesterday @ 12:24 PM
Points: 579, Visits: 2,518

I think that Keren has a good point. A lot of database developers are dealing with the task of revising existing code in difficult circumstances, or trying to tie down a tricky bug, and I have experienced times myself when such code has been extremely useful. However, I'd never actually design a database with this sort of code in it. Who would?

 




Best wishes,

Phil Factor
Simple Talk
Post #344751
Posted Thursday, February 15, 2007 6:07 AM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Friday, March 8, 2013 5:22 AM
Points: 119, Visits: 9
A few other questions, since someone mentioned this isn't ideal and I have to put auditing into our applications sometime soon!

When designing from scratch, is the auditing best done as part of the calling stored procedure, or is it best in the calling application?

Assuming the database structure is large and complex, would the data still be pivoted, or would you replicate the table structure (perhaps in a separate database/server), or store versions in the same table?

How important is user-friendliness when viewing the audit trail (does anyone have experience of end users wanting to view a record of changes without tech support)?

What additional functionality would you expect to provide, such as archiving, etc.?




Phil Nicholas
Post #345097
Posted Wednesday, August 8, 2007 12:49 PM
Forum Newbie


Group: General Forum Members
Last Login: Sunday, August 26, 2007 12:14 PM
Points: 5, Visits: 1

Hello forum,

I have a couple of comments on Karen's posting. First, bravo for taking on the hotly controversial topic of handling change logging with triggers; only the brave tread here. And I think that you've adequately cautioned your readers that this is not the ideal solution: most would agree that if you designed a brand-new database with change logging, you would abstract the application's direct access to the tables entirely and handle both the updates and the change logging through a set of stored procedures.

I have taken Karen's code as a template and devised a solution in a client's existing database. This is a case where I cannot change all the client-sourced update commands (which quite often reference the tables directly), so the solution here had to be based on triggers. This is also a database that typically updates relatively few rows at a time and has plenty of horsepower on the server, so there will be no appreciable performance degradation for the end user. There are rare situations where bulk updates are performed, but I was careful to make sure that all these new triggers were disabled before, and re-enabled after, the bulk update process is run.
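
The disable/re-enable step around a bulk update can be sketched like this (table and trigger names are hypothetical):

ALTER TABLE dbo.Customers DISABLE TRIGGER trg_Customers_Audit

-- ... bulk update runs here, unaudited ...

ALTER TABLE dbo.Customers ENABLE TRIGGER trg_Customers_Audit

Note that rows changed while the trigger is disabled leave no audit trail at all, so this window needs to be covered by some other record (e.g. the bulk-load job's own log).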

I did also find one teensy little error in Karen's code: just before the WHILE loop, the @CurrentRowId and @tmpName variables are initialized with this statement:

SELECT @CurrentRowId = colid, @tmpName = name
FROM syscolumns
WHERE colid = @NextRowId

The WHERE clause is not specific enough to zero in on one row: the colid column in syscolumns is definitely not unique in the table. Here's a suggestion:

SELECT @CurrentRowId = colid, @tmpName = name
FROM syscolumns
WHERE id = object_id('''+@TableName+''') and objectproperty(id, ''IsUserTable'') = 1 and colid = @NextRowId

I think this is what Karen had in mind.
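
On SQL Server 2005, a similar lookup can be written against the newer catalog views, which key naturally on object_id. This is my own variant, not from the original code, keeping the article's dynamic-SQL quoting convention:

SELECT @CurrentRowId = c.column_id, @tmpName = c.name
FROM sys.columns c
WHERE c.object_id = OBJECT_ID('''+@TableName+''')
AND c.column_id = @NextRowId

sys.columns only contains columns of schema-scoped objects, so the IsUserTable check becomes unnecessary for ordinary user tables.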

Lastly, thanks Karen for a great article!  Even though this solution has a specific niche where it might fit perfectly, we should know that there are very few "silver bullet" solutions out there for SQL Server and should carefully measure EVERY solution we borrow from someone else before throwing it into production!

Post #389029
Posted Wednesday, August 8, 2007 1:23 PM
Forum Newbie


Group: General Forum Members
Last Login: Sunday, August 26, 2007 12:14 PM
Points: 5, Visits: 1

Sorry, I forgot one more thing: I noticed that Karen's tbl_log (and the code that writes to it) doesn't include the PK(s) of the table the change came from. This makes it difficult, at best, to use the information gathered in your audit table to trace back to the row in the base table that the change was performed on.

This was not difficult to solve. In my particular database, I know I have composite PKs with a maximum of 5 columns, so I added PK1, PK2, PK3, PK4 and PK5 columns to my version of tbl_log, with datatypes of varchar(60) or so. Then in the trigger code, just before spinning through the columns in the table to see which ones have changed, I get the PK column values like so:

--Loop through the PK columns for the table and grab values from the temp tables for each
select @PKpos = min(ordinal_position), @PKposmax = max(ordinal_position)
from information_schema.key_column_usage
where objectproperty(object_id(constraint_name),''IsPrimaryKey'') = 1
and table_name = '''+@TableName+'''

while @PKpos <= @PKposmax
begin
    select @PKval = null
    select @PKcolname = quotename(column_name)
    from information_schema.key_column_usage
    where objectproperty(object_id(constraint_name),''IsPrimaryKey'') = 1
    and table_name = '''+@TableName+''' and ordinal_position = @PKpos

    if exists(select * from #inserted'+@VarRandom+')
        select @subsql = N''select @PKval = convert(varchar(60), d.''
            + @PKcolname+'') from #inserted'+@VarRandom+' d ''
    else
        select @subsql = N''select @PKval = convert(varchar(60), d.''
            + @PKcolname+'') from #deleted'+@VarRandom+' d ''

    exec sp_executesql @subsql, N''@PKval varchar(60) output'', @PKval output

    if @PKpos = 1
        set @PK1val = @PKval
    else if @PKpos = 2
        set @PK2val = @PKval
    else if @PKpos = 3
        set @PK3val = @PKval
    else if @PKpos = 4
        set @PK4val = @PKval
    else if @PKpos = 5
        set @PK5val = @PKval

    select @PKpos = @PKpos + 1
end

As you might guess, I then wrote the @PK1val, @PK2val, ... values to my ChangeLog table in corresponding columns. It was not really necessary to record the column name for each part of the composite key, because I can derive that easily from the table name. I can now join the ChangeLog table directly back to the row in the base table where the change was originally performed, for reporting.
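
A sketch of the reporting join this enables, assuming a single-column integer key and illustrative table/column names (my ChangeLog columns here mirror the PK1..PK5 scheme described above):

SELECT c.*, l.FieldName, l.OldValue, l.NewValue
FROM dbo.ChangeLog l
JOIN dbo.Customers c
    ON c.CustomerID = CONVERT(int, l.PK1)   -- PK1 is stored as varchar(60)
WHERE l.TableName = 'Customers'

For a composite key, each additional PKn column joins to the corresponding key column in the same way.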

Post #389040