Detecting Changes to a Table

  • Comments posted to this topic are about the item Detecting Changes to a Table

  • In this MSDN article there is information about what I understand is a "native" way of doing change tracking, in relation to building applications for Sync Framework in SQL Server 2008. Is this the same approach as the CHECKSUM(), BINARY_CHECKSUM(), and CHECKSUM_AGG() functions mentioned in the article, or is it a third way?

    How to: Use SQL Server Change Tracking http://msdn.microsoft.com/en-us/library/cc305322.aspx
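    For reference, the CHECKSUM-based approach the article discusses can be sketched roughly as follows (the table name dbo.InvHeader is illustrative, not from the article):

    ```sql
    -- Minimal sketch: compute a single table-level checksum and compare it
    -- against a previously stored value to detect whether anything changed.
    DECLARE @previous_checksum INT = 0;  -- would normally be persisted between runs

    SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) AS table_checksum
    FROM dbo.InvHeader;

    -- If table_checksum differs from @previous_checksum, the table has
    -- (probably) changed -- subject to the collision caveats the article raises.
    ```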

  • Hi jongy,

    I'm afraid I am not very familiar with change tracking.

    I also skimmed the article you listed, but can see no mention of the CHECKSUM functions discussed in this article.

    Regards,

    Lawrence

  • Lawrence,

    But do you then agree that the MSDN article outlines a third method for change tracking, in addition to the ones discussed in the SQL Central article, or am I misunderstanding something here?

    /jongy

  • Agreed. I believe that the change tracking functionality is designed primarily to act at a lower level of granularity, so that individual row changes to a table can be audited, but I imagine you could also use it to provide an aggregated, summary "table level" view to judge if any changes have been performed across the whole table.

    Thanks for pointing this out.

    Regards,

    Lawrence
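    For anyone curious, the Change Tracking feature from the MSDN article is enabled per database and per table, and queried through CHANGETABLE. A hedged sketch (database, table, and key column names are illustrative):

    ```sql
    -- Enable change tracking at the database level, then for a specific table.
    ALTER DATABASE MyDb
        SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

    ALTER TABLE dbo.InvHeader
        ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = OFF);

    -- Later, ask for everything that changed since a previously saved version:
    DECLARE @last_sync BIGINT = 0;  -- normally persisted between runs

    SELECT ct.SYS_CHANGE_OPERATION,  -- I, U, or D
           ct.InvHeaderID            -- primary key column(s) of the tracked table
    FROM CHANGETABLE(CHANGES dbo.InvHeader, @last_sync) AS ct;

    -- Save this as @last_sync for the next run:
    SELECT CHANGE_TRACKING_CURRENT_VERSION();
    ```

    Unlike the CHECKSUM approach, this also reports deletes, which is why it suits Sync Framework scenarios.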

  • What if you add a column UPDATED_ON of type datetime with default to GETDATE() ?

    I suppose that would make it work.

  • Hi fmendes,

    That would cover inserted rows only, but not cater for updates on the row, nor row deletions.

    Regards,

    Lawrence.

  • SQL Server maintains statistics, including counts and timestamps, whenever table indexes are updated. This metadata can be queried from an interesting dynamic management view called sys.dm_db_index_usage_stats. For some situations this would suit the purpose of detecting table changes.

    For example:

    select object_name(s.object_id) as table_name, i.name as index_name,
           s.last_user_update, s.user_updates
    from sys.dm_db_index_usage_stats as s
    join sys.indexes as i
        on i.object_id = s.object_id and i.index_id = s.index_id
    where object_name(s.object_id) = 'InvHeader';

    table_name index_name    last_user_update        user_updates
    ---------- ------------- ----------------------- ------------
    InvHeader  pk_invheader  2011-05-20 15:50:07.210 3713
    InvHeader  uix_invheader 2011-05-19 19:15:01.370 371

    There are other columns in this view that return the number of seeks, scans, etc., so it can also be leveraged to determine how often indexes or tables are being accessed.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Hi Eric,

    Many thanks for your post.

    It is true that DMVs offer lots of useful information, some of which could be applied for requirements discussed in my article.

    However, DMVs typically require elevated user permissions, such as VIEW SERVER STATE.

    Regards,

    Lawrence

  • Thanks! You're correct.

    I should have thought of timestamp/rowversion instead of datetime.
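    A minimal sketch of the rowversion idea mentioned above (table and column names are illustrative):

    ```sql
    -- A rowversion (formerly timestamp) column is bumped automatically on
    -- every insert and update, so MAX(rv) changes whenever any row changes.
    CREATE TABLE dbo.InvHeader_demo
    (
        id     INT IDENTITY PRIMARY KEY,
        amount MONEY,
        rv     ROWVERSION
    );

    -- Compare against a previously saved value to detect changes:
    SELECT MAX(rv) AS current_version FROM dbo.InvHeader_demo;

    -- Caveat: like the default-GETDATE() idea, this still does not detect
    -- deletes by itself, since a deleted row simply disappears.
    ```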

  • Lawrence,

    Thanks for taking the time to write this.

    However, I do not totally agree. While in theory you are correct, best practice is of course to have an update datetime column, and probably also an updated-by column, on your tables. Your stored procs or triggers should always update these columns.

    This should always give you a different checksum.

    So theoretically you are right, but in common "best practice" reality a checksum is a viable option to track table changes.

    H.
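    The convention HansB describes might look something like this (the table, key, and trigger names are hypothetical, not from the thread):

    ```sql
    -- Hypothetical audit columns plus a trigger that keeps them current,
    -- so a table checksum changes on every insert and update.
    ALTER TABLE dbo.InvHeader ADD
        updated_on DATETIME NOT NULL
            CONSTRAINT df_invheader_updated_on DEFAULT GETDATE(),
        updated_by SYSNAME NOT NULL
            CONSTRAINT df_invheader_updated_by DEFAULT SUSER_SNAME();
    GO

    CREATE TRIGGER trg_invheader_audit ON dbo.InvHeader
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE h
        SET updated_on = GETDATE(),
            updated_by = SUSER_SNAME()
        FROM dbo.InvHeader AS h
        JOIN inserted AS i ON i.InvHeaderID = h.InvHeaderID;  -- assumed PK
    END;
    ```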

  • I use system tables to see if the table has been updated:

    SELECT @expiration_dt = [modify_date]
    FROM [mydb].[sys].[tables]
    WHERE [name] = 'mytable'

    If I detect @expiration_dt to be newer than my stored data (which obviously is datetime'd), then I rerun my code.

  • Thanks HansB,

    It's a very good point you raise. Of course you are correct. However, I think it's still worthwhile highlighting the shortcomings of the CHECKSUM functions to further encourage the "best practice" approach to be followed. 😉

    Many thanks,

    Lawrence

  • Hi virtualjosh,

    I'd be very careful using the sys.tables.modify_date field.

    In my experience, it is not always kept up to date in real time.

    For example, try the following:

    CREATE TABLE test1 (i INT, vc1 VARCHAR(10))
    SELECT modify_date FROM sys.tables WHERE name = 'test1'
    INSERT test1 VALUES (1, 'row1')
    SELECT modify_date FROM sys.tables WHERE name = 'test1'

    The values returned are the same....(?)

    Regards,

    Lawrence

  • virtualjosh (5/23/2011)


    I use system tables to see if the table has been updated:

    SELECT @expiration_dt = [modify_date]
    FROM [mydb].[sys].[tables]
    WHERE [name] = 'mytable'

    If I detect @expiration_dt to be newer than my stored data (which obviously is datetime'd), then I rerun my code.

    The modify_date column on the sys.tables and sys.objects catalog views contains the date/time the schema of an object was last altered, for example when you add a new column. It does not contain the date/time of the last insert/update/delete.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
