January 4, 2008 at 8:03 am
I am not sure if you are talking about auditing or data integrity. If it is data integrity, you can achieve it by using transactions, together with primary keys and foreign keys.
You can also use sp_getapplock to prevent race conditions. You could use encryption to keep the data safe from prying eyes.
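As a rough sketch of the sp_getapplock idea (the resource name and the update itself are just placeholders):

```sql
-- Serialise a critical section with an application lock.
-- 'MyCriticalUpdate' is an arbitrary example resource name.
BEGIN TRAN;

EXEC sp_getapplock
    @Resource    = 'MyCriticalUpdate',
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',
    @LockTimeout = 5000;  -- wait up to 5 seconds for the lock

-- ... perform the update that must not race with other sessions ...

COMMIT;  -- the lock is released when the owning transaction ends
```

Because the lock owner is the transaction, you don't need an explicit sp_releaseapplock call; commit or rollback frees it.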
-Roy
January 4, 2008 at 8:16 am
Apologies for the ambiguity in my above post. I suppose what I am trying to achieve is what is quite commonly achieved with file systems. Imagine there is a text document sitting on a server. To verify that the file is unchanged since its last authorised write, we can quite simply hash the document and verify the hash the next time the document is opened.
With a database this is much more difficult. Databases are dynamic by nature and are not fixed (with some exceptions); there are multiple writes/reads occurring at any given time. What I am trying to ensure is that the data has not been tampered with in any way since its last successful/valid write (insert). I suppose encryption could ensure this is fulfilled, but I was wondering if there's anything similar to the above file-system scenario.
January 4, 2008 at 8:21 am
Have a look at the HASHBYTES function. Might do what you want. There are also CHECKSUM and CHECKSUM_AGG.
I'm not sure they're sufficient for what you're trying to do, but they might help.
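Something along these lines, as a sketch only (the table and column names are invented, and CHECKSUM can produce collisions, so it's a weaker check than a cryptographic hash):

```sql
-- Per-row hash: compute and store this at the last valid write,
-- then recompute and compare it when the row is read back.
SELECT OrderID,
       HASHBYTES('SHA1',
                 CONVERT(varchar(50), Amount) + '|' + CustomerName) AS RowHash
FROM dbo.Orders;

-- Whole-table fingerprint: cheap to compute, but only a rough
-- change indicator, not tamper-proof.
SELECT CHECKSUM_AGG(CHECKSUM(*)) AS TableChecksum
FROM dbo.Orders;
```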
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 4, 2008 at 8:27 am
Thanks GilaMonster! Just checked on MSDN and have now found a bunch of useful functions that could help me with this. 🙂