• Richard Gardner-291039 (10/2/2009)


    Quite - as the DBA you should be revoking all privileges on all tables and only allowing pre-designed updates via stored procedures - that's the only way you can guarantee data integrity. Are your developers DBAs? No. So why give them the opportunity to compromise your data? Yeah, yeah, rapid response yadda yadda, flexibility yadda yadda, all means nothing when your data is compromised.

    No, the only way you can get close to guaranteeing data integrity is to implement database constraints that, while enabled, prevent data lacking integrity from being written to the database in the first place.

    Further, in most actual businesses there is a huge difference between whatever that business's security ideal is and the security reality there, and there is also a large degree of variation in what the security ideal is. Regardless of the desired state, the actual state in many places is different; even if you can and do lock SQL write access down to DBAs, that doesn't guarantee that a DBA won't make a mistake at some point. All DBAs are human, no humans are perfect; therefore, no DBAs are perfect.

    A function-based CHECK constraint verifying that date ranges don't overlap is philosophically not significantly different from choosing to use NUMERIC(9) instead of CHAR(9) for 9-digit numbers, or from having a CHECK constraint limiting an indicator field to only the four possible valid values. Yes, it's more expensive, but all methods I've seen are expensive; I prefer the data integrity method that is:

    A) Most likely to be functioning at any given point in time (I see triggers disabled more often than constraints, and I see stored procedures bypassed more often than I see triggers disabled)

    B) Most easily checked in a generic fashion (DBCC CHECKCONSTRAINTS)

    C) Capable of preventing issues in the widest range of data write methods (write methods include but are not limited to stored procedures, triggers, ad-hoc SQL from applications, hand-written ad-hoc SQL, bcp/BULK INSERT, and so on).

    D) Closest to "guaranteeing" that the bad data does not currently exist, even if it existed prior to the most recent enabling/updating of the prevention mechanism.

    I've often seen instances of bad data, and when I ask someone else, the answer is "Oh! That shouldn't happen." Occasionally, the answer is "That can't happen," which is clearly false, since it did happen. Database-level constraints are the best way I know of to keep such issues from happening - whether they're function-based and expensive, a simple IN ('M','F','U') check, or just choosing NUMERIC(9) or another datatype instead of CHAR(9).
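    And the cheap, declarative end of the spectrum looks like this (again, table and column names are purely illustrative):

```sql
-- Hypothetical table: the datatype itself plus a one-line CHECK constraint
-- do the work, no procedural code required.
CREATE TABLE dbo.Person
(
    -- NUMERIC(9) rejects non-numeric 9-character values that CHAR(9)
    -- would happily accept.
    TaxID  NUMERIC(9) NOT NULL,
    Gender CHAR(1)    NOT NULL
        CONSTRAINT CK_Person_Gender CHECK (Gender IN ('M','F','U'))
);

-- Rejected at write time by CK_Person_Gender, no matter whether the write
-- came from a stored procedure, a trigger, ad-hoc SQL, or bcp/BULK INSERT.
INSERT INTO dbo.Person (TaxID, Gender) VALUES (123456789, 'X');
```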