
Enforcing Data Quality while using Surrogate Keys
Posted Tuesday, September 8, 2009 12:21 PM
Forum Newbie

You point out, with painful illustration, the Achilles heel of using migrated primary keys in database design: the keys can multiply rapidly, making it very tiresome to construct join clauses.

But, as my article points out, you lose the relational database's capability of enforcing powerful business rules such as the Unification Rule, which I illustrate there.

Here is an article that explains and illustrates the Unification Rule. The article was written by a data modeling guru, Bert Scalzo (chief architect of the popular TOAD data modeling product):

http://www.toadworld.com/LinkClick.aspx?fileticket=hZvoqN6j0js=&tabid=321

Marvin Elder
Post #784472
Posted Tuesday, September 8, 2009 1:13 PM
Forum Newbie

Thanks for the reference. Now I understand, but I have rarely seen this pattern. As a DBA with 20 years' experience, I want as much data integrity in the database as possible, but I think some business rules are more understandable in application code than in tables and foreign keys.

I apologize that I do not have more time to study your model in detail, but the business rule that only people in a department may access a database owned by that department does not require data to be persisted. I think it is easier in the application to get the department of the user logged into the system and then allow access to the databases owned by that department. The lower tables in your diagram hold data that does not need to be persisted, since it is easily inferred from the data in the top tables.
Post #784506
Posted Tuesday, September 8, 2009 1:13 PM
Grasshopper

I would have to agree with most of the commenters on this post. I am currently using SQL Server 2005 for most of my development. Yes, I create logical data models and identify what the unique entity keys should be, and yes, I use surrogate keys as my primary keys in all my tables and add a unique constraint on the entity key. Until someone shows me a SQL Server 2005 benchmark proving that joining two tables on three varchar(10) columns is faster, I'll most likely continue this practice.

When a significant number of records are being joined across one or more tables, surrogate keys are vital for performance. The same applies, I'd argue, to a standard SQL Server Analysis Services data source.
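The practice described above can be sketched in a few lines. This is a minimal illustration using SQLite via Python (the thread is about SQL Server, but the pattern is the same); the table and column names are invented:

```python
import sqlite3

# Surrogate primary key plus a unique constraint on the natural (entity) key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customers (
    customer_id  INTEGER PRIMARY KEY,   -- surrogate key, auto-assigned
    company_name TEXT NOT NULL,
    location     TEXT NOT NULL,
    UNIQUE (company_name, location)     -- the natural key is still enforced
);
""")
conn.execute("INSERT INTO Customers (company_name, location) "
             "VALUES ('ACME', 'Cincinnati')")

# A second row with the same natural key is rejected, even though its
# surrogate key would have been different.
try:
    conn.execute("INSERT INTO Customers (company_name, location) "
                 "VALUES ('ACME', 'Cincinnati')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

print(duplicate_allowed)  # False
```

Without the unique constraint, both inserts would succeed and the table would silently hold two rows for the same entity, which is exactly the failure the article's Example 1 describes.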
Post #784507
Posted Tuesday, September 8, 2009 1:47 PM


SSCommitted

I agree with you that you should not use a count of the physical insertion attempts in a schema. At best it is metadata, and usually it is a "pseudo-pointer chain" that you maintain by hand. At least back in the days of TOTAL, IDMS, IMS, etc., the DB handled the pointer chains for us.

My first thought was something like this skeleton:


CREATE TABLE Organizations
(org_id INTEGER NOT NULL PRIMARY KEY
 -- << organizational data >>
);

CREATE TABLE OrgDatabases
(org_id INTEGER NOT NULL
 REFERENCES Organizations (org_id)
 ON DELETE CASCADE
 ON UPDATE CASCADE,
 db_id INTEGER NOT NULL,
 -- << database data >>
 PRIMARY KEY (org_id, db_id));

CREATE TABLE OrgDatabaseGroups
(org_id INTEGER NOT NULL
 REFERENCES Organizations (org_id)
 ON DELETE CASCADE
 ON UPDATE CASCADE,
 db_id INTEGER NOT NULL,
 FOREIGN KEY (org_id, db_id)
 REFERENCES OrgDatabases (org_id, db_id) -- the compound key lives in OrgDatabases, not Organizations
 ON DELETE CASCADE
 ON UPDATE CASCADE,
 grp_id INTEGER DEFAULT 0 NOT NULL, -- 0 stands in for '{unassigned}'; an INTEGER cannot default to a string
 -- << user groups data >>
 PRIMARY KEY (org_id, db_id, grp_id));

CREATE TABLE DatabaseUsers
(org_id INTEGER NOT NULL
 REFERENCES Organizations (org_id)
 ON DELETE CASCADE
 ON UPDATE CASCADE,
 db_id INTEGER NOT NULL,
 FOREIGN KEY (org_id, db_id)
 REFERENCES OrgDatabases (org_id, db_id)
 ON DELETE CASCADE
 ON UPDATE CASCADE,
 grp_id INTEGER DEFAULT 0 NOT NULL,
 FOREIGN KEY (org_id, db_id, grp_id)
 REFERENCES OrgDatabaseGroups (org_id, db_id, grp_id)
 ON DELETE CASCADE
 ON UPDATE CASCADE,
 user_id INTEGER NOT NULL,
 -- << user data >>
 PRIMARY KEY (org_id, db_id, grp_id, user_id));

(Note that SQL Server will reject the overlapping cascading foreign keys here as "multiple cascade paths"; there you would declare all but one of them with NO ACTION.)

This is a sequence of nested compound keys. It should avoid what Tom Johnston called Non-Normal Form redundancies. It is also fast in products like Sybase, where a REFERENCES clause is implemented as a pointer chain back to the unique row in the referenced table; it is more expensive in MS SQL Server, thanks to the redundant copies of the FK values.

People are afraid of overlapping UNIQUE constraints, but they can be very useful.
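A runnable miniature of the top two levels of this skeleton, using SQLite through Python purely for illustration (SQLite only enforces foreign keys when the pragma below is on); the data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite checks FKs only with this on
conn.executescript("""
CREATE TABLE Organizations (
    org_id INTEGER NOT NULL PRIMARY KEY
);
CREATE TABLE OrgDatabases (
    org_id INTEGER NOT NULL
        REFERENCES Organizations (org_id)
        ON DELETE CASCADE ON UPDATE CASCADE,
    db_id  INTEGER NOT NULL,
    PRIMARY KEY (org_id, db_id)   -- the compound key carries org_id down a level
);
""")
conn.execute("INSERT INTO Organizations VALUES (1)")
conn.execute("INSERT INTO OrgDatabases VALUES (1, 10)")
conn.execute("INSERT INTO OrgDatabases VALUES (1, 11)")

# Deleting the organization cascades through the compound foreign key,
# removing both of its databases in one statement.
conn.execute("DELETE FROM Organizations WHERE org_id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM OrgDatabases").fetchone()[0]
print(remaining)  # 0
```

Each lower level repeats the full key of the level above, so a single DELETE at the top cleans out the whole subtree without any hand-maintained pseudo-pointer chain.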


Books in Celko Series for Morgan-Kaufmann Publishing
Analytics and OLAP in SQL
Data and Databases: Concepts in Practice
Data, Measurements and Standards in SQL
SQL for Smarties
SQL Programming Style
SQL Puzzles and Answers
Thinking in Sets
Trees and Hierarchies in SQL
Post #784549
Posted Tuesday, September 8, 2009 3:29 PM


SSChampion

G² (9/8/2009)
I'm a big fan of using surrogate keys when designing databases, but they are not intended to enforce data integrity. They should be used to simplify joins and to establish relationships to other entities in the database.

Example 1 should never happen because any table with a surrogate key should also have a unique constraint placed on the natural key of the table to enforce data integrity.

Example 2 should also never happen because it's not a good practice to use a surrogate key on a linking/intersection table. Those tables should always consist of the combination of the two surrogate keys from the two tables that you are establishing the relationship between.

Thank you for the article. It was definitely an interesting read. I've just never encountered either of the issues that you describe, because of the practices that I've listed above.

Greg

+1
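Greg's second point can be sketched quickly. This is an illustrative SQLite example via Python (entity names are invented): the linking table's primary key is simply the combination of the two parents' surrogate keys, so no extra surrogate is needed and a duplicate pairing is impossible.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE Students (student_id INTEGER PRIMARY KEY);
CREATE TABLE Courses  (course_id  INTEGER PRIMARY KEY);
CREATE TABLE Enrollments (
    student_id INTEGER NOT NULL REFERENCES Students (student_id),
    course_id  INTEGER NOT NULL REFERENCES Courses (course_id),
    PRIMARY KEY (student_id, course_id)   -- no surrogate on the linking table
);
""")
conn.execute("INSERT INTO Students VALUES (1)")
conn.execute("INSERT INTO Courses VALUES (100)")
conn.execute("INSERT INTO Enrollments VALUES (1, 100)")

# The same pairing cannot be inserted twice: the compound primary key
# rejects it, with no separate unique constraint required.
try:
    conn.execute("INSERT INTO Enrollments VALUES (1, 100)")
    duplicate_pair_allowed = True
except sqlite3.IntegrityError:
    duplicate_pair_allowed = False
print(duplicate_pair_allowed)  # False
```

Had Enrollments used its own surrogate id as the primary key instead, both rows would have been accepted, which is exactly the article's Example 2.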




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #784605
Posted Tuesday, September 8, 2009 3:41 PM
SSC Journeyman

If you are doing data warehousing, then I think surrogate keys are a must, for two reasons:

1. Storage - particularly in large fact tables, a 4-byte int is going to be smaller and easier to work with, and so is the corresponding index you'll probably put on it.

2. Unknown Records - you have a nice, easy way to reference unknown dimension values.

I don't see any problem in a data warehouse with having some additional indexes to enforce integrity rules.

OLTP may be a different story, but that's not my area of expertise.
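The "unknown records" convention can be sketched as follows. This is an illustrative SQLite example via Python; the dimension/fact names and the reserved key -1 are common conventions, not anything from this thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE DimCustomer (
    customer_key  INTEGER PRIMARY KEY,   -- small surrogate key
    customer_name TEXT NOT NULL
);
CREATE TABLE FactSales (
    customer_key INTEGER NOT NULL REFERENCES DimCustomer (customer_key),
    amount       REAL NOT NULL
);
INSERT INTO DimCustomer VALUES (-1, 'Unknown');  -- pre-seeded reserved member
INSERT INTO DimCustomer VALUES (1, 'ACME');
INSERT INTO FactSales VALUES (1, 250.0);
INSERT INTO FactSales VALUES (-1, 99.0);         -- source row had no customer
""")

# Every fact row joins, including the ones whose dimension value was
# missing in the source system; nothing silently drops out of reports.
rows = conn.execute("""
    SELECT d.customer_name, SUM(f.amount)
    FROM FactSales AS f
    JOIN DimCustomer AS d ON d.customer_key = f.customer_key
    GROUP BY d.customer_name
    ORDER BY d.customer_name
""").fetchall()
print(rows)  # [('ACME', 250.0), ('Unknown', 99.0)]
```

Because the "Unknown" member is a real dimension row, the foreign key stays enforceable even for facts with unresolvable dimension values.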



Kindest Regards,

Martin
Post #784613
Posted Thursday, September 10, 2009 8:52 AM
Valued Member

I can't say I spend any time on enforcing referential data integrity since I moved from Access to SQL Server years ago, and I've never had a problem in 10 years. As I see it, enforcing data integrity compensates for something that should never happen in the first place. It would be like adding if/then/else statements in .NET that would never be executed unless somewhere else in your program you did something incorrect. What it comes down to is that it is hard to justify spending time on it on a per-application basis.
Post #785749
Posted Thursday, September 10, 2009 10:35 AM
Forum Newbie

baconm (9/8/2009)
This article makes it sound like you have to choose between a natural key with a unique index and a surrogate key with a unique index. Every time I use a surrogate primary key, I create a unique key on the columns that comprise the natural key. Surrogate and natural keys each have their role and work best when used for the purpose they were intended. Natural keys are the basis of relational database theory; if you do not have them, I do not know why you would use a relational database. Surrogate keys can be used to simplify joins and to allow the natural key to change without having to cascade all the dependent foreign keys.

I have to agree that surrogate keys have no role during logical modeling. All entities should have a unique identifier, and at the physical level you can add surrogate keys for all tables that have children and multiple-column unique identifiers. I have seen "designers" who immediately switch to surrogate keys in the logical model as soon as the unique identifier goes beyond 2 or 3 attributes, without ever identifying the full natural key. It always leads to duplicates and application failure when more than one row is returned. Yes, I agree, your application is perfect and doesn't allow duplicates, but users are devious and will always find that odd navigation path that will allow them to insert duplicates!

It is also difficult to find the surrogate key for a given row if you do not know, and are not enforcing uniqueness on, the natural key. How often do you SELECT * FROM blahblah WHERE surrogate_key_id = 2468? Usually your first query is SELECT surrogate_key_id FROM blahblah WHERE company_name = 'ACME' AND location = 'CINCINNATI', to get the surrogate key to use in subsequent queries. Company name and location are the natural key in blahblah and should be indexed. Using my method they are indexed as a result of being in a unique key, and the surrogate key is also indexed as the primary key.

As a practical matter, carrying all the unique identifier columns down through all generations can lead to a natural key of tens of columns. Here is an extreme example, using what I believe are natural keys for a person and a department. I propose that the natural key of a person is first, middle, and last name, suffix, date and time of birth, location of birth, and the father and mother (who also have the same natural identifier) of the person. In this case the father's and mother's natural keys comprise 8 columns each, which is 16 columns for two parents; add the other 6 columns for the person and you have 22 columns to identify one employee. I propose that the natural key for a department is company name, company location, date incorporated, incorporation location (the natural key for the company), department name, department location, and date the department was created. That is 7 columns. Now associate an employee with a department and you have 29 columns in the natural key of the employee-department intersection table. You are looking at dozens of predicates in your WHERE clause to join these 4 tables.

On the other hand, if you add a surrogate key to the person table (employee and parents), you drop the number of columns in the employee table to 2 surrogate columns for the parents and the 6 others, for a total of 8. If the company table has a surrogate key, the department table drops to 4 columns, and if the department has a surrogate key, the intersection table is down to 2 columns. To join the same 4 tables there are then only 3 predicates in the join clause.

I have a question because I am not a Business Rules person. What is this Unification Business Rule? I do not understand the purpose. I searched the web and didn't get any hits in the first few pages. Can anyone give me some references so I can understand?



I think exactly the same.
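The two-step lookup pattern baconm describes can be sketched as follows. This is an illustrative SQLite example via Python, reusing the invented blahblah table and values from the quoted post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE BlahBlah (
    surrogate_key_id INTEGER PRIMARY KEY,
    company_name     TEXT NOT NULL,
    location         TEXT NOT NULL,
    UNIQUE (company_name, location)  -- natural key, indexed by the constraint
);
INSERT INTO BlahBlah VALUES (2468, 'ACME', 'CINCINNATI');
""")

# Step 1: resolve the natural key to the surrogate key.
(sk,) = conn.execute(
    "SELECT surrogate_key_id FROM BlahBlah "
    "WHERE company_name = ? AND location = ?",
    ("ACME", "CINCINNATI"),
).fetchone()

# Step 2: all subsequent access goes through the surrogate.
row = conn.execute(
    "SELECT company_name FROM BlahBlah WHERE surrogate_key_id = ?", (sk,)
).fetchone()
print(sk, row[0])  # 2468 ACME
```

The point is that step 1 only works reliably because the unique constraint on the natural key guarantees at most one row comes back; without it, the lookup itself can return duplicates.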
Post #785834
Posted Friday, September 11, 2009 7:39 AM
Forum Newbie

Wow, never had a problem! Are you a one-man shop? How big is your largest database? How much data? What tool do you use to develop applications? How many applications, and what is the biggest? Why do you use Access instead of just storing the data in an OS file with your own structure?

Today's RDBMSs have a lot of capabilities that you do not have to use, but I prefer to have a safety net. I like to have constraint enforcement in the application, to give the user prompt feedback when they enter something incorrectly, and in the database, so that when a developer forgets to enforce a data rule the mistake is caught before bad data is stored. Even if we had perfect programmers, our users are very creative: they can find navigation paths that get around application constraints, but they have never been able to bypass database constraints. As a matter of fact, we had some data errors that we could not reproduce, and we had to add a unique constraint to prevent the problem. Also, the data rules are easy to find if they are located in one place, the database, and application developers can reference this information to make sure they are implementing all the data rules.

I have never had a severe wreck but I don't intend to disable the airbags in my car since there are idiots out there that I have no control over.
Post #786361
Posted Friday, September 11, 2009 9:06 AM
Old Hand

@baconm

If I understand Brian's post he's not entirely out in left field, although I think running with *no* data integrity is a huge mistake.

I think what he's saying is that designing the application so that it's impossible for bad data to reach the SQL engine is the best solution.

And I half-way agree with him--but I'm a belt, suspenders and jumpsuit kind of guy myself... :)

I run a one man shop and it's very nice to be developer, DBA, and systems analyst all rolled into one. Of course most folks don't understand the impact of that "tiny" new feature, but hey, that's life...

At any rate, I think letting the SQL engine handle basic integrity tasks is a much better solution. Things like referential integrity, bounds checking, and unique constraints are no-brainers for the DB, no question. Auditing is another DB-level task that, as a developer, I'm *happy* to let the DB handle. The coding overhead is minimal and the results are damn near foolproof.

Howsomever:

Some things are better handled by the application. There's no need to bother the DB with things like checking that input to a numeric field is actually numeric! Using combo boxes to make it impossible to choose an invalid field value is another way to leverage local processing power.

The database is busy enough, no need to make it work harder than it must. Not to mention eliminating excessive talk on the wire, which is a good thing too.

If you're a DBA, you tend to want *everything* in the back end. Not a good idea. Your server is a limited resource being devoured by a swarm of users. Those users have abundant local resources. The best designs take advantage of those local resources.

It's all about balance. Let the DB do what it's good at and the application do what it's good at.
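One of the DB-side "no-brainers" mentioned above, bounds checking, can be sketched like this. An illustrative SQLite example via Python; the table and the bounds are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Orders (
    order_id INTEGER PRIMARY KEY,
    quantity INTEGER NOT NULL CHECK (quantity BETWEEN 1 AND 1000)
);
""")
conn.execute("INSERT INTO Orders (quantity) VALUES (5)")

# Even if the application's own validation is bypassed, the database-side
# CHECK constraint rejects the out-of-range value.
try:
    conn.execute("INSERT INTO Orders (quantity) VALUES (-3)")
    bad_row_allowed = True
except sqlite3.IntegrityError:
    bad_row_allowed = False
print(bad_row_allowed)  # False
```

The application still validates first for prompt user feedback; the CHECK constraint is the safety net for every path the application didn't anticipate.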
Post #786458