Is having a lookup table that is for multiple entities a common practice?

  • Doctor Who 2 - Wednesday, February 8, 2017 9:57 PM

    I've been programming and designing databases of various sorts for several years, and I've been on teams that have designed databases. Whenever we needed a lookup table for something, we would create a dedicated lookup table for it. For example, let's say we wanted a lookup table for the states of the United States. We'd create a lookup table for that, with maybe a CHAR(2) for the key and a VARCHAR(50) for the full name of the state. That's the idea I'm trying to convey.
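    Something like this, just to illustrate (table and column names off the top of my head):

    CREATE TABLE States
    (state_code CHAR(2) NOT NULL PRIMARY KEY, -- e.g. 'CO'
     state_name VARCHAR(50) NOT NULL);        -- e.g. 'Colorado'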

    At my current job my boss doesn't like doing that at all. Instead, he creates one huge, wide (as in many columns) lookup table and puts everything into it. For me, this is difficult because when I want to find something I have to remember which special value in one or two columns tells me which other column holds the key value (not the primary key, but the designation you'd use, like 2 characters for a state abbreviation) and which column holds the full name, etc.

    However, I'm wondering whether perhaps it's just me. In the past I've worked for smaller IT shops, with at most 5 people. Now I work for a large agency with over 200 IT people. Is this the way that larger organizations design their databases?

    Please post DDL and follow ANSI/ISO standards when asking for help. 

  • cautionary note: it does not pay to be right IF the boss is wrong

  • It's kind of like asking if re-using cooking oil is a common practice.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • jcelko212 32090 - Wednesday, February 15, 2017 8:34 AM

    Doctor Who 2 - Wednesday, February 8, 2017 9:57 PM

    Let me cut & paste a chapter from my book, THINKING IN SETS 😀

    04.04. OTLT or MUCK Table Problems

    I think that Paul Keister was the first person to coin the phrase "OTLT" (One True Look-up Table) for a common SQL programming technique that is popular with Newbies. Don Peterson (www.SQLServerCentral.com) gave the same technique the name "Massively Unified Code-Key" or MUCK tables in one of his articles.

    The technique crops up time and time again, but I will give Paul Keister credit as the first writer to give it a name. Simply put, the idea is to have one table to do all of the code look-ups in the schema. It usually looks like this:

    CREATE TABLE Lookups
    (code_type CHAR(10) NOT NULL,
    code_value VARCHAR(255) NOT NULL, -- notice size!
    code_description VARCHAR(255) NOT NULL, -- notice size!
    PRIMARY KEY (code_value, code_type));

    So if we have Dewey Decimal Classification (library codes), ICD (International Classification of Diseases), and two-letter ISO-3166 Country Codes in the schema, we have them all in one, honking big table.

    Let us start with the problems in the DDL and then look at the awful queries you have to write (or hide in VIEWs). So we need to go back to the original DDL and add a CHECK() constraint on the "code_type" column. Otherwise, we might "invent" a new encoding system by typographical error.

    Notice that we are already in trouble because a data element cannot be both a "<something>_code" and a "<something>_type"; it must be one or the other and it must be the code or type of some specific attribute. One of the nice features of the ISO-11179 rules is that they prevent this mixing of data and meta-data in a way that can be checked mechanically. Ignore this comment and continue heading for the edge of the cliff.

    The Dewey Decimal and ICD codes are digits and have the same format -- three digits, a decimal point and more digits (usually three); the ISO-3166 Country Codes are alphabetic. Oops, we now need another CHECK constraint that will look at the "code_type" and make sure that the string is in the right format. Now the table looks something like this, if anyone attempted to do it right, which is not usually the case:

    CREATE TABLE OTLT
    (code_type CHAR(10) NOT NULL
       CHECK (code_type IN ('DDC', 'ICD', 'ISO3166', ..)),
    code_value VARCHAR(255) NOT NULL,
      CHECK
      (CASE
       WHEN code_type = 'DDC'
        AND code_value
            SIMILAR TO '[0-9][0-9][0-9].[0-9][0-9][0-9]'
       THEN 1
       WHEN code_type = 'ICD'
        AND code_value
            SIMILAR TO '[0-9][0-9][0-9].[0-9][0-9][0-9]'
       THEN 1
       WHEN code_type = 'ISO3166'
        AND code_value SIMILAR TO '[A-Z][A-Z]'
       THEN 1 ELSE 0 END = 1),
    code_description VARCHAR(255) NOT NULL,
    PRIMARY KEY (code_value, code_type));

    The "SIMILAR TO" predicate is the SQL-92 version of a regular expression parser based on the POSIX Standards, if you are not familiar with it. Since the typical application database can have dozens and dozens of codes in it, just keep extending this pattern for as long as required. Not very pretty is it? In fact, there is a good chance that you might exceed the number of WHEN clauses allowed in a CASE expression in a major corporation. That is why most OTLT programmers do not bother with this absolutely vital constraint.

    Now let us consider adding new rows to the OTLT.

    INSERT INTO OTLT (code_type, code_value, code_description)
    VALUES
    ('ICD', '259.0', 'Inadequate Genitalia after Puberty');

    and also

    INSERT INTO OTLT (code_type, code_value, code_description)
    VALUES
    ('DDC', '259.0', 'Christian Pastoral Practices & Religious Orders');

    If you make an error in the "code_type" during insert, update or delete, you have screwed up a totally unrelated value. If you make an error in the "code_type" during a query, the results could be interesting.

    This can be really hard to find when one of the similarly structured schemes has unused codes in it.
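    To make it concrete, here is a sketch of an UPDATE meant for only one of those rows; without the code_type filter it hits both:

    -- Intended to retire only the ICD code, but with no code_type filter
    -- it also overwrites the Dewey Decimal row that shares the value '259.0'.
    UPDATE OTLT
       SET code_description = 'DO NOT USE'
     WHERE code_value = '259.0';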

    The next thing you notice about this table is that the columns are pretty wide VARCHAR(n), or even worse, that they are NVARCHAR(n) which can store characters from a strange language. The value of (n) is most often the largest one allowed in that particular SQL product.

    Since you have no idea what is going to be shoved into the table, there is no way to predict and design with a safe, reasonable maximum size. The size constraint has to be put into the WHEN clause of that second CHECK() constraint between "code_type" and "code_value". Or you can live with fixed length codes that are longer (or fatally shorter) than what they should be.

    These large sizes tend to invite bad data. You give someone a VARCHAR(n) column, and you eventually get a string with a lot of white space and a small odd character sitting at the end of it. You give someone an NVARCHAR(255) column and eventually it will get a Buddhist sutra in Chinese Unicode.

    Now let’s consider the problems with actually using the OTLT in a query. It is always necessary to add the "code_type" as well as the value which you are trying to look up.

    SELECT P1.ssn, P1.lastname, .., L1.code_description
    FROM OTLT AS L1, Personnel AS P1
    WHERE L1."code_type" = 'ICD'
     AND L1.code_value = P1.disease_code
     AND ..;

    In this sample query, you need to know the "code_type" of the Personnel table disease_code column and of every other encoded column in the table. If you get a "code_type" wrong, you can still get a result.

    You also need to allow for some overhead for data type conversions. It might be more natural to use numeric values instead of VARCHAR(n) for some encodings to ensure a proper sorting order. Padding a string of digits with leading zeros adds overhead and can be risky if programmers do not agree on how many zeros to use.
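    A quick illustration of the sorting problem (made-up values, not from any real encoding scheme):

    -- As character strings, '10' sorts before '9'; the order is only right
    -- if every writer pads to the same width ('009', '010', '100').
    SELECT v.code_value
      FROM (VALUES ('9'), ('10'), ('100')) AS v(code_value)
     ORDER BY v.code_value;   -- returns '10', '100', '9'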

    When you execute a query, the SQL engine has to pull in the entire look-up table, even if it only uses a few codes. If one code is at the start of physical storage and another is at the end, I can wind up doing a lot of caching and paging. When I update the OTLT table, I have to lock out everyone until I am finished. It is like having to carry an encyclopedia set with you when all you needed was a magazine article.

    Now consider the overhead with a two-part FOREIGN KEY in a table:

    CREATE TABLE EmployeeAbsences
    (..
    code_type CHAR(3) -- min length needed
     DEFAULT 'ICD' NOT NULL
     CHECK (code_type = 'ICD'),

    code_value CHAR(7) NOT NULL, -- min length needed
    FOREIGN KEY (code_type, code_value)
     REFERENCES OTLT (code_type, code_value),
    ..);

    Now I have to convert the character types for more overhead. Even worse, ICD has a natural DEFAULT value (000.000 means "undiagnosed"), while Dewey Decimal does not. Older encoding schemes often used all 9’s for "miscellaneous" so they would sort to the end of the reports in COBOL programs. Just as there is no Magical Universal "id", there is no Magical Universal DEFAULT value. I just lost one of the most important features of SQL!
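    For contrast, here is a sketch (my names, not from the book) of the separate-table version, which gets the single-column FOREIGN KEY and the natural DEFAULT back:

    CREATE TABLE ICD_Codes
    (icd_code CHAR(7) NOT NULL PRIMARY KEY
       CHECK (icd_code SIMILAR TO '[0-9][0-9][0-9].[0-9][0-9][0-9]'),
     icd_description VARCHAR(100) NOT NULL);
    -- assumes a row ('000.000', 'undiagnosed') exists for the DEFAULT below

    CREATE TABLE EmployeeAbsences
    (emp_nbr INTEGER NOT NULL,
     absence_date DATE NOT NULL,
     icd_code CHAR(7) DEFAULT '000.000' NOT NULL -- natural "undiagnosed" default
       REFERENCES ICD_Codes (icd_code),
     PRIMARY KEY (emp_nbr, absence_date));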

    I am going to venture a guess that this idea came from OO programmers who think of it as some kind of polymorphism done in SQL. They say to themselves that a table is a class, which it is not, and therefore it ought to have polymorphic behaviors, which it does not.

    Please post DDL and follow ANSI/ISO standards when asking for help. 

  • jcelko212 32090 - Wednesday, February 15, 2017 3:43 PM

    jcelko212 32090 - Wednesday, February 15, 2017 8:34 AM

    Doctor Who 2 - Wednesday, February 8, 2017 9:57 PM

    Joe, you've taught me something new. Granted, I would never use an OTLT, but I'd never heard the term before, so thank you for teaching me something new. I see your point about not using them, but in your OTLT table, you created a 10 + 255 byte primary key. In SQL Server, unless you create a separate clustered index, the primary key also becomes the clustering key, and this one is huge. I've seen the kind of damage this can do when inserting data and pages start splitting all over the place.

    There are good practices to implement in a clustering key: narrow, non-volatile, unique and ever-increasing.  I like to add fixed-width as well.
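    If you were stuck with that composite key in SQL Server, one option, just a sketch (and not an endorsement of building the OTLT at all), is to keep the PRIMARY KEY for integrity but move the clustering to a narrow surrogate:

    CREATE TABLE OTLT
    (otlt_id INT IDENTITY(1,1) NOT NULL, -- narrow, fixed-width, ever-increasing
     code_type CHAR(10) NOT NULL,
     code_value VARCHAR(255) NOT NULL,
     code_description VARCHAR(255) NOT NULL,
     CONSTRAINT PK_OTLT PRIMARY KEY NONCLUSTERED (code_value, code_type));

    CREATE UNIQUE CLUSTERED INDEX CIX_OTLT_otlt_id ON OTLT (otlt_id);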

    WOW Joe, I really like your analysis from what you quoted from your book! Let me say quickly that I didn't post any DDL simply because I felt my question was more one of principle rather than specific to an individual case. Basically, what I was asking is whether what I've since learned is called a One True Lookup Table is something that some companies use or not. Should I just accept this as something that large companies/agencies do? Is this something I should get used to working with?

    However, since you've asked, I'll give the DDL for the OTLT we're using here. It's been a while since I've looked at it, and I now realize it isn't as wide, in terms of the number of columns, as I had thought. I'm going to name the table OurBigTable. This is everything, including the calls to sp_addextendedproperty, which give some help in understanding what the various columns mean, depending upon other column values.

    CREATE TABLE [OurBigTable](
     [ID] [bigint] IDENTITY(1,1) NOT NULL,
     [Group_Type] [varchar](40) NOT NULL,
     [Group_Name] [varchar](255) NOT NULL,
     [Item_Name] [varchar](40) NOT NULL,
     [Item_Order] [int] NULL,
     [Owner_Name] [varchar](80) NULL,
     [Blob_Value] [binary](4096) NULL,
     [Boolean_Value] [bit] NULL,
     [Char_Value] [varchar](100) NULL,
     [DateTime_Value] [datetime] NULL,
     [Longint_Value] [bigint] NULL,
     [Text_Value] [text] NULL,
     [XML_Value] [text] NULL,
     [Active] [bit] NOT NULL,
     [Created_By] [varchar](25) NOT NULL,
     [Created_On] [datetime] NOT NULL,
     [Updated_By] [varchar](25) NULL,
     [Updated_On] [datetime] NULL,
     CONSTRAINT [Key1] PRIMARY KEY CLUSTERED
    (
     [ID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Automatically generated unique numeric identifier for the table. Primary Key.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'ID'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Top level of the hierarchy of reference items. A common Group Type is "Core." Another common type is "User."' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'Group_Type'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Second level of the hierarchy. Names the specific group. Common examples include "Versioning" for Core, and "Preferences" for Users.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'Group_Name'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Third level of the hierarchy. Denotes a specific item in the list defined by the Group Type and Group Name. Examples would be "Core Code Version" or "DefaultWindowColor."' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'Item_Name'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Order of the item if a group of items need to be presented or utilized in a sorted order.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'Item_Order'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Allows values to be associated with a specific application function or user account.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'Owner_Name'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Value if the item is a binary type, such as an icon.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'Blob_Value'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Value if the item is a True/False type.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'Boolean_Value'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Value if the item is a Character type.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'Char_Value'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Value if the item is a DateTime type.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'DateTime_Value'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Value if the item is a numeric type.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'Longint_Value'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Value if the item is a Text type.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'Text_Value'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Typically used to store xml configuration blocks for specific user accounts.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'XML_Value'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'True if the item is currently in use. False if the item is no longer in use but has been retained for data integrity.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'Active'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'AD Name of the user that created the record.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'Created_By'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Date-Time stamp of when the record was added to the database.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'Created_On'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'AD Name of the user that last modified the record.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'Updated_By'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Date-Time stamp of when the record was modified.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable', @level2type=N'COLUMN',@level2name=N'Updated_On'

    EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'The Core Reference table manages data that supports base operations such as coordinating database upgrades due to version changes, storing single-use data such as that displayed in an About dialog, and so forth. An example would be the Core Code Version. This would have a Group Type of "Core," a Group Name of "Versioning," an Item Name of "Core Code Version," and an Alpha Value of "1.0.13." If the corresponding data version has an Item Name of "Core Data Version" and an Alpha Value of "1.0.11," then the system would need to run the methods "UpgradeTo1.0.1.12" and then "UpgradeTo1.0.1.13" to alter tables and/or load appropriate data for the upgrade.' , @level0type=N'SCHEMA',@level0name=N'Core', @level1type=N'TABLE',@level1name=N'OurBigTable'


    The first thing I want to direct your attention to is the definition of the primary key, ID. Notice that it is a BIGINT, not an INT, which suggests that this table could potentially get very large.

    I believe I mentioned early on that I've been working with this table for over a year. However, I've never worked at a place where there might be only one table for lookups. Like Ed said, I'd never even heard of a One True Lookup Table (OTLT) until people started replying to this thread. And although I knew that we differentiated what sort of data was in this table based upon the value stored in the Group_Type column, it just didn't dawn on me what we were doing until a few weeks ago. I felt like Emmet in "The LEGO Movie," who realized for the first time the threat Lord Business represented: "He's great! He makes all of the history books, the news broadcasts, the voting machines... wait a minute." I kept expecting to see what I'm used to seeing for lookup tables, but they were never introduced. Then I had the epiphany that OurBigTable is meant to be for all lookups and more. We're using OurBigTable in 4 different databases at this point. Here are the unique values stored in the table for Group_Type:

    test
    Queries
    ProductViewModel.setmanagement
    TemplateViewModel.setmanagement
    OrderViewModel.setmanagement
    Data Dictionary
    Landing Page
    Config
    Reports
    CustomerViewModel.setmanagement
    settings

    So let me take settings, for example. In some records (about 400 at this point) the value stored in XML_Value is the set of columns shown in a datagrid, per user: things like sort order, what's seen/hidden, etc. Also stored here are configuration settings for where the application is supposed to open the next time the user gets into the application. I know that there's also a "Config" value in Group_Type, but well... I guess we just have that mean the same thing. Fortunately for us, out of the tens of thousands of records in OurBigTable, only 2 records have a Group_Type of "Config". Anyway, records with a Group_Type of "settings" or "Config" are meant to be changed frequently. Likely very frequently.

    If Group_Type is "Data Dictionary", then it's a list of all of the fields in the other tables and whether they're required or not.

    If the Group_Type is "Queries" and Item_Name is the name of one of the other tables like "Customer", then the Char_Value can be used for dropdowns. So there's your lookups. These won't change much.
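    For illustration, loading one of those dropdowns ends up looking something like this (the exact filter values are my guess from the column descriptions above):

    SELECT Char_Value
    FROM OurBigTable
    WHERE Group_Type = 'Queries'
      AND Item_Name = 'Customer'
      AND Active = 1
    ORDER BY Item_Order;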

    Anyway, you get the idea. OurBigTable is meant to hold just about everything that isn't something like Customer or Order data. It's more than an OTLT.

    Kindest Regards, Rod Connect with me on LinkedIn.

  • As said before, the OTLT (or anything similar) is a bad design.
    However, it shouldn't be confused with a "Junk Dimension" when building a data warehouse. The junk dimensions are actually a good design practice.

    Luis C.
    General Disclaimer:
    Are you seriously taking the advice and code from someone from the internet without testing it? Do you at least understand it? Or can it easily kill your server?

    How to post data/code on a forum to get the best help: Option 1 / Option 2
  • However, it shouldn't be confused with a "Junk Dimension" when building a data warehouse. The junk dimensions are actually a good design practice.   

    Agreed on both points.

    Wow, this brings back memories that I had blocked out. I'm trying to remember using this. I do know it was around 25 years ago and it was my first programming job. The system consisted of many VSAM files and IMS data files (I think that is what they were called; the parent/child concept, where you had to locate the parent and read each child and their children until you found what you needed).

    Anyway, I think this is what your boss is used to and all he knows. It might be a "you can't teach an old dog new tricks" problem.

    Good Luck!!

    We have an OTLT where I work. From what I understand, they used to have multiple lookup tables (and we still have views that segment the OTLT back into separate lookup tables). The reason for consolidating was that it is easier to maintain. Yeah, right.
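    The views are basically one WHERE clause per code type, something like this (names made up for illustration; our actual column names differ):

    CREATE VIEW CountryCodes
    AS
    SELECT code_value AS country_code,
           code_description AS country_name
    FROM OTLT
    WHERE code_type = 'ISO3166';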

  • I ran into 2 of these recently.

    One, in the company I work for, is partially being rewritten.  The problem is that there are 2 codes, call them C and R, that are paired together, and they come in as pairs on a spreadsheet from a government contractor.  But the OTLT is storing each of them individually and then trying to map them back together into their pairs.  It wasn't working well.

    The other is in the parent organization that owns our company.  I will need to download data from them.  I just began looking into it as a new project.  But I saw they have some funky 10-character keys (yes, character) and somehow the OTLT has unique keys for all the data in it - whether it is holding language spoken, specialty, assignments, address types, department - whatever.  Some of the views they've given me need to join to the table 2 or 3 times to retrieve various items.  I don't know yet if I'm going to pull the data tables directly, or simply use the views and hope that they've maintained it correctly.
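    To give a feel for it, here is a sketch of what those views look like (table and column names are my guesses, not their actual schema), joining the OTLT once per encoded column:

    SELECT pr.provider_id,
           lang.code_description AS language_spoken,
           spec.code_description AS specialty,
           addr.code_description AS address_type
    FROM Providers AS pr
    JOIN OTLT AS lang
      ON lang.code_value = pr.language_code
    JOIN OTLT AS spec
      ON spec.code_value = pr.specialty_code
    JOIN OTLT AS addr
      ON addr.code_value = pr.address_type_code;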

  • Luis Cazares - Thursday, February 16, 2017 12:20 PM

    As said before, the OTLT (or anything similar) is a bad design.
    However, it shouldn't be confused with a "Junk Dimension" when building a data warehouse. The junk dimensions are actually a good design practice.

    I've never heard of a "Junk Dimension" before. Interesting.

    Kindest Regards, Rod Connect with me on LinkedIn.

  • Lynn Pettis - Thursday, February 16, 2017 7:32 PM

    We have an OTLT where I work. From what I understand, they used to have multiple lookup tables (and we still have views that segment the OTLT back into separate lookup tables). The reason for consolidating was that it is easier to maintain. Yeah, right.

    This is actually one of my other concerns. Right now we're working on 4 different software solutions, all following this OurBigTable (as I called it yesterday) pattern. How, and by whom, is that data, whether it's a lookup item, configuration settings, color schemes or whatever, going to be put into OurBigTable?

    Kindest Regards, Rod Connect with me on LinkedIn.

  • When I think of a "lookup table", it's usually not something that models a complete entity, but rather just ID and Description.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • When I think of a "lookup table", it's usually not something that models a complete entity, but rather just ID and Description.                       

    Agreed.  I still don't see what is gained by overcomplicating things.
