Too Much Data

  • Practically all of the really large databases I've worked with in the past could have benefitted from better normalization and data type usage. For the most part, I think that poor data modeling is the primary problem. Many of the data modeling decisions that developers are making when designing data warehouses actually result in worse (not better) performance.

    For example, I've seen 'Person' tables that contain the full address and multiple phone numbers. Do your research before deciding to denormalize a table for performance reasons.
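
    As a minimal sketch (the table names are made up), moving repeating groups like phone numbers into a child table keeps the Person row narrow and avoids the Phone1/Phone2/Phone3 pattern:

    CREATE TABLE dbo.Person
    (
        PersonID  INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        FirstName NVARCHAR(50) NOT NULL,
        LastName  NVARCHAR(50) NOT NULL
    );

    CREATE TABLE dbo.PersonPhone
    (
        PersonID    INT     NOT NULL REFERENCES dbo.Person (PersonID),
        PhoneType   TINYINT NOT NULL,  -- e.g. home, work, mobile
        PhoneNumber VARCHAR(25) NOT NULL,
        CONSTRAINT PK_PersonPhone PRIMARY KEY (PersonID, PhoneType)
    );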

    I've seen tables containing various integer columns where the data types are all an 8-byte BigInt. For example: Sex BigInt, MaritalStatus BigInt, etc. The guy who did this explained the reasoning as follows: "because SQL Server is running on a 64-bit operating system, it's more efficient to use 64-bit integers". It was a specious claim that couldn't be proven, and even if it were marginally true, the data pages from this table were still consuming more I/O and memory.
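
    To put rough numbers on it, here's a minimal sketch (the table and column names are hypothetical) comparing the per-row cost of the two approaches; a TinyInt covers 0-255, which is plenty for lookup codes like Sex or MaritalStatus:

    -- Hypothetical "everything is a BigInt" design: 8 bytes per column
    CREATE TABLE dbo.Person_Wide
    (
        PersonID      BIGINT NOT NULL,  -- 8 bytes
        Sex           BIGINT NOT NULL,  -- 8 bytes to store 0, 1 or 2
        MaritalStatus BIGINT NOT NULL   -- 8 bytes for a handful of codes
    );

    -- Same columns sized to the data they actually hold
    CREATE TABLE dbo.Person_Narrow
    (
        PersonID      INT     NOT NULL, -- 4 bytes; good to ~2.1 billion
        Sex           TINYINT NOT NULL, -- 1 byte; 0-255
        MaritalStatus TINYINT NOT NULL  -- 1 byte; 0-255
    );
    -- 24 bytes vs. 6 bytes of fixed-width data per row before any indexes;
    -- multiply by a few hundred million rows and that's a lot of extra
    -- pages to read and keep in memory.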

    Another big one is date/time values stored in VarChar columns, which not only consume more resources but are also problematic in terms of performance and data quality.
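
    A quick sketch of why (hypothetical tables): a VarChar "date" happily accepts garbage, takes more storage than a typed DATE column, and every comparison against it either relies on string ordering or forces a conversion of each row:

    -- Dates stored as strings: no validation, more bytes
    CREATE TABLE dbo.Orders_Bad
    (
        OrderID   INT         NOT NULL,
        OrderDate VARCHAR(20) NOT NULL  -- '2012-02-30' or 'N/A' fit just fine
    );

    -- Typed column: 3 bytes for DATE, validation comes for free
    CREATE TABLE dbo.Orders_Good
    (
        OrderID   INT  NOT NULL,
        OrderDate DATE NOT NULL
    );

    -- Filtering the VarChar version means converting every row before comparing
    SELECT OrderID
    FROM   dbo.Orders_Bad
    WHERE  CONVERT(date, OrderDate) >= '2012-01-01';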

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell (11/13/2012)


    Practically all of the really large databases I've worked with in the past could have benefitted from better normalization and data type usage. For the most part, I think that poor data modeling is the primary problem. Many of the data modeling decisions that developers are making when designing data warehouses actually result in worse (not better) performance.

    For example, I've seen 'Person' tables that contain the full address and multiple phone numbers. Do your research before deciding to denormalize a table for performance reasons.

    I've seen tables containing various integer columns where the data types are all an 8-byte BigInt. For example: Sex BigInt, MaritalStatus BigInt, etc. The guy who did this explained the reasoning as follows: "because SQL Server is running on a 64-bit operating system, it's more efficient to use 64-bit integers". It was a specious claim that couldn't be proven, and even if it were marginally true, the data pages from this table were still consuming more I/O and memory.

    Another big one is date/time values stored in VarChar columns, which not only consume more resources but are also problematic in terms of performance and data quality.

    I've noticed the same in my environment; even the data type of a transactional table's column was set to Numeric(11,0) or larger! When I inquired, the answer was that they thought BigInt and similar data types might not be able to serve a future load of 1-2 TB of data, which in fact isn't possible, but that was the stated reason. :crazy:
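
    For what it's worth, Numeric(11,0) is actually wider than the type it was supposed to "future-proof" against: a NUMERIC with precision 10-19 is stored in 9 bytes, while BigInt uses 8 bytes and already goes up to roughly 9.2 quintillion. A tiny sketch (hypothetical column names) of the comparison:

    CREATE TABLE dbo.Txn_Numeric (TxnID NUMERIC(11,0) NOT NULL);  -- 9 bytes per value
    CREATE TABLE dbo.Txn_Bigint  (TxnID BIGINT        NOT NULL);  -- 8 bytes, max ~9.2 quintillion
    CREATE TABLE dbo.Txn_Int     (TxnID INT           NOT NULL);  -- 4 bytes, max ~2.1 billion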

  • Save the environment and enable free compression in the standard editions.

  • Jo Pattyn (11/14/2012)


    Save the environment and enable free compression in the standard editions.

    I wish. Not sure we'll see this anytime soon.
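
    In the meantime, for anyone on an edition that does license it, turning it on is a one-liner per table or index, and there's a stored procedure to estimate the savings first. A rough sketch against a hypothetical table:

    -- Estimate what PAGE compression would save on a (made-up) table
    EXEC sp_estimate_data_compression_savings
         @schema_name      = 'dbo',
         @object_name      = 'BigWideTable',
         @index_id         = NULL,
         @partition_number = NULL,
         @data_compression = 'PAGE';

    -- Apply PAGE compression to the heap or clustered index...
    ALTER TABLE dbo.BigWideTable
        REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE);

    -- ...and to all of its nonclustered indexes
    ALTER INDEX ALL ON dbo.BigWideTable
        REBUILD WITH (DATA_COMPRESSION = PAGE);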

  • Eric M Russell (11/13/2012)


    Practically all of the really large databases I've worked with in the past could have benefitted from better normalization and data type usage.

    On the contrary, at my last company we had started to denormalize tables, using the flat-table concept in some places: removing the foreign keys and maintaining referential integrity logically in the application code. Partitioning was also being used extensively, and we moved from a mirroring environment to replication.
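
    For reference, a bare-bones sketch of the kind of partitioning mentioned above (the names and boundary dates are made up):

    -- One partition per year, partitioned on the order date
    CREATE PARTITION FUNCTION pf_OrderDate (date)
    AS RANGE RIGHT FOR VALUES ('2011-01-01', '2012-01-01', '2013-01-01');

    CREATE PARTITION SCHEME ps_OrderDate
    AS PARTITION pf_OrderDate ALL TO ([PRIMARY]);

    CREATE TABLE dbo.Orders
    (
        OrderID   INT   NOT NULL,
        OrderDate DATE  NOT NULL,
        Amount    MONEY NOT NULL,
        CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderDate, OrderID)
    ) ON ps_OrderDate (OrderDate);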

    -------Bhuvnesh----------
    I work only to learn Sql Server...though my company pays me for getting their stuff done;-)

  • Bhuvnesh (12/26/2012)


    Eric M Russell (11/13/2012)


    Practically all of the really large databases I've worked with in the past could have benefitted from better normalization and data type usage.

    On the contrary, at my last company we had started to denormalize tables, using the flat-table concept in some places: removing the foreign keys and maintaining referential integrity logically in the application code. Partitioning was also being used extensively, and we moved from a mirroring environment to replication.

    The problem with enforcing referential integrity at the application layer is the potential for bad data to fall through the cracks. For example, the primary source for data may be the application, but there may also be data originating from ETL processes. That means the data validation logic has to be coded in multiple locations and perhaps even maintained by separate developers.
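
    A declarative constraint, by contrast, rejects the bad row no matter which code path tried to insert it; a minimal sketch with made-up tables:

    CREATE TABLE dbo.MaritalStatus
    (
        MaritalStatusID TINYINT     NOT NULL PRIMARY KEY,
        Description     VARCHAR(30) NOT NULL
    );

    CREATE TABLE dbo.Customer
    (
        CustomerID      INT     NOT NULL PRIMARY KEY,
        MaritalStatusID TINYINT NOT NULL
            CONSTRAINT FK_Customer_MaritalStatus
            REFERENCES dbo.MaritalStatus (MaritalStatusID)
    );

    -- Whether this comes from the application, an ETL job, or an ad hoc
    -- script, an unknown status code is rejected by the engine:
    INSERT dbo.Customer (CustomerID, MaritalStatusID)
    VALUES (1, 99);  -- fails unless 99 exists in dbo.MaritalStatus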

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • From the article


    The solutions aren't new and innovative; they're back to basics ideas.

    It's funny (and tremendously rewarding in more ways than one) how it almost always comes back to that.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
