Fast Enough

  • I think it has more to do with the fact that our methodologies around the traditional RDBMS are slow, not the tech itself, at least for companies that aren't operating at Google scale. In other words, the turnaround time for change with an RDBMS is slower than with other technologies that adopt different methodologies for handling the data.

    For example, I adopted a document store for my data warehouse. The methodology for the warehouse didn't change, but the way we load data before it's processed by the warehouse did. This lets the business expose the data as it lands, so it can be accessed and changed before it even becomes a model. In return, the tech becomes leaner and can change at the speed of the business, unlike the warehouse, which cannot because of its methodologies.
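
    As a rough illustration of that landing-zone idea, here is a minimal sketch, assuming SQL Server 2016+ with OPENJSON; the table and document fields are hypothetical rather than the poster's actual design. Raw documents land as-is and are shredded on read, before any warehouse model exists:

    -- Hypothetical landing table: documents are stored exactly as they arrive,
    -- so they can be queried and corrected before any modelling happens.
    CREATE TABLE dbo.OrderLanding
    (
        LandingId   BIGINT IDENTITY(1,1) PRIMARY KEY,
        LoadedAtUtc DATETIME2(3) NOT NULL DEFAULT SYSUTCDATETIME(),
        RawDocument NVARCHAR(MAX) NOT NULL
            CONSTRAINT CK_OrderLanding_IsJson CHECK (ISJSON(RawDocument) = 1)
    );

    -- Expose the landed data on read: shred the JSON into columns without
    -- committing to a warehouse model yet (OPENJSON requires SQL Server 2016+).
    SELECT  l.LandingId,
            l.LoadedAtUtc,
            j.OrderNumber,
            j.CustomerId,
            j.OrderTotal
    FROM    dbo.OrderLanding AS l
    CROSS APPLY OPENJSON(l.RawDocument)
            WITH (
                OrderNumber VARCHAR(20)   '$.orderNumber',
                CustomerId  INT           '$.customerId',
                OrderTotal  DECIMAL(18,2) '$.total'
            ) AS j;

    Consumers can query the raw documents as soon as they land; the warehouse model is added later, once the shape and value of the data have settled.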

  • Shifting gears a bit, but on the same subject, the code at the following link is a prime example of why it takes so long to make changes to database code and to troubleshoot performance issues. This isn't an exception... on most forums, including this one, it's the norm, and people just continue to generate this kind of code. Want to know what the code actually does? All you have to do is read the code, right?
    https://www.sqlservercentral.com/Forums/FindPost2017981.aspx

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
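
    To make the RBAR point concrete, here is a minimal sketch contrasting row-by-row processing with the equivalent set-based statement; the table and columns are hypothetical and not taken from the linked thread:

    -- Row-by-agonizing-row: a cursor walks the table and touches one row at a time.
    DECLARE @OrderId INT;

    DECLARE order_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT OrderId FROM dbo.Orders WHERE ShippedDate IS NULL;

    OPEN order_cursor;
    FETCH NEXT FROM order_cursor INTO @OrderId;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        UPDATE dbo.Orders
        SET    Status = 'Late'
        WHERE  OrderId  = @OrderId
          AND  OrderDate < DATEADD(DAY, -30, GETDATE());

        FETCH NEXT FROM order_cursor INTO @OrderId;
    END;

    CLOSE order_cursor;
    DEALLOCATE order_cursor;

    -- Set-based: one statement describes what happens to the whole set at once.
    UPDATE dbo.Orders
    SET    Status = 'Late'
    WHERE  ShippedDate IS NULL
      AND  OrderDate < DATEADD(DAY, -30, GETDATE());

    Both versions produce the same result; the set-based statement simply says what should happen to the whole set instead of walking it a row at a time, which is usually both shorter and faster.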

  • xsevensinzx - Friday, January 25, 2019 11:46 PM

    I think it has more to do with the fact that our methodologies around the traditional RDBMS are slow, not the tech itself, at least for companies that aren't operating at Google scale. In other words, the turnaround time for change with an RDBMS is slower than with other technologies that adopt different methodologies for handling the data.

    For example, I adopted a document store for my data warehouse. The methodology for the warehouse didn't change, but the way we load data before it's processed by the warehouse did. This lets the business expose the data as it lands, so it can be accessed and changed before it even becomes a model. In return, the tech becomes leaner and can change at the speed of the business, unlike the warehouse, which cannot because of its methodologies.

    I think this nails it. For as long as I can remember, data responsibilities have been abdicated to a small technical group who try to guess at, and cater for, the organisation's requirements. This has caused an all-things-to-all-men approach which is heavyweight and slow.
    Work by Ronald Damhof may offer a way out. http://www.b-eye-network.com/blogs/damhof/archives/2013/08/4_quadrant_mode.php

    My current role involves supplying data scientists with data as fast as possible. Part of their role is to determine whether there is enough value in the data to throw a more heavyweight process at it. The thought process is that you don't need a cabinet maker to put up a garden shed.
