Converting to data-driven code

  • Comments posted to this topic are about the item Converting to data-driven code

  • I'm a little confused: shouldn't the bottom line of your code be

    and p.surname like f.surname_fix;

    rather than

    and p.surname like f.forename_fix;

    as posted? (Sketch of the full statement below.)
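
    For context, here is a minimal sketch of the full data-driven update being discussed. The fix-table name (name_fixes) is an assumption for illustration only; the fix columns and the SET/JOIN shape come from the lines quoted in this thread.

    -- Hypothetical fix table name; forename_fix / surname_fix are the
    -- columns quoted above, holding the patterns to match and blank out.
    update p
    set forename = '', surname = ''
    from sira_party_names p
    inner join name_fixes f
        on p.forename like f.forename_fix
        and p.surname like f.surname_fix;   -- the line the question is about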

  • Just a general point about data-driven code.

    This is a fairly trivial example, but some thought needs to be given to the testing and management of applications with many data-driven alternatives. (I speak as someone involved from the client side with assuring a financial application where a relatively small inconsistency in the many master files had the potential to send up to £100 million a year in the wrong direction.)

    In addition, there's plenty of room for commercial dispute about who is or was responsible for maintaining what, unless things have been bolted down fully from the start. An associated problem is that of obtaining documentation which specifies the behaviour of the system in response to changes in reference data, how fully this has been tested, and any associated boundary conditions.

    This is not to decry the value of data-driven code and applications, just a prompt to think through the technical, management and commercial issues which can arise.

  • Two things. (1) The use of UPPER indicates your database should be set to be case insensitive. (2) Creating a table doesn't get you out of testing in all three environments. Given your use of LIKE, I'd actually increase testing (see the example at the end of this post).

    The table can be replaced with a VALUES list and a CTE if you're using MS SQL Server.

    ;with bad_values as (
        select forename, surname
        from (values
              ('DAMAGED', 'RAILINGS')
            , ('DAMAGED', '')
        ) bv (forename, surname)
    )
    update p
    set forename = '', surname = ''
    from sira_party_names p
    inner join bad_values f
        on p.forename = f.forename
        and p.surname = f.surname;

    While bad_values might become a rather large list, the actual code is basically the same as yours; the main difference is that I stick with discrete values.
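
    As a hedged illustration of why LIKE calls for extra testing (the wildcard row below is hypothetical, not from the article): a single pattern containing a wildcard can silently widen the match far beyond the pairs you intended.

    -- A hypothetical fix row such as ('DAMAGED%', '%') matches every row
    -- whose forename starts with DAMAGED, whatever the surname, whereas
    -- an equality join only matches the exact pairs listed.
    select p.forename, p.surname
    from sira_party_names p
    where p.forename like 'DAMAGED%'
      and p.surname like '%';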

  • The use of UPPER comes from an implementation of this on a different database that is case sensitive (Netezza), plus a bit of (buggy) transcription. Whilst of course the CTE is far more elegant than the original code, it is nevertheless simply an improved variant of the original. The point of the article is that such coding can be made data-driven, giving the opportunity for it to be maintained by other groups and removing the burden of an urgent code release each time there is a change.

    I'd fully agree that no matter how it is implemented it must still be tested; there is 'no royal road to testing'. However, it doesn't necessarily need to be IT that does the testing. Our organisation has a dearth of good testers but some very diligent staff in the Accounts department, and it is their job to ensure that the data is delivered accurately; IT are providing a tool for them to do this.

    When validating fields with two or three values one may well do it in code; once it gets to a reasonable handful or more, it should generally go into a table that authorised users can maintain (see the sketch at the end of this post). It's also a lot more fun to implement solutions using a metadata approach, although one can go too far.

    Ultimately every design decision must be taken individually, weighing up the pros and cons of each approach. This is another tool in the box that may be relevant in some cases.
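
    To illustrate the kind of user-maintainable reference table described above (a sketch only; the table and column names are hypothetical, not from the article):

    -- Reference table that authorised users maintain directly, so adding
    -- a permitted value is a data change rather than a code release.
    create table valid_status
    (
        status_code varchar(10)  not null primary key,
        description varchar(100) not null
    );

    -- Validation becomes a lookup: report any rows whose status is not
    -- in the reference list.
    select r.*
    from party_records r            -- hypothetical source table
    left join valid_status v
        on r.status_code = v.status_code
    where v.status_code is null;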

