• I've been thinking about the original question, longevity of systems, and we have LOTS of examples of long-lived systems from various origins in this message thread. I know in my case, when I'm designing a database system, I start at the bottom with the best database schema I can produce. I test it, confirm that it does what I want and expect it to do, and revise as needed. THEN I start on the front-end user code. Though it's never easy, I find it makes the front end much more straightforward. I'm not using objects, because the environments I work in have much more straightforward records-management requirements.
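
    To make that concrete, here's a rough sketch of the kind of bottom layer I mean, with invented table names (an illustration, not any particular production design): build the schema, then prove it enforces the rules before a single line of front-end code exists.

    [code="sql"]
    -- Hypothetical two-table design: names are made up for illustration.
    CREATE TABLE dbo.Customer
    (
        CustomerCode varchar(10)   NOT NULL PRIMARY KEY,
        CustomerName varchar(100)  NOT NULL
    );

    CREATE TABLE dbo.CustomerOrder
    (
        OrderID      int IDENTITY(1,1) NOT NULL PRIMARY KEY,
        CustomerCode varchar(10)   NOT NULL
            CONSTRAINT FK_CustomerOrder_Customer
            REFERENCES dbo.Customer (CustomerCode),
        OrderDate    date          NOT NULL,
        OrderTotal   decimal(12,2) NOT NULL
            CONSTRAINT CK_CustomerOrder_Total CHECK (OrderTotal >= 0)
    );

    -- Quick sanity test: this insert should fail on the foreign key, which
    -- tells me the schema, not the front end, is enforcing the rule.
    INSERT INTO dbo.CustomerOrder (CustomerCode, OrderDate, OrderTotal)
    VALUES ('NOSUCH', '2024-10-01', 50.00);
    [/code]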

    So my thought is that with a good design you can proverbially bolt on any front end and have a usable database. Just to send a shiver down people's spines since we're approaching Halloween: if you look at COBOL, you have a file sitting on storage somewhere. Multiple programs in different languages can modify that file. Ultimately that file might migrate to DB2 or some other SQL system and become part of a larger whole. You can normalize file designs whether or not they live in a relational DB, and the systems built on them benefit.
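
    As a rough illustration of that migration, suppose the old flat file has been bulk-loaded into a staging table that mirrors the record layout (customer details repeated on every order row). Splitting it into the normalized tables from the sketch above is a couple of set-based statements; again, all the names are invented.

    [code="sql"]
    -- Hypothetical staging table that mirrors the flat record layout.
    CREATE TABLE dbo.OrderFileStaging
    (
        CustomerCode varchar(10)   NOT NULL,
        CustomerName varchar(100)  NOT NULL,
        OrderDate    date          NOT NULL,
        OrderTotal   decimal(12,2) NOT NULL
    );

    -- Each customer fact lands once, keyed by CustomerCode...
    INSERT INTO dbo.Customer (CustomerCode, CustomerName)
    SELECT DISTINCT CustomerCode, CustomerName
    FROM dbo.OrderFileStaging;

    -- ...and each order row carries only the key.
    INSERT INTO dbo.CustomerOrder (CustomerCode, OrderDate, OrderTotal)
    SELECT CustomerCode, OrderDate, OrderTotal
    FROM dbo.OrderFileStaging;
    [/code]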

    We have an abstraction available to us: with a solid data design at the bottom, we have a tremendous variety of options for manipulating it, and that can contribute to longevity. The better normalized a design is, the more easily it should, in theory, be upgradable. There's no guarantee that the front end will move to a newer OS as easily, but I think languages have always had more fluidity for feature change than the underlying datastore. Of course there are always gotchas, like the cardinality estimator change between SQL Server 2012 and 2014. (Geez, am I channeling a shade of Celko?)
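
    For anyone who hits that particular gotcha, there are a few documented knobs for keeping the old estimator's behavior while you sort out plan regressions. Treat this as a sketch: MyAppDb is a placeholder name, the database-scoped configuration and the USE HINT form only exist on SQL Server 2016 and later, so check what your build supports.

    [code="sql"]
    -- Coarse option: keep the whole database on the pre-2014 estimator
    -- (110 = SQL Server 2012 compatibility level).
    ALTER DATABASE MyAppDb SET COMPATIBILITY_LEVEL = 110;

    -- SQL Server 2016+: finer-grained switch, run in the database's context.
    ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

    -- Or per query, just for the statements that actually regressed
    -- (USE HINT arrived with 2016 SP1).
    SELECT c.CustomerName, o.OrderTotal
    FROM dbo.Customer AS c
    JOIN dbo.CustomerOrder AS o
        ON o.CustomerCode = c.CustomerCode
    OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
    [/code]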

    -----
    [font="Arial"]Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson[/font]