SQLServerCentral Editorial

Cone of Uncertainty


When we make decisions about how to build a feature in software or design an entity in a database, we are usually working with very incomplete information. We are more likely than not to be wrong in some way. It might not matter, but often we end up adjusting our code in response to feedback from other developers, QA, or maybe customers.

For database design, this can be problematic. I believe that any database code that gets to production is likely to live there for 10 years. Maybe longer. Maybe shorter if we find a way to change it, but that can be tough, especially for tables. Since other software (apps, reports, etc.) gets built on top of database structures, it becomes challenging to change them over time.

We need solid design and modeling principles we can rely on. That's good, in that it helps us produce reliable, understandable designs that our developers can count on. However, are we good at learning, growing, and changing to adapt to new requirements? Do we experiment at all? How much can we try things without deploying something we have to live with for ten years? I don't know, but I find that too many people implement structures without thinking about the future.

DevOps tries to lead us to make small changes, adjusting as needed. That is tough with database structures, but not impossible. There are patterns for evolving our databases without breaking applications, though they do require some good application practices: no SELECT *, no INSERT statements without column lists, and a few other things. These aren't hard, but they do require adherence to good data-layer practices in the other software.
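As a minimal sketch of what that looks like, consider a hypothetical dbo.Customer table (the table and column names here are illustrative, not from any particular system):

```sql
-- Hypothetical table; names are illustrative only.
CREATE TABLE dbo.Customer
(
    CustomerID   INT IDENTITY(1, 1) PRIMARY KEY,
    CustomerName NVARCHAR(100) NOT NULL
);

-- Evolve the schema: add the new column as NULLable (or with a
-- default) so existing INSERT statements keep working.
ALTER TABLE dbo.Customer
    ADD LoyaltyTier TINYINT NULL;

-- Application code that lists its columns explicitly is unaffected
-- by the new column:
INSERT INTO dbo.Customer (CustomerName)
VALUES (N'Acme Corp');

-- An explicit column list in queries (no SELECT *) means the new
-- column can't surprise a report or an app's result-set binding.
SELECT CustomerID, CustomerName
FROM dbo.Customer;
```

Because the application never depends on column position or count, the ALTER can be deployed on its own, ahead of whatever code eventually uses the new column.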

We also need to balance efficient modeling practices with the understanding that clients rarely think through all the possibilities when they describe their needs and data relationships. Often they consider the happy path and describe that well, but forget the edge cases, the exceptions, the places where strict normalization can cause problems.

I treat all requests as though they are in the cone of uncertainty. This is an art, but I approach most data specifications from a client with a large grain of salt. I assume they are thinking about only 70-80% of the cases, so I look for places where I can leave flexibility in my design.

We call ourselves software engineers and data engineers, and there is a case to be made for following good design principles across projects. However, my view is that most of our projects are groundbreaking in some way, tackling problems in new ways, and the "engineering" is balanced by a bit of art.

Approaching work with some respect for that uncertainty serves me well. Using database refactoring processes and being willing to adjust designs over time is one way DevOps has worked well for me, allowing me to slowly tailor the database to meet the demands of an application. By assuming I will need to change my data model over time, I'm more willing to do so as the need arises.
