April 16, 2025 at 12:00 am
Comments posted to this topic are about the item Database First or Application First
April 16, 2025 at 6:30 am
I would prefer DB first because getting the data model right is something that will serve the business functionality well for years and probably serve many generations of applications, not just the one that is being developed today.
One thing Kimball warned about was that modelling for a report/dashboard produces stovepipe solutions. Modelling for a business process supports many reports/dashboards, etc. I would worry, if the app went first, that the data model would be dictated by the app and would only be good for the app as it is today. The next generation of app would require throwing the DB away and starting again, with all the pain that would cause for business functions that capitalise on the longevity of the data asset.
To be frank, team dynamics come into play too. I've worked with people where communication was lively, there was mutual respect and we worked well together. We discussed things, bounced scenarios off each other all to produce a quality system we were all proud of. These are teams who think multi-dimensionally, considering how their app is impacted and impacts others. Working with such a team I would not worry about which order things were deployed in.
Then there are teams which are a team according to the org chart, not in their behaviours. They make decisions as individuals and the quality of their app reflects this. To use an analogy, if they had to assemble a machine with 9mm bolts, they'd choose a 3/8" spanner and wonder why the nuts got rounded off.
April 16, 2025 at 6:43 am
Hi all,
Our approach is quite rigid: all components (web sites, services) must first be taken offline, then any upgrade scripts are applied to the database, then the components are each upgraded and brought back online. Any failures, it all stays offline until corrected or rolled back (database restore, redeploy previous package). We have developed a custom deployer service that manages this (except the rollback, that is currently still manual).
Downtime is usually less than 2 minutes (<30 seconds on premium hardware), but occasionally longer for some DB changes, e.g. a new index on a large table ... usually those are done separately if too long, which is low-risk as they should have minimal functional impact (ignoring performance).
This is a least-risk, catch-all approach that avoids any chance of an application code change running without its database change being in place yet, or vice versa, either of which could have indeterminate results, especially in a 35-year-old system.
Perhaps as you say, a more recent 'green-fields' development may be coded to handle database changes better than ours might, but where a 2 minute down-time late in the evening is not a drama, it still seems the least risky approach.
Regards,
Dan.
April 16, 2025 at 7:48 am
I would prefer DB first because getting the data model right is something that will serve the business functionality well for years and probably serve many generations of applications, not just the one that is being developed today.
One thing Kimball warned about was that modelling for a report/dashboard produces stovepipe solutions. Modelling for a business process supports many reports/dashboards, etc. I would worry, if the app went first, that the data model would be dictated by the app and would only be good for the app as it is today. The next generation of app would require throwing the DB away and starting again, with all the pain that would cause for business functions that capitalise on the longevity of the data asset.
...
I tend to agree, though where I see problems is when the people doing the DB first rush it without modelling.
April 16, 2025 at 7:49 am
Hi all,
Our approach is quite rigid: all components (web sites, services) must first be taken offline, then any upgrade scripts are applied to the database, then the components are each upgraded and brought back online. Any failures, it all stays offline until corrected or rolled back (database restore, redeploy previous package). We have developed a custom deployer service that manages this (except the rollback, that is currently still manual)
....
Perhaps as you say, a more recent 'green-fields' development may be coded to handle database changes better than ours might, but where a 2 minute down-time late in the evening is not a drama, it still seems the least risky approach.
Regards, Dan.
I have always preferred having outage windows for maintenance, upgrades, etc. It certainly makes life simpler, and if you can do this, I think this is the best approach. Changing things live is always a riskier approach, and it requires planning and effort. If you can avoid that, you can spend more time on other things.
April 16, 2025 at 11:53 am
I too have been on a data-centric team. The key to this success was both external and internal training with management support. The data modeling was mostly complete in the design/planning phase, with a corporate model in Erwin. In the implementation phase, developers were handed a data dictionary and the data folks would work with the developers on bridging the gap between relational and object models.
Having worked with an organization for over 10 years, I have seen application systems disappear and new ones come into place with little or no change to the data model. With a single source of truth and normalization, I have witnessed high data accuracy and sub-second query response on OLTP as well as reporting systems.
To quote Stephen Covey in The 7 Habits of Highly Effective People, "Slow is Fast and Fast is Slow." Doing something right at the beginning can save an exponential amount of time in the future.
April 16, 2025 at 2:06 pm
A vote for DB-first too, as long as the changes are additions to the database, e.g. new columns or tables. As long as you provide good default values (for new columns), the app won't care that anything new exists.
When you have to delete a column or table, the app of course needs to be modified first, to prevent it from failing. You could guarantee backwards compatibility by using views that still provide the old columns/tables, with INSTEAD OF insert/update triggers to handle write access to columns that no longer exist, but this is usually too much effort for a small gain.
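A minimal sketch of that compatibility-view idea, using SQLite for illustration (the table, view, and column names here are made up, not from the thread): the new schema renames a column, while a view plus an INSTEAD OF trigger keeps the old app's reads and writes working until it is upgraded.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# New schema: the old "name" column has been renamed to "full_name".
cur.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, full_name TEXT)")

# Compatibility view exposes the old column name, so a not-yet-upgraded
# app keeps working after the DB-first deployment.
cur.execute("CREATE VIEW customer_v1 AS SELECT id, full_name AS name FROM customer")

# INSTEAD OF trigger routes the old app's inserts into the new column.
cur.execute("""
    CREATE TRIGGER customer_v1_ins INSTEAD OF INSERT ON customer_v1
    BEGIN
        INSERT INTO customer (id, full_name) VALUES (NEW.id, NEW.name);
    END
""")

# The "old" app still reads and writes through the view unchanged.
cur.execute("INSERT INTO customer_v1 (id, name) VALUES (1, 'Ada')")
row = cur.execute("SELECT name FROM customer_v1 WHERE id = 1").fetchone()
print(row[0])  # Ada
```

As the post says, this works but is extra machinery to build and retire; for most small renames it is simpler to just sequence the app change first.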
And there may be changes that need to be made at both levels at the same time, usually bigger refactorings that replace central tables or logic with something new.
God is real, unless declared integer.
April 16, 2025 at 9:02 pm
Why it matters:
That's a worst-case scenario for getting a deployment wrong.
I prefer to be explicit - store application and databases versions separately in the database ({ "app_version": 10, "db_version": 86 }). On the application side encode the application and database versions as a constant (APP_VERSION = 10, DB_VERSION = 86).
When the application starts, get the latest version record and compare it to the embedded constants. If APP_VERSION < app_version or DB_VERSION < db_version, then exit with an error.
During the release any app instantiation will fail if one side or the other has not completed. Restart the failed execution after the deployment is complete and it's off to the races.
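A minimal sketch of that startup gate (the function name and error message are illustrative, not from the post):

```python
# Versions compiled into this application build.
APP_VERSION = 10   # application version of this build
DB_VERSION = 86    # schema version this build expects

def check_versions(version_row):
    """Compare the latest version record read from the database,
    e.g. {"app_version": 10, "db_version": 86}, to the embedded constants.
    Exit with an error if either side of the deployment has moved ahead."""
    if (APP_VERSION < version_row["app_version"]
            or DB_VERSION < version_row["db_version"]):
        raise SystemExit("version mismatch: refusing to start until deployment completes")
    return True

# Matching versions: the instance starts normally.
check_versions({"app_version": 10, "db_version": 86})
```

Mid-release, an instance seeing a newer `app_version` or `db_version` in the database simply refuses to start; restarting it after the deployment finishes clears the check.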
100+ deployments over three years without a deployment related error...
April 17, 2025 at 3:24 pm
It's easier to rollback an application deployment compared to a database change deployment, so I would say deploy the application first.
That way, if the application deployment fails (broken links to VCS repository, etc.), just re-deploy the previous release and you're back in business without all the muck of restoring the database.
But really, what gets deployed, when and how is up to the application team. Only rarely does the DBA get involved in deploying changes to the database, at least in my own observable universe.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho