There is a temptation, especially in technology, to mistake momentum for maturity.
Dario Amodei’s “The Adolescence of Technology” frames this moment in AI perfectly. We are not at the end of the story, nor at the beginning. We are in the awkward, volatile middle: a phase marked by enormous capability, uneven judgment, and an uncomfortable gap between power and wisdom.
As technologists, this is where our responsibility becomes clearest.
Acceleration Isn’t New, but the Slope Hurts
I’ve been writing about databases and data platforms long enough to remember when SQL itself was considered a limiting factor. Joins were expensive, indexes were misunderstood, and storage was precious. We argued endlessly about normalization versus performance, about what could be done versus what should be done.
Over time, those limitations disappeared, not because we stopped caring, but because we built better abstractions, better engines, and better discipline. What once required heroic effort simply became routine. AI is following that same pattern, but on a dramatically steeper curve.
The jump from rule-based systems to statistical learning to foundation models took a few years, not decades. The acceleration is not just technical; it’s cultural. The time between “this is impossible” and “this is everywhere” has completely collapsed. That compression is why bluntness matters now more than ever.
Marketing Has No Place in Moral Decisions
One of the most dangerous trends around AI is not the technology itself, but the language surrounding it.
- Disruption
- Revolution
- Unlocking value
- Monetizing intelligence
These phrases are neither new nor neutral. They are marketing buzzwords: they flatten nuance and obscure tradeoffs. In a world where ARR and end-of-quarter numbers mean everything, they encourage people to confuse revenue potential with societal benefit. As technologists, we should be allergic to this. And yes, I’ve heard that we’re all in sales, but I’m going to hold the line here and say “full stop.”
AI’s value cannot be measured only in profit. If we frame it that way, we will build systems optimized for extraction rather than contribution. History has shown us repeatedly what happens when powerful tools are guided primarily by market incentives without ethical ballast.
Capitalism is not inherently evil, but it is a blunt instrument. Left unchecked, it optimizes for scale, dominance, and speed. AI, when paired exclusively with those incentives, risks amplifying exactly the behaviors we already struggle to control.
Intention Is the Hidden Multiplier
We talk endlessly about models, parameters, and compute, yet we rarely speak about intention, even though it may be the most important variable of all. AI reflects the goals we set for it, and goals paired with intention decide the outcome. If we aim AI at:
- efficiency alone, we will get systems that discard context and humanity.
- power, we will get systems that centralize control.
- revenue above all else, we will get systems that quietly externalize harm.
One of my fundamental values, no matter if it is in technology or in life, is simple:
Do no harm.
That principle cannot be bolted on later. It must be embedded in design from the moment of a technology’s conception. And yes, it must sometimes slow us down. Optimization without restraint is how destabilization happens.
The Real Risk Is Misguided Success
The most sobering possibility is not that AI fails, but that it succeeds in the wrong direction. We may end up with systems that:
- generates enormous profit while eroding trust.
- automates decision-making while absolving humans of accountability.
- concentrates knowledge and power while claiming neutrality.
These outcomes won’t come from malicious intent alone. They will come from misguided priorities and from confusing growth with goodness, adoption with alignment. We’ve seen this movie before in smaller forms: social platforms optimized for engagement, algorithms tuned for outrage, systems that technically worked while socially unraveling their environments.
AI simply raises the stakes.
Optimism, Grounded in Responsibility
Despite all of this, I remain deeply optimistic. Not because AI is magical (though it often feels that way), but because technologists still have agency. We are not passive observers of this acceleration. We are its stewards. The choices we make about transparency, governance, deployment, and limitation matter just as much as the models we train.
The most important takeaway is that optimism does not mean blind faith. It means believing we can do better and refusing to hide behind euphemisms when we fall short. It means building AI backed by governance and policy from the beginning, so that we understand the guardrails that catch us when intentions go awry. As Amodei stated, the adolescence of technology is a dangerous phase precisely because it feels unstoppable. But adolescence is also when values are formed, challenged, and tested.
AI will reflect who we are, not just who we market ourselves to be.
The question is whether we are brave enough to be honest about that, and disciplined enough to build systems that serve more than balance sheets.