In his book The Coming Wave, Mustafa Suleyman, the CEO of Microsoft AI, laid out the risks of AI bluntly. “These tools will only temporarily augment human intelligence. They will make us smarter and more efficient for a time, and will unlock enormous amounts of economic growth, but they are fundamentally labor-replacing,” he wrote. Suleyman advocated for regulatory oversight and other government interventions, such as new taxes on autonomous systems and a universal basic income, to prevent a socioeconomic collapse. The book was published before he joined Microsoft.
Satya Nadella is more optimistic than his new deputy. In an interview at Microsoft headquarters, with his chief of staff beside him typing notes on her tablet, Nadella said his Copilot assistants wouldn't replace his human assistant. Still, he acknowledged that AI will cause “hard displacement and changes in labor pools,” including at Microsoft. Judson Althoff, Microsoft's chief commercial officer, said Nadella was pressing his team to find ways to use AI to increase revenue without adding headcount.
Microsoft has already reduced its workforce considerably in 2025, cutting more than 9,000 jobs earlier this year, though Nadella says some hiring may resume in the future. He contends that AI could end up delivering more societal benefits than the Industrial Revolution did. “When you create abundance,” Nadella said, “then the question is what one does with that abundance to create more surplus.”
As I discuss AI with different people, I get wildly different opinions. The pace of GenAI model growth over the last two years has led quite a few people to believe the technology will approach the intelligence of an average human in just a few years. That's a scary thought, and it could certainly lead a lot of executives to bet on fewer human employees and more digital ones.
However, many more people believe that GenAI models still need a lot of guidance and are best suited to partnering with humans. That's good, in a sense. If a smart or talented human can use an AI partner to get a lot done, that means we still need some humans.
Some.
That use of AI by a few talented people might also lead to a smaller workforce at many organizations. Maybe fewer humans get more done with AI, and it's possible organizations will want to make that trade. It's easy to assume we'll find things for more humans to do, but computers are incredible levers, and this worries me.
A little.
I also think there is so much work we'd like to get done but can't, at least in the technology space. We don't have enough people to do it all, so GenAI agents or partners working alongside humans might let us catch up on our backlogs.
Of course, I don't know whether all that backlogged software we want is something we actually need, whether it's good for the world, or whether it will end up putting even more people in the real world out of work.
Lots of challenges ahead. Let me know what you think.