The GenAI boom is growing like crazy. From hype to disasters to successes to investment to the embedding of GenAI tech into lots of products, it seems no one gets away from AI. My wife, kids, and friends all talk about AI, alternately giving me stories of huge successes or epic failures. Even those who just scroll through reels aren't immune, as we see amazing things but can't trust them because of AI. Who knows what image/video/audio was actually recorded and what was generated?
Like many of you, I think AI can be amazing. And like many of you, I also think it can be a really poor partner that produces output I can't trust. One of the major challenges is learning to treat an AI like a colleague whose work quality is erratic. It's not that I can't work with them and use their work, but I need to test, validate, and verify that the code they give me does what I need, at some acceptable quality level.
Microsoft is a company investing a lot in AI, and it's changing the company. Some of us might not like the direction, as it seems that AI is being pushed for the sake of AI and to generate profits for Microsoft. Or at least revenue, as I'm not sure how much profit there will be given all the compute costs of AI. However, it's certainly affecting every product development team.
I listened to a very interesting interview with Satya Nadella talking about AI, globalization, and more, including a data center tour of their new AI site in Atlanta. The data center tour with Satya and Scott Guthrie is at the beginning, and it's amazing to see. The network connections in this one data center are equivalent to all of Azure a few years ago. That's impressive, especially since they plan to link these new-generation data centers with petabit networks. For someone who grew up with 300-baud modems and then 2.5Mbps ARCNET, I can't even conceive of these speeds.
As I listened to the interview, I was skeptical of Microsoft's efforts. The hosts were as well, as they pressed Microsoft to really give them a reason why all this AI investment makes sense. The interview is long (1:27:47), but includes some interesting statements.
Satya says that AI might be the biggest thing since the Industrial Revolution. I could see that, and I'm not sure I disagree. AI tech, with its ability to lower the barriers to interacting with a computer for everyone, is incredible. It can dramatically reduce the UX issues we constantly see when developers build things that don't always make sense to users. For me, I love that it can handle my misspellings, something many traditionally coded systems cannot.
There's also a great quote that Satya uses from a CMU professor: AI is either a guardian angel or a cognitive amplifier. I think it's both, as AI is a tool, and a tool is something you can use well or not. There's a famous saying: if all you have is a hammer, everything looks like a nail. A hammer is a great tool.
Sometimes.
Sometimes it's not the right tool, and something else is needed. AI can be a great cognitive amplifier, but if you treat all problems as nails, you will let AI create a lot of problems. However, if you use it for the appropriate task, it can really help you. AI can also spot things that we humans miss. As the world gets more complex, as we deal with more things at once, and as the rate of information coming at us increases, we may (will) miss things. An AI can do a better job of catching them, just like another person might catch things you miss.
The last interesting thing is the discussion of models vs. scaffolding, which looks at what value the models themselves provide versus what comes from the scaffolding or infrastructure around them. The example uses Excel (which Satya wishes had a database behind it), but it's an interesting look at how we might get value from AI in getting tasks done and saving labor. It's worth the listen (or read the transcript).
I found myself seeing how this might benefit not only Microsoft, but perhaps the world, as other companies embrace multiple models and make it easier for more people to use AI tech. I still don't know if the ROI and costs make sense, but we will find out as the AI bubble bursts and this becomes a normal part of our lives in some way.