There is a ton of hype right now about using GenAI for various tasks, especially for technical workers. Lots of executives would like to use AI to reduce their cost of labor, whether by getting more out of their existing staff or perhaps even cutting headcount. Salesforce famously noted they weren't hiring software engineers in 2025. I'm not sure they let engineers go, but it seems they did let support people go.
For many technical people, we know the hype of a GenAI agent writing code is just that: hype. The agents can't do the same job that humans do, at least not the job that skilled humans do. We still need humans to prompt the AIs, make decisions, and, maybe most importantly, stop the agents when they're off track. I'm not sure anyone other than a trained software engineer can do that well.
I was listening to a podcast recently on software developers using AI, and there was an interesting comment: "Data beats hype every time," which is something I hope most data professionals understand. We should test our hypotheses, measure outcomes, and then decide whether to continue in our direction or rethink our approach.
Isn't that how you tune queries? You have an idea of what might reduce query time, you make a change, and you check the results. Hopefully you don't just rewrite a query using a pattern that has improved performance in the past without testing whether it helps this time. And hopefully you don't default to adding a new index (or a new key column/included column) to make a query perform better. I hope you don't do either of those last two things; measure instead, as in the sketch below.
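To make that concrete, here is a minimal sketch of the measure-change-measure loop in T-SQL. The Sales.Orders table and the index are hypothetical; the point is to capture a baseline, make one change, and compare.

```sql
-- Capture a baseline before touching anything.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT OrderID, CustomerID, OrderDate
FROM Sales.Orders
WHERE CustomerID = 42;   -- note the logical reads and elapsed time

-- Make one change: here, a covering nonclustered index.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON Sales.Orders (CustomerID)
    INCLUDE (OrderDate);

-- Re-run the same query and compare the numbers before deciding to keep the index.
SELECT OrderID, CustomerID, OrderDate
FROM Sales.Orders
WHERE CustomerID = 42;
```

If the numbers don't improve, drop the index and rethink the hypothesis. That's the experimental loop.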
AI technology can be helpful, but there needs to be some thought put into how to roll it out, how to set up and measure experiments, and how to get feedback on whether it actually produces better code and helps engineers, or whether it's just hype that isn't helping.
Ultimately, I think this is especially true for data professionals, as training models on SQL code isn't as simple or easy as it might be for Python, Java, C#, etc. For example, I find some models are biased more towards one platform (MySQL) than another (SQL Server). Your experiments should include trying a few different models and finding out which ones work well and (more importantly) which ones don't. We also need to learn where models actually produce better-performing code for our platforms.
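As a small illustration of the kind of bias I mean, a model that leans towards MySQL will happily suggest LIMIT, which isn't valid T-SQL (the Sales.Orders table here is again hypothetical):

```sql
-- What a MySQL-leaning model might produce; this fails on SQL Server:
SELECT CustomerID, OrderDate
FROM Sales.Orders
ORDER BY OrderDate DESC
LIMIT 10;

-- The T-SQL equivalent:
SELECT TOP (10) CustomerID, OrderDate
FROM Sales.Orders
ORDER BY OrderDate DESC;

-- Or the standard OFFSET/FETCH form, also valid in SQL Server:
SELECT CustomerID, OrderDate
FROM Sales.Orders
ORDER BY OrderDate DESC
OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY;
```

A syntax slip like this is easy to catch. A model quietly producing a pattern that performs poorly on your platform is not, which is why the experiments matter.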
If you're skeptical of AI, then conduct some experiments. Learn to use the tool to help you rather than replace you. Look for ways to speed up your development, or have an assistant handle tedious tasks. I have found that when I do that, I get benefits from AI that save me a bit of typing.
The advice from the Pragmatic Engineer podcast is that the best way to deal with some of the hype on AI is with data: take a structured approach to rolling it out, run plenty of A/B tests with different groups or cohorts, evaluate, and see what works well. One of the things the guest noted was that the most highly regulated and structured groups are having the most success with AI, because they're careful about the rollout and they measure everything: time spent, accuracy of tasks, and more. Then they decide where and when to use AI, which might be the best advice you get.
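If you want a starting point for that kind of measurement, here is a minimal sketch in T-SQL. The TaskResults table and the cohort names are hypothetical; record each task as it's completed, then compare the cohorts.

```sql
-- A hypothetical log of tasks completed during the experiment.
CREATE TABLE TaskResults (
    TaskID       INT IDENTITY PRIMARY KEY,
    Cohort       VARCHAR(20) NOT NULL,  -- 'AI-assisted' or 'Control'
    MinutesSpent INT         NOT NULL,
    Passed       BIT         NOT NULL   -- did the work pass review/tests?
);

-- Compare time spent and accuracy across cohorts before deciding on a wider rollout.
SELECT Cohort,
       COUNT(*)                           AS Tasks,
       AVG(MinutesSpent * 1.0)            AS AvgMinutes,
       AVG(CAST(Passed AS DECIMAL(5, 2))) AS PassRate
FROM TaskResults
GROUP BY Cohort;
```

Even a simple comparison like this beats arguing from anecdotes. Data beats hype.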