August 5, 2025 at 5:00 am
Comments posted to this topic are about the item "The Problem with AI Job Loss Headlines?"
August 5, 2025 at 7:53 am
If you search for a news article on a topic that you know a lot about, probably IT or database related (but it could be trains or early medieval history, depending on your hobbies and interests), you will probably find that it is vague, lacking in detail, misleading, or just plain wrong.
Apart from a few investigative journalists, most reporters (there will be exceptions) are not experts in the field they report on. They are generalists: they are handed a report from one of the news wires and told to write it up with a bias that reflects the political leanings of the organisation they work for, usually on rather short deadlines.
August 5, 2025 at 1:25 pm
At this point I take any "AI is going to take ALL the jobs" sort of headline with a very, very large grain of salt. Such articles typically seem to speak only to the "rah rah AI" crowd that believes AI can do all the things they claim. Which, frankly, it can't, and I don't see it doing those things any time soon, either.
The thing about AI (what people currently call AI) is that it's missing the "intelligence" part. All it can do is regurgitate what it's been trained on, making connections between parts of that data; it can't make a true "aha" leap to come up with something NEW.
In another AI discussion here, I mentioned something I had asked at a "rah rah AI" talk I listened to: whether current AI was really anything more than an improved natural-language search engine. The speaker's reply didn't entirely surprise me. Their entire answer? "Yes."
Until the programmers figure out a way to capture intuition and creativity in code, I don't see AI "taking our jobs" any time soon and I don't expect we'll see Colossus or Skynet or HAL 9000 or Holly from Red Dwarf either.
I could see "AI" being used to improve monitoring and alerting applications, log analysis, and the like, if only because it can "read" an entire log much more quickly than I can. As for monitoring system status, it can "stare" at the output of whatever monitoring is in place all day long, 24x7x365, and could be configured to notice trends in the data when deciding whether or not to alert, as in the sketch below.
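To sketch what I mean by noticing trends (this is just a toy illustration, not any particular monitoring product; the class name, window size, and thresholds are all invented):

```python
from collections import deque

class TrendAlert:
    """Alert on a sustained climb in a metric instead of waiting for a hard limit."""

    def __init__(self, window: int = 60, slope_threshold: float = 0.5):
        self.samples = deque(maxlen=window)   # most recent readings
        self.slope_threshold = slope_threshold

    def add(self, value: float) -> bool:
        """Record a reading; return True if the recent trend warrants an alert."""
        self.samples.append(value)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough history yet
        # Least-squares slope of the readings over the window.
        n = len(self.samples)
        xbar = (n - 1) / 2
        ybar = sum(self.samples) / n
        num = sum((i - xbar) * (y - ybar) for i, y in enumerate(self.samples))
        den = sum((i - xbar) ** 2 for i in range(n))
        return (num / den) > self.slope_threshold

# Example: feed it CPU readings once a minute; it fires on a sustained climb
# well before the metric would hit a fixed "alert at 90%" threshold.
monitor = TrendAlert(window=10, slope_threshold=1.0)
for cpu in [40, 41, 43, 46, 50, 55, 61, 68, 76, 85]:
    if monitor.add(cpu):
        print("trend alert: sustained climb detected")
```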
August 5, 2025 at 5:28 pm
A colleague asked where the boundary was between AI and ML. It's a good question, and I don't think there is a good answer to it.
Negativity sells newsprint. We are hardwired to respond to threats; it's a survival instinct. Unfortunately, marketers and news agencies know how to use that.
In addition to the Gartner Hype Cycle, there is also the CEO hype cycle. The latter is manna from heaven for the press. You get to sell the same story twice.
AI is a gift to the press because it is a label that covers such an immense range of things. Some of them are useful, even if they sound more like ML than AI. For example, predicting infrastructure failures and thus allowing timely fixes, neither too late nor too early.
I am getting a lot of benefit from using AI, but I am using it on my terms. I'm not getting a diktat that I MUST use AI for 60% of my work. It's a tool, albeit a clever one.
The danger I see is that the more credulous C-Suite will believe the hype and act without checking carefully. Back in the 1980s, a company bought into the entrepreneurial hype cycle and decided that anyone aged 40+ was obsolete, so they fired them. It didn't take long to find out that a huge amount of institutional knowledge and experience had been lost. Enough for the company to fail within 5 years. I can see the same thing happening with credulity and AI.
With guardrails in place, I know that the answer to a question asked of the AI has to be derived from the documents to which the AI is granted access. Those guardrails provide rules for how much leeway the AI has to assemble its response. That response won't be perfect, but if a human gave the same response, the mistakes and omissions would be tolerated and the response highly praised.
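A rough sketch of that kind of guardrail, assuming a hypothetical search_documents function and llm client (neither is a real API; the point is that the model only sees excerpts it has been granted and is told to refuse when they don't cover the question):

```python
def answer_with_guardrails(question: str, search_documents, llm) -> str:
    """Answer only from retrieved excerpts; refuse when they don't cover the question.

    `search_documents` and `llm` are stand-ins for whatever retrieval layer and
    model client you actually use; this only shows the shape of the guardrail.
    """
    excerpts = search_documents(question, top_k=5)  # only documents the AI is granted
    if not excerpts:
        return "No approved document covers this question."
    context = "\n---\n".join(excerpts)
    prompt = (
        "Answer using ONLY the excerpts below. If they do not contain the answer, "
        "say so instead of guessing.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)
```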
AI faces one huge challenge, just like every data-driven initiative before it. Generations have failed to improve data quality, so garbage in, garbage out. In some respects, it is even worse. We ask AI to try and make sense of badly written, poorly structured documents, then wonder why the results don't meet our expectations.
There is polarised thinking on AI. Somewhere, there is a happy middle ground, but I don't see people coming together to make it possible.
August 5, 2025 at 8:10 pm
My opinion: AI is great for handling tasks that are slow and error-prone for humans. Hand me a 1 GB log file and ask me to find the problem, or ask an AI to find it; I'll ask the AI and then review its findings. AI won't replace humans, but it can help them. I've used AI to convert scripts between languages, a .bat file to PowerShell for example. It got it mostly right, but it also got some bits wrong.

I was working with an identity provider and had AI help me write an IAM policy, and it got that wrong. I told the AI it was wrong; it apologized, confirmed it was wrong, and said there was no solution to my IAM policy query. I tried three different AIs with the same result, then figured it out myself. To be fair to the AIs, though, the official docs agreed with them; the docs were just wrong too.
AI is like asking someone for advice: you still have to know what to ask to get a good answer, AND you have to understand the topic you are asking about well enough to check that the result is good. If you ask AI for SQL to do something, is it T-SQL, PL/SQL, some other SQL, or some bastardization of various SQL dialects? And even if you ask for strict Microsoft SQL Server T-SQL but forget to give a version, you may end up with features you don't have.
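One way I reduce that risk is to pin the dialect and version in the prompt itself. A tiny sketch (the wording and helper function are illustrative, not a real API):

```python
def build_sql_prompt(task: str,
                     dialect: str = "T-SQL",
                     version: str = "SQL Server 2016") -> str:
    """Pin dialect and version so the model can't drift into another SQL flavour."""
    return (
        f"Write {dialect} for {version} only. "
        f"Do not use features introduced after {version}, and flag anything "
        f"you are unsure is available in that version.\n\nTask: {task}"
    )

# e.g. STRING_AGG() exists in SQL Server 2017+, so a 2016 target should avoid it.
print(build_sql_prompt("Concatenate employee names per department into one row"))
```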
AI for art, though, is a bit harder: it learns from copyrighted material and then produces output that is next to impossible for you to claim any ownership of, because someone else with a similar prompt could generate the same (or a very similar) image. With video, do you have the right to make a realistic AI movie that uses the likeness of another person without their consent? And it goes downhill from there if you don't have REALLY good filters on the AI.
The above is all just my opinion on what you should do.
As with all advice you find on a random internet forum, you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.
August 6, 2025 at 2:42 am
Good article and a good reminder to take what we read in the news media with a lump of salt.
Having said that, I still see and experience negative things concerning AI. A person at work took patient data and fed it to ChatGPT, asking it to generate a report. Clearly a terrible thing to do. However, when management learned about this, their decision was to block all AI for everyone, indefinitely. So now no one can use it.
Denying access to something that is only going to get larger and more important in our lives and careers isn't the right thing to do, either. It hurts all of us not to be able to use AI responsibly.
Rod
August 6, 2025 at 5:05 pm
We've had some success downloading LLMs into our own infrastructure, so interactions with them stay private to us. Of course, this means that tuning and configuring them to lessen hallucinations becomes our responsibility.
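As a rough sketch, here's how a locally hosted model can be queried over HTTP with a low temperature to keep answers conservative (this assumes an Ollama-style endpoint on localhost; the URL, model name, and options would change for whatever you actually run):

```python
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Query a locally hosted model; nothing leaves our infrastructure."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,
        # Lower temperature = less creative, fewer confident-sounding fabrications.
        "options": {"temperature": 0.2},
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",   # default Ollama endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```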