I'll admit that I don't know a lot about AI (Artificial Intelligence) systems and how they are built. I've been playing with them a bit and haven't been overly impressed with the results. Some of this is that my work is creative: I'm used to being creative, and I find the AIs less creative, less accurate, and in need of a lot of editing. I don't mind editing, but not if it takes longer than just writing things myself.
From my understanding, many of the models behind AI systems (chatbots, recommenders, etc.) are built with humans giving feedback on their responses in what's known as RLHF (Reinforcement Learning from Human Feedback). Essentially, paid (often low-paid) people help "guide" the AI toward responses that are useful.
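As I understand it, that human feedback is often collected as pairwise comparisons: a labeler picks the better of two responses, and a "reward model" is trained so the chosen response scores higher. Here's a toy sketch of that scoring step (a Bradley-Terry style pairwise loss); this isn't any vendor's actual pipeline, just the general idea, and the scores below are made-up numbers standing in for a real model's outputs.

```python
import math

def pairwise_loss(score_chosen: float, score_rejected: float) -> float:
    """Loss is small when the chosen response already scores higher.

    Computes -log(sigmoid(score_chosen - score_rejected)).
    """
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# The labeler preferred response A. A reward model that already ranks A
# above B gets a small loss; one that ranks B above A gets a large one,
# and training nudges the model toward the human's ranking.
good_ranking = pairwise_loss(2.0, -1.0)   # model agrees with the human
bad_ranking = pairwise_loss(-1.0, 2.0)    # model disagrees with the human
```

The point is that every one of those clicks becomes a training signal, which is why the quality of the clicks matters.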
I don't quite know what that looks like, and I certainly don't want a job doing it. Definitely not if it means staring at a lot of UIs like the one in this article. Can you imagine being paid to read things like this and then try to rank them? I can't imagine evaluators keep giving great input across the whole day. Maybe 9am-10am, but I'd bet the 4pm-5pm responses are quick clicks.
There was an article about a company trying something different: constitutional AI training. There's a better description on the Anthropic website. In this case, it seems they write down a set of principles, use limited human feedback, and then rely on an AI to give feedback to another AI. Or to itself? I have to admit that I'm not completely sure what happens here.
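As best I can tell, the loop looks something like this: the model drafts a response, a critic (another AI, or the same one) checks the draft against the written principles, and the draft gets revised before anyone trains on it. Here's a toy sketch of that loop; the principle, the keyword check, and the string-replacement "revision" are all stand-ins I made up, where a real system would use LLM calls for every step.

```python
# One made-up principle standing in for a real constitution.
PRINCIPLES = [
    "Do not include insults.",
]

def critique(draft: str) -> list[str]:
    """Return the principles the draft appears to violate (toy keyword check)."""
    violations = []
    if "idiot" in draft.lower():
        violations.append(PRINCIPLES[0])
    return violations

def revise(draft: str, violations: list[str]) -> str:
    """Toy revision: swap out the offending word (a real system re-prompts the model)."""
    return draft.replace("idiot", "person")

def constitutional_step(draft: str) -> str:
    """Draft -> critique against principles -> revise if needed."""
    problems = critique(draft)
    return revise(draft, problems) if problems else draft
```

So `constitutional_step("That idiot was wrong.")` would come back as "That person was wrong.", while a draft with no violations passes through untouched. The appeal, if I'm reading it right, is that the principles are written down where anyone can inspect them, instead of living implicitly in thousands of human clicks.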
Ultimately, I like the idea here, but I don't think a single LLM/AI model that suits every situation, or one that works in every geography, makes sense. People think differently, and cultures differ all over the world. I'd expect we might have different types of AIs for different situations or environments. The one that helps decide how to deal with nuclear safety likely needs to be different from the one governing traffic signals. I certainly don't want one TruthGPT to be the one true voice on all things.
The idea of AI systems, assistants, and more seems more complex and stranger than anything I'd have imagined from reading science fiction. As with many things, the reality is far different from my speculation about how I would respond or want a system to behave. I think that's the nature of science fiction; it picks specific situations and tailors the story to fit. The real world is much messier.
I don't know where we go from here, but I'm curious, since many of you have had more exposure to AI. Is it helping? Hurting? Useful? Are you excited or worried about the future? I'm curious what you think, mostly because I'm not sure what I think.