October 15, 2020 at 12:00 am
Comments posted to this topic are about the item The Degradation of the Turing Test
October 15, 2020 at 8:43 am
There used to be a real annual Turing Test called the Loebner Prize (see https://en.wikipedia.org/wiki/Loebner_Prize or https://www.chatbots.org/awards/loebner_prize/loebner_prize/ ). It had 4 judges, 4 AIs, and 4 "real people". Each judge would simultaneously chat with an AI and a real person and had to select which was which based purely on the chat. The contest stopped a few years ago following Dr Loebner's death and a lack of sponsorship, but during its run none of the AIs fooled the judges.
IMO the Loebner Prize was the best approach to a "real" Turing test, and it was really interesting to take part in it and see it working. It's a pity it stopped, and it would be really good if someone sponsored it again.
October 15, 2020 at 11:54 am
Actually... the ability of people to distinguish between a person and an AI is already minimal, as Google demonstrated during one of their conferences in 2018. The pauses, repetitions, and "uhms" sound very natural. This raises a lot of ethical questions. https://www.youtube.com/watch?v=D5VN56jQMWM
October 15, 2020 at 2:56 pm
From the article:
Is this AI bot intelligent?
It's a funny thing... especially the Turing Test. My answer is that if it mimics humans, it's probably not "intelligent". 😀
--Jeff Moden
Change is inevitable... Change for the better is not.
October 15, 2020 at 2:56 pm
Actually, many (if not most) of the posts in internet discussion forums or on social media by actual humans are not really conversational. Folks are triggered by a keyword in someone else's post (perhaps taken out of context), and then they reply with some loosely related drivel.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho