Who's Smarter? Humans or AI Systems?

  • David Burrows - Tuesday, August 1, 2017 1:39 AM

    David.Poole - Tuesday, August 1, 2017 12:23 AM

    I think we have to consider AI as being on a spectrum from idiot to gifted.  A self-adjusting algorithm that decides whether indexes need defragmenting sits at the idiot end.  A device that starts evolving its own language is at the other.

    https://www.techspot.com/news/70359-facebook-shuts-down-ai-system-after-invents-own.html
    :Whistling:

    That's where Microsoft went wrong; they should have given Tay a buddy to chat with.

    https://gizmodo.com/here-are-the-microsoft-twitter-bot-s-craziest-racist-ra-1766820160

  • Jeff, I think the definition of the term AI has "drifted" over time.  I don't know offhand when it was first used, but I feel that originally it was defined as an artificial, human-level intelligence.  Of course, the problem then becomes defining "intelligence."  Arguably, using the truck-bed caulking robot as an example, it is "intelligent" enough to be taught how to caulk a new truck-bed, similar to a person.  Now, can it determine the best way to caulk said truck-bed from nothing, or from previous experience?  No, but arguably neither could a human doing the same thing.

    Your line-worker would need to be shown by someone else, who was probably shown by yet another person, who was shown by the engineer who designed the bed, the best way to caulk it.

    I would say the current level of AI is equivalent to an idiot savant: incredibly good at, and focused on, one particular thing, but completely incapable of much of anything else.  In the case of your caulk-bot again, if someone steps inside the safety cage, it's not bright enough to realize this and would proceed to try to caulk the person, or clobber them as it swings around; nor is it capable of realizing it needs maintenance, so it would keep working until it tore itself apart.

    Will we ever reach an AI of the sort Jeff is thinking of?  Perhaps more to the point, will we *recognize* such an intelligence, or will it be alien enough that we won't know what we've got?  To the first, I think it's eventually possible, but certainly not within my lifetime.  To the second, I think that might be more likely, and perhaps, scarier.

  • David.Poole - Tuesday, August 1, 2017 12:23 AM

    Of course, it failed that basic "Turing Test" by asking me why I wanted to know instead of answering the question, despite the directive I gave it: "Just answer the question."

    @jeff you must be kidding.  My teenagers fail that test; does that prove they are lacking in... oh, they take after their mother.

    I think we have to consider AI as being on a spectrum from idiot to gifted.  A self-adjusting algorithm that decides whether indexes need defragmenting sits at the idiot end.  A device that starts evolving its own language is at the other.

    The problem is that marketeers have got hold of terms like "Machine Learning" and are badging everything up with the same zeal that they peeled off the "cloud enabled" stickers to replace them with "Big Data ready"!

    Someone who can play a grade-one piano piece can be considered musical, though not to the same degree as someone who can play Rachmaninoff's Piano Concerto No. 2.  That's how I think of AI.  Just because there's a lot of marketing for the equivalent of first-round X-Factor rejects doesn't invalidate the principle.

    Exactly, and especially the labels that people have thrown around.

    On the subject of AI, I think the term "Idiot Savant" is what I was talking about NOT being AI, and it is exactly the term I was looking for with respect to "single-purpose bots" that have been trained to do one thing well.  While they are certainly clever and can replace humans in the job that they do, I don't consider them to be "intelligent", artificially or otherwise.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • Jeff Moden - Monday, July 31, 2017 5:08 PM

    That's not machine learning.  That's machine training, and there's quite the difference, IMHO.  The machine did not learn on its own, so it's not AI.  It's only doing what it was programmed to do.  Part of the programming was to take additional "parameters" from humans without having to change the programming.  I think people confuse ease of use, flexibility, machine training, and some very clever programming with AI.

    They had a really cool robot at GM when I had a private tour of the production line.  It was the machine that added caulking to pickup truck beds.  It was a wonder to watch.  A truck bed would be shifted onto the stand, cameras would look at the mounting holes in the truck bed to figure out where in 3-dimensional space the bed was, and it would lay the prettiest beads of caulk you ever saw, about 100 times faster than a human could.  But that's still not AI.  It couldn't tell what kind of truck bed it was.  All that was encoded on the bed carrier stand.  If you put a new bed on the line and a human didn't put in the caulking coordinates, it couldn't figure out how to caulk the bed on its own.  Really cool robot and great programming, but not AI in my opinion.

    You're playing with semantics. No machine learns without feedback. Even in training sets, there is feedback to teach the model what is correct and what isn't. A huge part of ML, as used in the industry, is training the machine. It learns from this feedback to adapt to new situations.

  • Steve Jones - SSC Editor - Tuesday, August 1, 2017 9:54 AM

    Jeff Moden - Monday, July 31, 2017 5:08 PM

    That's not machine learning.  That's machine training, and there's quite the difference, IMHO.  The machine did not learn on its own, so it's not AI.  It's only doing what it was programmed to do.  Part of the programming was to take additional "parameters" from humans without having to change the programming.  I think people confuse ease of use, flexibility, machine training, and some very clever programming with AI.

    They had a really cool robot at GM when I had a private tour of the production line.  It was the machine that added caulking to pickup truck beds.  It was a wonder to watch.  A truck bed would be shifted onto the stand, cameras would look at the mounting holes in the truck bed to figure out where in 3-dimensional space the bed was, and it would lay the prettiest beads of caulk you ever saw, about 100 times faster than a human could.  But that's still not AI.  It couldn't tell what kind of truck bed it was.  All that was encoded on the bed carrier stand.  If you put a new bed on the line and a human didn't put in the caulking coordinates, it couldn't figure out how to caulk the bed on its own.  Really cool robot and great programming, but not AI in my opinion.

    You're playing with semantics. No machine learns without feedback. Even in training sets, there is feedback to teach the model what is correct and what isn't. A huge part of ML, as used in the industry, is training the machine. It learns from this feedback to adapt to new situations.

    Of course I'm playing on semantics.  Heh... I have to, to match your play on semantics.  😉

    Even humans don't learn without feedback.  The thing is, the machines that we've spoken of so far can't learn something that they weren't programmed to learn whereas humans and other intelligent creatures can.  To wit, machines can't solve problems unless they've been taught to do so.  Humans and other intelligent creatures can.  Your mark reading machine can't do anything but light up the WTF LEDs when it comes across a mark it doesn't recognize.  A human can figure out, "Oh... my bad... that's not actually a mark... it's just a strange sap-bleed pattern in the wood that looks like a mark".

    And THAT's the problem, IMHO.  People confuse "machine learning" (which is really just "flexible automation") with "artificial intelligence".  Most machines capable of "machine learning" are not a form of "artificial intelligence".

    --Jeff Moden



  • Jeff Moden - Monday, July 31, 2017 6:06 PM

    According to the Merriam-Webster dictionary,  my personal definition of AI is incorrect and you are correct.  Apparently, a machine doesn't actually have to have any form of even simple self-learning to be classified as "AI".  It only needs to do something that would ordinarily require a human to do.  Even "S-Voice" on my Android phone is considered to be a form of artificial intelligence.

    Guess I can add AI to my resume for the file systems that I wrote.  Technically, even a dynamic CROSS TAB would be a form of AI because it "learns" what the data looks like on its own and then programs itself to handle it.  BWAAA-HAAAA!!!!   I even have a hammer that's AI... it knows that it can pound in screws as well as nails. 😉

    https://www.merriam-webster.com/dictionary/artificial%20intelligence

    artificial intelligence

    Definition of artificial intelligence for English Language Learners

    • : an area of computer science that deals with giving machines the ability to seem like they have human intelligence

    • : the power of a machine to copy intelligent human behavior

    That definition is actually consistent with Turing's definition (the Turing test was really an imitation test).

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • OK, this is way oversimplified, but...

    ALL AI systems so far, whether neural nets, conventional program/memory, or quantum, are still algorithmic and are essentially Turing machines. Gödel demonstrated that some insights cannot be derived from algorithmic processes, and this is a leap that humans seem to be able to make (Gödel's theorem may be an example of such a jump). By comparison, AI machines are essentially pattern-matching power tools.

    ...

    -- FORTRAN manual for Xerox Computers --

  • If you're saying that a computer must duplicate a human in every way, then we're not close. We may never get there, as I'm not sure sentience is possible.

    Machines, robots, computers: they are often built for a task, without the physical capabilities to do everything a human can. However, within the domain of their task, they can learn and get better if they have what we refer to as AI/ML capabilities. We are not attempting in this space to ensure they are perfect at a task or can duplicate a human, but that they do a better job than humans over time.

    They also don't require reprogramming in AI/ML, but feedback on what's right and wrong. The code doesn't change to improve the capabilities within a domain.

    Image recognition is a space here.  Can a computer recognize images of kittens better than a human? Not in all cases, and certainly not better than most humans. However, across scale and time, the computer doesn't get tired or distracted. It can learn about things like facial measurements and do a better job in some spaces, like a casino. The machine can spot things a human wouldn't.

    In the car space, vehicles are learning to better handle most situations. Will they do better in all? No. But they will do a better job than most humans, most of the time. They will have better reactions and more consistent actions. However, that doesn't mean they're a better solution in current situations. It means they are learning and getting better, and perhaps they will be a better solution in certain situations (city centers, limited highways, etc.).

    Threat detection, fraud, these are other areas where systems can learn and do a better job than humans, perhaps a much better job. Certainly at scale. However, they also do make mistakes, and humans need to be able to review the results to determine if there are false reports (positive/negative).
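    The point made above - that AI/ML systems improve from right/wrong feedback while the code itself never changes - can be sketched with a toy perceptron in Python. This is an illustrative example of my own, not any system mentioned in the thread; only the numeric weights change in response to labeled feedback, never the program:

    ```python
    # Toy sketch of learning-from-feedback: a perceptron whose weights adapt
    # to labeled examples while the program's code stays fixed.
    # (Illustrative only; real ML systems are far more elaborate.)

    def train_perceptron(examples, epochs=20, lr=0.1):
        """examples: list of ((x1, x2), label) pairs, with label 0 or 1."""
        w1 = w2 = b = 0.0
        for _ in range(epochs):
            for (x1, x2), label in examples:
                prediction = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
                error = label - prediction          # the "right/wrong" feedback
                w1 += lr * error * x1               # only the parameters change,
                w2 += lr * error * x2               # never the code
                b  += lr * error
        return w1, w2, b

    # Teach it logical AND purely from labeled feedback.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w1, w2, b = train_perceptron(data)
    predict = lambda x1, x2: 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
    print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
    ```

    Feed it different labeled examples (say, logical OR) and the same unchanged code learns a different behavior - which is exactly the "feedback, not reprogramming" distinction.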

  • jay-h - Tuesday, August 1, 2017 2:15 PM

    OK, this is way oversimplified, but...

    ALL AI systems so far, whether neural nets, conventional program/memory, or quantum, are still algorithmic and are essentially Turing machines. Gödel demonstrated that some insights cannot be derived from algorithmic processes, and this is a leap that humans seem to be able to make (Gödel's theorem may be an example of such a jump). By comparison, AI machines are essentially pattern-matching power tools.

    I know you warned it was simplified, but I simply can't pass on this 🙂

    What Gödel showed was very specific, and attempts to apply it like this to the "computer vs. brain" discussions are highly, highly controversial, with a long line of very intelligent people debating both sides going back a long way.

    Lucas and Penrose are the two most well-known proponents of the sort of application of Gödel's findings mentioned in the quote above, but to the extent that there is any general consensus on this, it is that their arguments fail to show what they claim (fair warning: this is a general impression; I haven't actually gone through and tallied the agrees and disagrees in the literature, as if that would even mean much).

    There are several potential problems with it, a couple of the major ones being 1) that a strict requirement of Gödel's finding is that the formal system in question is consistent, and whatever system our brains implement may well not be (could be paraconsistent or goodness knows what else... Graham Priest would be so happy), and 2) for all we know, our brains DO implement some consistent formal system that is subject to Gödel's findings (i.e., there is some Gödel sentence for the system implemented by our brains, but we'll never find it; the mere fact that there are other formal systems for which we CAN identify Gödel sentences does not mean we're not implementing some system with its own Gödel sentence).

    The latter is roughly the response given by Benacerraf a long time ago, and all of this has been hotly debated in the academic world for, well, a very long time. 

    As the tone of my response probably indicates, I definitely fall in the camp that thinks using Gödel to show some qualitative difference between human and machine "intelligence" is inapt, but more intelligent people than I have argued the other side, and even this very particular topic has a sizable literature I've not read completely, so who knows 🙂

    Cheers!

  • Androids anyone?
    The Facebook story about bots inventing their own language is fake news as well. It was hyped up to suggest that someone had to pull the plug because the bots were somehow discussing things in their own language. No doubt plotting the end of mankind... mou ah ha ha. Clearly it's nonsense!

  • allinadazework - Wednesday, August 2, 2017 6:40 AM

    Androids anyone?
    The Facebook story about bots inventing their own language is fake news as well. It was hyped up to suggest that someone had to pull the plug because the bots were somehow discussing things in their own language. No doubt plotting the end of mankind... mou ah ha ha. Clearly it's nonsense!

    https://www.cnbc.com/2017/08/01/facebook-ai-experiment-did-not-end-because-bots-invented-own-language.html

  • Since the original piece was written, there's been the Microsoft AI bot that learned to express extreme right-wing views after being exposed to social media.

    Grady Booch is quite vocal on the limitations of AI.  The Boston Dynamics robots are both inspiring and scary.

    I feel that the terms Machine Learning (ML) and AI tend to get muddled up.  I've done a few courses on ML.  They range from little more than an iterative regression model that explores the permutations and combinations of the supplied parameters, to something way beyond my understanding.  To me, AI is in the league above that!

    There's also the marketing of the terms.  We used to joke that if we peeled off the Big Data sticker from a software product we'd find a "NOSQL enabled" sticker hiding the "OO DB compatible" sticker.  To a large extent I expect an AI/ML Driven sticker slapped on the same old box.

    I think what businesses need is ML rather than AI.  But what ML needs is high-quality, well-curated data, and that has been a problem for all the decades I've been working.  It shows no sign of improving.
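    For anyone unfamiliar with the "iterative regression model" end of the ML spectrum mentioned above, here is a minimal sketch in Python: fitting a straight line by gradient descent, repeatedly nudging two parameters to shrink the squared error. This is a deliberately simple illustration of my own, not taken from any particular course:

    ```python
    # Minimal sketch of iterative regression: fit y = m*x + c by gradient
    # descent on mean squared error. Each pass nudges m and c toward a
    # better fit - the "iterative" part of iterative regression.

    def fit_line(points, epochs=1000, lr=0.01):
        m = c = 0.0
        n = len(points)
        for _ in range(epochs):
            # Gradients of mean squared error with respect to m and c.
            grad_m = sum(2 * (m * x + c - y) * x for x, y in points) / n
            grad_c = sum(2 * (m * x + c - y) for x, y in points) / n
            m -= lr * grad_m
            c -= lr * grad_c
        return m, c

    # Noise-free data generated from y = 2x + 1; the fit should recover it.
    points = [(x, 2 * x + 1) for x in range(6)]
    m, c = fit_line(points)
    print(round(m, 2), round(c, 2))  # → roughly 2.0 and 1.0
    ```

    Even this toy shows the curation point: feed it a few badly mislabeled points and the recovered slope and intercept drift accordingly - garbage in, garbage out.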

  • Great point on data. That's the limiting factor in many of these technologies. Second to that, I find that the people building the models often don't have a great idea of how to ask enough questions for some of these tools to work well in an unbounded real world. In bounded situations, they can be very powerful.

     

  • An AI would have to be extremely evolved before it could follow the Three Laws. I mean, entire generations of AI implementations will be deployed in industrial or service industries which can't follow the Three Laws, because they don't know what a "human" is - much less the full range of scenarios that would harm a human.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • I can't wrap my head around how humans could create something smarter than humans.  I am very skeptical regarding the whole concept because it is bound to introduce bias by its very nature.  Intelligence is NOT artificial.

    Further, if I should ever be injured as the result of failed AI attempts and survive, I intend to become VERY wealthy as a result.

    Rick

    One of the best days of my IT career was the day I told my boss that if the problem was so simple, he should go fix it himself.

Viewing 15 posts - 46 through 60 (of 63 total)
