The Downsides of AI

  • jasona.work - Thursday, August 24, 2017 11:22 AM

    jay-h - Thursday, August 24, 2017 10:52 AM

    jasona.work - Thursday, August 24, 2017 7:19 AM

    xsevensinzx - Thursday, August 24, 2017 6:55 AM

    Good data professionals are able to fully understand what their unsupervised learning techniques are doing because the code they are using is open source.

    I think one of the points you missed in the article is, the sorts of systems the article was talking about, for want of a better term, evolve themselves.  So the programmer may know what it's doing and why initially, but after a few thousand, or million, training runs, he won't be able to say "this is the portion of the code that made it do X" anymore.  This isn't an "open source vs closed source" sort of thing, it's more a "I built a tool and now the tool has learned how to do things I didn't originally build it to do."

    You built a piano-playing robot that can be "told" by passersby whether or not they like what it's playing from its built-in library of tunes, and now it's creating its own piano concertos from whole cloth.

    One of the potential issues with this is that it can pick up 'bad habits'. If, for whatever reason, it gets a lot of data that is skewed one way or another, it starts altering its priorities. If people keep 'liking' crappy music, the robot will become a crappy music machine. One only needs to look at the disaster of 'Tay' to see what can happen. Yet if you put a lot of restrictions on what Tay or any other such system can learn, you've essentially defeated the purpose of learning.
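    (To make that feedback loop concrete, here's a minimal toy sketch -- the tune names, numbers, and update rule are all invented for illustration, not taken from any real system:)

    ```python
    # Toy preference learner: one score per tune, nudged by like/dislike feedback.
    tunes = ["concerto", "sonata", "simple_jingle"]
    score = {t: 0.5 for t in tunes}  # start with no preference
    LR = 0.1  # learning rate

    def pick():
        # Play whatever the learner currently rates highest.
        return max(tunes, key=score.get)

    def learn(tune, liked):
        # Nudge the tune's score toward the observed reaction.
        target = 1.0 if liked else 0.0
        score[tune] += LR * (target - score[tune])

    # A skewed audience that only "likes" the jingle retrains the robot.
    for _ in range(100):
        t = pick()
        learn(t, liked=(t == "simple_jingle"))

    print(pick())  # -> simple_jingle: a "crappy music machine" in miniature
    ```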

    Translation:  People can be a**holes. 😉
    Seriously though, it's in many ways no different from teaching anyone anything.  Teach a developer to use NOLOCK on every SELECT query, and it'll take a long time to break them of that habit.  Teach a child to throw a ball sidearm, and that's how they'll do it.  Arguably, the advantage to AI/ML is that you can "train" it on data for which you know what the end result is or should be (medical information, such as what the referenced article talked about), then turn it loose on data for which you don't yet know the result and see what it comes back with.
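    (A minimal sketch of that train-on-known / predict-on-unknown workflow, using scikit-learn with a stock dataset standing in for the medical data -- none of this comes from the referenced article:)

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # "Data where you know the end result": labeled historical diagnoses.
    X, y = load_breast_cancer(return_X_y=True)

    # Hold some cases back and pretend their outcomes aren't known yet.
    X_known, X_new, y_known, y_new = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_known, y_known)

    # "Turn it loose" on the unseen cases and see what it comes back with;
    # here we can also check its answers against the actual outcomes.
    print("agreement with actual outcomes:", model.score(X_new, y_new))
    ```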

    Well, no. Machines do not have our powerful brains. They mostly work the way you design them to work. That means you can break the bad habits if you go a more supervised route and say, NOLOCK IS BAD! Though that's not always the case in the AI realm, like when those Facebook bots accidentally developed their own language and started communicating with each other. :laugh:

    https://www.cnet.com/news/what-happens-when-ai-bots-invent-their-own-language/
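
    (And a standalone sketch of that "more supervised route": curator-chosen labels -- invented here, in the spirit of the toy learner above -- overriding the skewed crowd feedback:)

    ```python
    # Scores the toy learner might hold after picking up the bad habit.
    score = {"concerto": 0.1, "sonata": 0.2, "simple_jingle": 0.9}
    LR = 0.1

    def learn(tune, liked):
        score[tune] += LR * ((1.0 if liked else 0.0) - score[tune])

    # Explicit, designer-chosen labels ("NOLOCK IS BAD"), not passerby votes.
    curated = [("concerto", True), ("sonata", True), ("simple_jingle", False)]
    for _ in range(100):
        for tune, good in curated:
            learn(tune, good)

    print(max(score, key=score.get))  # -> a curator-approved tune; habit broken
    ```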

  • xsevensinzx - Thursday, August 24, 2017 8:38 PM

    One thing missing from all AI so far is actual comprehension. We have pattern matching on top of pattern matching, which from the outside looks like understanding -- but only if we allow ourselves to anthropomorphize the machinery. Tay was taught to say some really nasty things, but Tay had absolutely no comprehension of what it was saying -- unlike the people who were doing the teaching. (I suspect many of those people were primarily having fun gaming the system.)

    It is quite a leap to consider the reinforced word patterns between bots a language. There is no evidence of any actual intent to communicate information, or of any understanding of information received. But to a human it superficially looks like a language and a conversation.
    [As a side point, conversation is one of the most significant social instincts humans have. We cannot directly see into other people's minds, but through conversation we build mental models of their thoughts. This is not a trivial thing, even though for most of us it comes naturally. People with certain developmental problems simply cannot comfortably engage in conversation, even though they're fully capable of words and sentences -- conversation is far more complex than parsing language.]

    ...

    -- FORTRAN manual for Xerox Computers --

  • jasona.work - Thursday, August 24, 2017 11:22 AM

    Translation:  People can be a**holes. 😉
    Seriously though, it's in many ways no different from teaching anyone anything.  Teach a developer to use NOLOCK on every SELECT query, and it'll take a long time to break them of that habit.  Teach a child to throw a ball sidearm, and that's how they'll do it.  Arguably, the advantage to AI/ML is that you can "train" it on data for which you know what the end result is or should be (medical information, such as what the referenced article talked about), then turn it loose on data for which you don't yet know the result and see what it comes back with.

    Completely agree.  We see this all the time in the real world. Abuse children, or teach them to hate or to love, or to believe X or Y, and that's what they learn.

    This is part of the issue with the "loan application AI" system. It has plenty of prejudice and bias baked in, from the historical data and from the people who performed the original actions. As with many ML things, we need to decide how we want things to work first, then use data and algorithms to train the system.
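
    (One crude but concrete way to "decide how we want things to work first" is to measure the trained system's decisions against that standard. A hypothetical sketch -- groups, numbers, and column names all invented:)

    ```python
    import pandas as pd

    # Hypothetical loan decisions produced by a trained model.
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
    })

    # Approval rate per group: a first check for inherited bias.
    rates = decisions.groupby("group")["approved"].mean()
    print(rates)  # A: 0.75, B: 0.25

    # "Four-fifths rule"-style disparate-impact ratio; well below 0.8 here.
    print("impact ratio:", rates.min() / rates.max())  # ~0.33
    ```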
