AI Helpers or Replacements


  • Having worked a few decades in this industry as well, I have seen more roles come and go than actual jobs; i.e., somebody might stop doing X as a job, but they still have a tech job. Often people move into new positions before their existing one disappears, take on additional roles that morph out of their current ones, or have their roles temporarily absorbed by someone else before the roles disappear completely.

    I cannot see tech jobs disappearing; however, I often point out that I am this site's worst predictor.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • One new job I can think of: machine learning psychiatrist. A big category of AI, machine learning using neural networks, is really interesting because we have essentially thrown together a network of back-propagating weighted gates and then thrown training at them. It's brilliant, and it works much of the time, but we can't troubleshoot it easily (or much at all?) by examining its internal state, as it's essentially millions upon millions of values that are produced not by entering them at a keyboard, but by training the network with "input" coupled with what we want the "output" to be.

    A particularly illustrative example is training a network to categorize animals present in pictures. When shown many pictures of animals along with what they are (i.e., training the network), the network internally assigns weights in a particularly non-transparent manner that with each iteration allows the network to become better at recognizing animals. How good does this neural-network deep-learning AI get?

    http://rocknrollnerd.github.io/ml/2015/05/27/leopard-sofa.html

    So this is a new thing: we can't seem to troubleshoot these things by reading the internal state (because essentially they're just millions of "weights" attached to digital constructs that emulate neuron-like behavior; I guess, feel free to correct me there if you actually know about these things). Rather, we become behavioral psychiatrists to these silicon constructs.

    https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
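    The "weights produced by training, not by typing" point above can be made concrete with a toy sketch. This is not anything like production-scale deep learning, just a minimal 2-2-1 network trained by backpropagation on XOR; the point is that after training, the learned parameters are just opaque numbers that nothing about "reads" as the rule the network has learned.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Random starting weights: 2 inputs -> 2 hidden units -> 1 output.
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h  = [random.uniform(-1, 1) for _ in range(2)]
w_ho = [random.uniform(-1, 1) for _ in range(2)]
b_o  = random.uniform(-1, 1)

# "Input" coupled with the "output" we want: the XOR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(w_ih[j][0] * x[0] + w_ih[j][1] * x[1] + b_h[j]) for j in range(2)]
    o = sigmoid(w_ho[0] * h[0] + w_ho[1] * h[1] + b_o)
    return h, o

def total_loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

loss_before = total_loss()

lr = 0.5
for _ in range(10000):
    for x, y in data:
        h, o = forward(x)
        d_o = (o - y) * o * (1 - o)                  # error signal at the output
        for j in range(2):
            d_h = d_o * w_ho[j] * h[j] * (1 - h[j])  # propagated back to hidden layer
            w_ho[j] -= lr * d_o * h[j]
            b_h[j]  -= lr * d_h
            for i in range(2):
                w_ih[j][i] -= lr * d_h * x[i]
        b_o -= lr * d_o

loss_after = total_loss()

# The network does better now, but its "explanation" is just these numbers:
print(w_ih, b_h, w_ho, b_o)
```

    Even at this tiny scale you can see the troubleshooting problem: the only honest way to check the network is to test its behavior, not to read its state.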

  • patrickmcginnis59 10839 - Thursday, April 13, 2017 7:39 AM

    One new job I can think of, machine learning psychiatrist.
    ...

    One of the big areas seems to be medicine. Given the number of things to remember and the obscurity of many of them, I'd think Watson provides great help here, and other deep learning will as well.

  • Wow, there is a LOT that I could say in response to this, but I think I'll restrict myself to just one. At my old job we had 15 years worth of data. I urged management to do some data mining, but they never took an interest. Too bad. It was an opportunity lost.

    Kindest Regards, Rod Connect with me on LinkedIn.

  • The Electric Monk was a labour-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.

    AI is the Electric Monk.  If only we could set it to work deciphering user requirements!

  • patrickmcginnis59 10839 - Thursday, April 13, 2017 7:39 AM

    One new job I can think of, machine learning psychiatrist.
    ...

    Interesting article I saw on something similar recently (I'm at work, so I can't dig it up): researchers found they could "fool" Google's video image categorization software by inserting, roughly one frame in every couple dozen, a picture of something else.
    Thus, they could get the "AI" to categorize videos of a tiger as an Audi automobile, simply by sticking in a couple of frames of the car.  And bear in mind, it was just a *single* frame here and there, not several frames in a row.

    So I think "AI" is not, in my lifetime, likely to rise to the level it reaches in fiction, where it can make the sort of intuitive leaps organic minds seem to make. But I do think something along the lines of, say, a Terminator-style intelligence (something that could pass the Turing test but is unable to innovate) is quite possible.  Further, I do see "idiot-savant" AIs coming into use in manufacturing and even in the server room.  Manufacturing would get something capable of adapting to changing conditions (say, mining machines or construction equipment), while in the server room such AIs would handle monitoring the servers' health, both physical and application/OS.

    Imagine a system that could monitor and self-resolve some of your more common problems and is also "intelligent" enough to resolve some not-so-common problems, all without your intervention.  If a problem rises to a certain level, it instead alerts you, then learns by watching you, so that in the future perhaps you no longer need to intervene in that particular issue.
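    The self-resolve / escalate / learn-by-watching loop described above can be sketched in a few lines. All the names here (`SelfHealingMonitor`, the problem and fix strings) are hypothetical, invented for illustration; a real system would plug into actual monitoring and run real remediation, and the "learning" here is just memorization, not the adaptive intelligence the post imagines.

```python
class SelfHealingMonitor:
    def __init__(self):
        # Remedies the system already "knows" for common problems.
        self.known_fixes = {
            "disk_full": "purge_temp_files",
            "service_down": "restart_service",
        }
        self.alerts = []

    def handle(self, problem):
        """Self-resolve a known problem, or escalate an unknown one."""
        if problem in self.known_fixes:
            return self.known_fixes[problem]  # resolved without intervention
        self.alerts.append(problem)           # rises to a human
        return None

    def observe_operator(self, problem, fix):
        """Record the operator's fix so next time no intervention is needed."""
        self.known_fixes[problem] = fix


mon = SelfHealingMonitor()
first = mon.handle("cert_expired")   # unknown problem: escalates, returns None
mon.observe_operator("cert_expired", "renew_certificate")
second = mon.handle("cert_expired")  # now handled without intervention
```

    The interesting design question, which the toy dodges entirely, is how such a system would recognize that a *new* problem is "the same kind of thing" as one it watched the operator fix.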

    As for such AI "taking our careers?"
    Not gonna happen.  What's the saying?
    If you build a fool-proof widget, they'll just build a better fool?

