Human and Machine Learning

  • I know from my end, where I work primarily with data science, that we are using the methodologies and data to improve how we advertise and interact with the consumer. Through this effort, maybe we can help make finding what you want easier and make that purchasing experience more pleasant for you.

    Outside my own neck of the woods in marketing, others are going further. Baidu (like Google for China) is creating deep learning products that analyze the surroundings of a blind person: they send images up to the cloud, analyze them, and tell the person what they are seeing through a headset. This is helping blind people get around the city with the help of cloud computing and machine learning.

    I'm sure, in time, we will see many other uses that have a major impact on our lives. One in particular will surely be medical diagnosis and treatment. We have a lot of scientific data, studies, patient history and so forth that can help forecast a treatment plan, an outcome and so on. While it may not be 100% accurate, it will at least give you a good idea of what to expect and maybe what to try.
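
    A minimal sketch of the kind of loop being described: capture an image, send it to a cloud vision model, and speak the description back. All of the function names here are invented stand-ins for illustration; none of them come from Baidu's actual product or API.

    ```python
    import time

    def capture_frame():
        """Hypothetical stand-in for grabbing an image from a wearable camera."""
        return b"<jpeg bytes>"

    def describe_scene(image_bytes):
        """Hypothetical stand-in for a cloud deep-learning vision call.
        A real system would POST the image to a hosted model and get back
        a caption and/or detected objects."""
        return "A crosswalk ahead, with a car stopped on the left."

    def speak(text):
        """Hypothetical stand-in for text-to-speech played through a headset."""
        print(f"[headset] {text}")

    def assist_loop(interval_seconds=2.0, max_iterations=3):
        """Capture -> send to cloud -> describe -> speak, repeated."""
        for _ in range(max_iterations):
            frame = capture_frame()
            caption = describe_scene(frame)
            speak(caption)
            time.sleep(interval_seconds)

    if __name__ == "__main__":
        assist_loop()
    ```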

  • xsevensinzx - Sunday, February 26, 2017 12:10 AM

    I know from my end, where I work primarily with data science, that we are using the methodologies and data to improve how we advertise and interact with the consumer. Through this effort, maybe we can help make finding what you want easier and make that purchasing experience more pleasant for you.

    Outside my own neck of the woods in marketing, others are going further. Baidu (like Google for China) is creating deep learning products that analyze the surroundings of a blind person: they send images up to the cloud, analyze them, and tell the person what they are seeing through a headset. This is helping blind people get around the city with the help of cloud computing and machine learning.

    I'm sure, in time, we will see many other uses that have a major impact on our lives. One in particular will surely be medical diagnosis and treatment. We have a lot of scientific data, studies, patient history and so forth that can help forecast a treatment plan, an outcome and so on. While it may not be 100% accurate, it will at least give you a good idea of what to expect and maybe what to try.

    That actually scares the bejezus out of me.  Who's programming the computers to do medical diagnosis?  If it's derived from the same idiots that couldn't tell the difference between bronchitis caused by bugs that respond to antibiotics and the type of bronchitis caused by simple, non-severe acid reflux, or that can't tell the difference between a heart attack, a gall bladder attack, and a minor electrolyte imbalance, and that misdiagnosed me over and over, we're going to be in deep Kimchi.  It cost me my teeth and damned near my life.  They also said I had a neuropathy in my ankles and feet and wanted to start me on all sorts of drugs.  It turned out that my socks were a bit too tight.  Another incident was when I had a horrible allergic reaction to something and got what looked like 2nd degree burns in certain areas of my body.  Again, they wanted to load me up with drugs.  They couldn't figure it out.  It turned out to be because I let my whites soak in bleach too long and it formed chloramines... that's the same thing that burns your eyes in a poorly maintained swimming pool, and it'll burn the hell out of your skin if it's left in contact for a long time, like when you wear white underwear.

    Such mistakes also caused the death of my Father.  He was misdiagnosed as having a bout with pneumonia.  It turned out to be cancer, and it got to stage 4 before they finally decided to do a biopsy.  Some drugs did prolong his life for several months (Xalkori took a 6 cm tumor down to nothing in less than a week) and he was doing really well, but then he caught an infection in the hospital and that killed him.

    If you can't trust the human experts, what makes anyone think they can build a machine to do better?  Hell... I can't even find a decent DBA that knows how to get the current date and time. 🙁

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Data Scientists can do amazing things and AI has huge potential.

    My question is: of the people demonstrating great enthusiasm for AI and machine learning, how many are also proponents of the boring stuff like data quality?

  • There are opportunities for both attended and unattended applications of AI/machine learning. As previous posters have said, we have to be very careful about which applications we consider. Early knowledge-based systems targeted GP-level diagnosis; a quarter of a century later, I cannot recall hearing of that being in place anywhere. There will be valid applications. We just have to be careful both about which ones and about the controls around them.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Data scientists are still on the front lines. But by playing with machine learning for the same tasks, we can start seeing what works, what doesn't, and what's just weird.

    But I do believe it's going to take a while working in parallel before there's any confidence in the systems. That's a good thing since it gives us a chance to try to understand what's going on under the hood so we know where it adds value and where it should be actively discouraged.

  • As developers, we are consumers of Microsoft's services.  Those services are likely already in commercial products before being released generally.  My car has voice command and Bluetooth so I can do hands-free texting and calling.  GPS devices have had voice command for years.  So far, the services being offered are low-level speech-to-text, text-to-speech, grammar, and language services.  We are not being offered expert systems or advice systems.  Microsoft may offer some neural network tools, since they're already widely available through other sources.  But these come untrained... and believe me, intelligence is more about experience and context training than anything else.  Context plays a heavy role in intelligence, since there are so many more exceptions than rules to anything.
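
    A tiny illustration of the "these come untrained" point: a fresh model is essentially guessing on even a toy task until it has seen labelled examples. This is just a generic logistic-regression sketch in Python/NumPy with made-up data, not any Microsoft tooling.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task: classify points by which side of a line they fall on.
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)

    w = np.zeros(2)   # untrained weights
    b = 0.0

    def predict(X, w, b):
        return 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probability

    def accuracy(X, y, w, b):
        return np.mean((predict(X, w, b) > 0.5) == y)

    print("accuracy before training:", accuracy(X, y, w, b))  # roughly chance level

    # "Experience": simple gradient-descent training on the labelled examples.
    lr = 0.5
    for _ in range(200):
        p = predict(X, w, b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)

    print("accuracy after training:", accuracy(X, y, w, b))   # close to 1.0
    ```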

  • Jeff Moden - Sunday, February 26, 2017 11:22 PM

    Who's programming the computers to do medical diagnosis?  If it's derived from the same idiots that couldn't tell the difference between bronchitis caused by bugs that respond to antibiotics and the type of bronchitis caused by simple, non-severe acid reflux, or that can't tell the difference between a heart attack, a gall bladder attack, and a minor electrolyte imbalance, and that misdiagnosed me over and over, we're going to be in deep Kimchi.

    Back in the '80s, during the first major wave of AI research, I took a class in AI programming.  The chairman of the department greeted us on our first day and gave us an introduction to the subject.  One of the things he said was that within the community, there was a move to get away from calling the research "Artificial Intelligence."  He said that the preference at the time was to refer to it as "Knowledge Engineering."  He pointed out that, for one thing, once you say you are working on Artificial Intelligence, one of the first things people expect of you is to do something intelligent, and that's a lot of pressure.  Further, he pointed out that next they are going to expect your software to do something intelligent, and he said that the research of the day was many years away from being able to do something truly intelligent.  Unfortunately, he said, marketing people really liked the term Artificial Intelligence, and since much of the research was commercially funded, it was difficult to change.  He also added that inflating expectations this way was one of the greatest risks they faced, since once people realized the reality, it could result in all funding drying up and research being shut down.  That pretty much happened, and until the term AI started resurfacing recently, I was unaware that anyone was even still doing AI research, other than people finding new uses for neural nets in pattern recognition.

    The same risks exist in commercial applications -- inflated expectations.  I, too, am worried about things like AI-based medical diagnosis because, even though it could be a very useful tool for doctors, it runs the risk of causing doctors to give it more credit than it is due.  If a doctor simply accepts a diagnosis from a machine, out of a belief that the machine is more intelligent than he is, we could indeed be in very "deep Kimchi."

  • This kind of stuff always reminds me of the short story "The Machine That Won the War" by Isaac Asimov.  How can we ever get complete enough information for the computer to be the primary decision-making entity?  Working as an assistant to people, sure, no problem, but ultimately, at some level, people will not trust the information going into or coming out of the system, and will more likely rely on their own gut instincts and experience to make real decisions.

  • Chris Harshman - Monday, February 27, 2017 1:01 PM

    This kind of stuff always reminds me of the short story "The Machine That Won the War" by Isaac Asimov.  How can we ever get complete enough information for the computer to be the primary decision-making entity?  Working as an assistant to people, sure, no problem, but ultimately, at some level, people will not trust the information going into or coming out of the system, and will more likely rely on their own gut instincts and experience to make real decisions.

    I agree with your assessment but a lot of consideration needs to go into what is being replaced.  Right now, self-driving cars are being heavily covered in the news.  On the one hand I look at the situation and think, "I know how a lot of engineers write software -- there's no way that I want to be on the road with cars that were programmed by the  likes of them."  On the other hand, I know how a lot of people drive and I think, "We should give the self-driving cars a chance."

  • lnoland - Monday, February 27, 2017 1:14 PM

    Chris Harshman - Monday, February 27, 2017 1:01 PM

    This kind of stuff always reminds me of the short story "The Machine That Won the War" by Isaac Asimov.  How can we ever get complete enough information for the computer to be the primary decision-making entity?  Working as an assistant to people, sure, no problem, but ultimately, at some level, people will not trust the information going into or coming out of the system, and will more likely rely on their own gut instincts and experience to make real decisions.

    I agree with your assessment but a lot of consideration needs to go into what is being replaced.  Right now, self-driving cars are being heavily covered in the news.  On the one hand I look at the situation and think, "I know how a lot of engineers write software -- there's no way that I want to be on the road with cars that were programmed by the  likes of them."  On the other hand, I know how a lot of people drive and I think, "We should give the self-driving cars a chance."

    What I'd say to both of you is that we're not programming cars to work like other applications. There is no complex IF..THEN series of rules. Instead, the cars get the rules of the road and the ways in which we want to prioritize various decisions, and they work mostly off data. In most cases, they work better and make better decisions than people. There are redundancies, and, hopefully, ways in which they degrade gracefully when there are issues.

    Does that mean that the systems are perfect? No, but neither are people. People make plenty of mistakes, and they certainly struggle to repeat things consistently over and over. Look at the rates of accidents, many of them easily prevented, and think about how machine learning systems might do better. Also, these are limited domains. A computer system isn't making every decision; it's making some of them. If you doubt this works well, consider that most planes, many (maybe most) financial systems, and plenty of other systems have computers making the decisions. We are really good at this in some ways.
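
    A hedged sketch of what "rules of the road plus prioritized decisions, driven mostly by data" might look like in miniature: hard rules filter out illegal maneuvers, and a weighted cost (standing in for what a trained model or tuned planner would supply) ranks what's left. The candidate actions, rules, and numbers below are all invented for illustration.

    ```python
    # Minimal decision-ranking sketch: hard constraints first, then a
    # weighted score over the remaining candidate maneuvers.
    # All rules and numbers below are invented for illustration.

    CANDIDATES = [
        {"action": "continue",    "collision_risk": 0.30, "comfort_penalty": 0.0, "violates_rule": False},
        {"action": "brake",       "collision_risk": 0.02, "comfort_penalty": 0.4, "violates_rule": False},
        {"action": "swerve_left", "collision_risk": 0.10, "comfort_penalty": 0.6, "violates_rule": True},  # crosses a solid line
    ]

    # Priorities: safety dominates comfort.  These weights stand in for
    # whatever a learned model or calibration process would provide.
    WEIGHTS = {"collision_risk": 10.0, "comfort_penalty": 1.0}

    def choose(candidates):
        legal = [c for c in candidates if not c["violates_rule"]]  # rules of the road
        def cost(c):
            return sum(WEIGHTS[k] * c[k] for k in WEIGHTS)
        return min(legal, key=cost)

    print(choose(CANDIDATES)["action"])  # -> "brake"
    ```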

  • Jeff Moden - Sunday, February 26, 2017 11:22 PM

    xsevensinzx - Sunday, February 26, 2017 12:10 AM

    I know from my end, where I work primarily with data science, that we are using the methodologies and data to improve how we advertise and interact with the consumer. Through this effort, maybe we can help make finding what you want easier and make that purchasing experience more pleasant for you.

    Outside my own neck of the woods in marketing, others are going further. Baidu (like Google for China) is creating deep learning products that analyze the surroundings of a blind person: they send images up to the cloud, analyze them, and tell the person what they are seeing through a headset. This is helping blind people get around the city with the help of cloud computing and machine learning.

    I'm sure, in time, we will see many other uses that have a major impact on our lives. One in particular will surely be medical diagnosis and treatment. We have a lot of scientific data, studies, patient history and so forth that can help forecast a treatment plan, an outcome and so on. While it may not be 100% accurate, it will at least give you a good idea of what to expect and maybe what to try.

    That actually scares the bejezus out of me.  Who's programming the computers to do medical diagnosis?  If it's derived from the same idiots that couldn't tell the difference between bronchitis caused by bugs that respond to antibiotics and the type of bronchitis caused by simple, non-severe acid reflux, or that can't tell the difference between a heart attack, a gall bladder attack, and a minor electrolyte imbalance, and that misdiagnosed me over and over, we're going to be in deep Kimchi.  It cost me my teeth and damned near my life.  They also said I had a neuropathy in my ankles and feet and wanted to start me on all sorts of drugs.  It turned out that my socks were a bit too tight.  Another incident was when I had a horrible allergic reaction to something and got what looked like 2nd degree burns in certain areas of my body.  Again, they wanted to load me up with drugs.  They couldn't figure it out.  It turned out to be because I let my whites soak in bleach too long and it formed chloramines... that's the same thing that burns your eyes in a poorly maintained swimming pool, and it'll burn the hell out of your skin if it's left in contact for a long time, like when you wear white underwear.

    Such mistakes also caused the death of my Father.  He was misdiagnosed as having a bout with pneumonia.  It turned out to be cancer, and it got to stage 4 before they finally decided to do a biopsy.  Some drugs did prolong his life for several months (Xalkori took a 6 cm tumor down to nothing in less than a week) and he was doing really well, but then he caught an infection in the hospital and that killed him.

    If you can't trust the human experts, what makes anyone think they can build a machine to do better?  Hell... I can't even find a decent DBA that knows how to get the current date and time. 🙁

    I think you're over-generalizing here. Certainly not all experts are alike, just as most people aren't alike. Medicine is full of general protocols that don't really account for the specifics of an individual person. Doctors learn to look at xx symptoms and think it's likely yy. Only when they get more information, like a treatment not working, do they refine their differential diagnosis. Unfortunately, that can mean they sometimes make a few mistakes in a row and we suffer.

    I'd counter that machines can do better here because they don't forget, and they don't have to limit their differential to the 10 obvious things they usually see. They can think more widely, and perhaps ask more questions, because they can take in more specifics and consider more potential issues.

    Don't forget that the machines don't really work with programming, nor are they intelligent. Instead, what they start to do with machine learning is learn the things humans have learned, at a more repeatable, reliable pace. They can also process much more information, like considering more tailored drugs, than a human ever could. In many ways, we get the best of humans, in a way that retains the memory of all humans.
    Is this perfect? No. Will there be mistakes? Yes. How is that different from today? What you tend to propose, Jeff, is that we get humans perfect first, and then we consider how machines could work. Except that what we're learning is that we can let machines start to process more and more data, and mimic the best humans. We take collective knowledge, lots of it, and get the machines to synthesize it, and then test this against humans to see how much better it can be. If we got 10% better than the average doctor, but 10% worse than the best, I think it's a huge win.
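
    A hedged sketch of the "wider differential that doesn't forget" idea: a simple Bayesian update over candidate diagnoses, where each new finding re-weights every candidate instead of dropping the rare ones. The conditions, priors, and likelihoods here are made up purely for illustration.

    ```python
    # Toy Bayesian differential diagnosis: every candidate stays on the list
    # and is re-weighted as findings arrive.  All numbers are invented.

    priors = {"bronchitis": 0.55, "reflux": 0.30, "rare_condition": 0.15}

    # P(finding | condition) for each finding we might observe (invented).
    likelihoods = {
        "cough":              {"bronchitis": 0.90, "reflux": 0.60, "rare_condition": 0.50},
        "no_fever":           {"bronchitis": 0.40, "reflux": 0.90, "rare_condition": 0.70},
        "antibiotics_failed": {"bronchitis": 0.20, "reflux": 0.85, "rare_condition": 0.80},
    }

    def update(posterior, finding):
        """One Bayes step: multiply by P(finding | condition), then renormalize."""
        unnorm = {c: p * likelihoods[finding][c] for c, p in posterior.items()}
        total = sum(unnorm.values())
        return {c: v / total for c, v in unnorm.items()}

    posterior = dict(priors)
    for finding in ["cough", "no_fever", "antibiotics_failed"]:
        posterior = update(posterior, finding)
        print(finding, {c: round(p, 2) for c, p in posterior.items()})
    # The differential gradually shifts away from bronchitis as the
    # evidence accumulates, without ever discarding the other candidates.
    ```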

  • Steve Jones - SSC Editor - Monday, February 27, 2017 4:32 PM

    lnoland - Monday, February 27, 2017 1:14 PM

    Chris Harshman - Monday, February 27, 2017 1:01 PM

    This kind of stuff always reminds me of the short story "The Machine That Won the War" by Isaac Asimov.  How can we ever get complete enough information for the computer to be the primary decision-making entity?  Working as an assistant to people, sure, no problem, but ultimately, at some level, people will not trust the information going into or coming out of the system, and will more likely rely on their own gut instincts and experience to make real decisions.

    I agree with your assessment but a lot of consideration needs to go into what is being replaced.  Right now, self-driving cars are being heavily covered in the news.  On the one hand I look at the situation and think, "I know how a lot of engineers write software -- there's no way that I want to be on the road with cars that were programmed by the  likes of them."  On the other hand, I know how a lot of people drive and I think, "We should give the self-driving cars a chance."

    What I'd say to both of you is that we're not programming cars to work like other applications. There is no complex IF..THEN series of rules. Instead, the cars get the rules of the road and the ways in which we want to prioritize various decisions, and they work mostly off data. In most cases, they work better and make better decisions than people. There are redundancies, and, hopefully, ways in which they degrade gracefully when there are issues.

    Does that mean that the systems are perfect? No, but neither are people. People make plenty of mistakes, and they certainly struggle to repeat things consistently over and over. Look at the rates of accidents, many of them easily prevented, and think about how machine learning systems might do better. Also, these are limited domains. A computer system isn't making every decision; it's making some of them. If you doubt this works well, consider that most planes, many (maybe most) financial systems, and plenty of other systems have computers making the decisions. We are really good at this in some ways.

    Isn't that pretty much what I said?

    On the other hand, while people make plenty of mistakes, if a person causes multiple accidents due to persistent bad judgment we take his license away and he may face some lawsuits.  If a self-driving car is given bad judgment which causes one or more accidents, that bad judgment could be replicated thousands of times before it is stopped.  And that presumes that it is stopped -- look at the Audi 5000 fiasco.  Audi pretty much began by blaming everything on the drivers (an incompetent group apparently largely unique to Audi 5000 purchasers); then their actions suggested that Audi believed that their physical design layout might be contributing to driver error so they did a recall to make some changes to pedal positions; NHTSA suggested that it went beyond that to an actual failure which was then exacerbated by the driver's response.  To my knowledge, Audi never  admitted to a problem and it was only the many lawsuits which convinced them to do anything at all.

  • lnoland - Monday, February 27, 2017 10:15 AM

    Jeff Moden - Sunday, February 26, 2017 11:22 PM

    Who's programming the computers to do medical diagnosis?  If it's derived from the same idiots that couldn't tell the difference between bronchitis caused by bugs that respond to antibiotics and the type of bronchitis caused by simple, non-severe acid reflux, or that can't tell the difference between a heart attack, a gall bladder attack, and a minor electrolyte imbalance, and that misdiagnosed me over and over, we're going to be in deep Kimchi.

    Back in the '80s, during the first major wave of AI research, I took a class in AI programming.  The chairman of the department greeted us on our first day and gave us an introduction to the subject.  One of the things he said was that within the community, there was a move to get away from calling the research "Artificial Intelligence."  He said that the preference at the time was to refer to it as "Knowledge Engineering."  He pointed out that, for one thing, once you say you are working on Artificial Intelligence, one of the first things people expect of you is to do something intelligent, and that's a lot of pressure.  Further, he pointed out that next they are going to expect your software to do something intelligent, and he said that the research of the day was many years away from being able to do something truly intelligent.  Unfortunately, he said, marketing people really liked the term Artificial Intelligence, and since much of the research was commercially funded, it was difficult to change.  He also added that inflating expectations this way was one of the greatest risks they faced, since once people realized the reality, it could result in all funding drying up and research being shut down.  That pretty much happened, and until the term AI started resurfacing recently, I was unaware that anyone was even still doing AI research, other than people finding new uses for neural nets in pattern recognition.

    The same risks exist in commercial applications -- inflated expectations.  I, too, am worried about things like AI-based medical diagnosis because, even though it could be a very useful tool for doctors, it runs the risk of causing doctors to give it more credit than it is due.  If a doctor simply accepts a diagnosis from a machine, out of a belief that the machine is more intelligent than he is, we could indeed be in very "deep Kimchi."

    When I studied in this area in the early to mid '90s, Knowledge Based Systems and Neural Networks were treated as two distinct areas that were the proper terms under the academically unpopular, but commercially popular, umbrella of Artificial Intelligence. It was a great way to get students involved (enrolled?) and, possibly, to attract outside funding.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • "Computers make excellent and efficient servants ... but I have no desire to serve under them." Mr. Spock, The Ultimate Computer  🙂
