• Jeff Moden - Sunday, February 26, 2017 11:22 PM

    Who's programming the computers to do medical diagnosis?  If it's derived from the same idiots who couldn't tell the difference between bronchitis caused by bugs that respond to antibiotics and bronchitis caused by simple, non-severe acid reflux, or who couldn't tell the difference between a heart attack, a gall bladder attack, and a minor electrolyte imbalance (the ones who misdiagnosed me over and over), we're going to be in deep Kimchi.

    Back in the '80s, during the first major wave of AI research, I took a class in AI programming.  The chairman of the department greeted us on our first day and gave us an introduction to the subject.  One of the things he said was that, within the community, there was a move to get away from calling the research "Artificial Intelligence."  He said the preference at the time was to refer to it as "Knowledge Engineering."  He pointed out that, for one thing, once you say you are working on Artificial Intelligence, one of the first things people expect of you is to do something intelligent, and that's a lot of pressure.  Further, he pointed out, they will next expect your software to do something intelligent, and he said the research of the day was many years away from being able to do anything truly intelligent.  Unfortunately, he said, marketing people really liked the term "Artificial Intelligence," and since much of the research was commercially funded, the term was difficult to change.  He added that inflated expectations were one of the greatest risks the field faced: once people realized the reality, funding could dry up and research could be shut down.  That pretty much happened, and until the term AI started resurfacing recently, I was unaware that anyone was even still doing AI research, other than people finding new uses for neural nets in pattern recognition.

    The same risk -- inflated expectations -- exists in commercial applications.  I, too, am worried about things like AI-based medical diagnosis because, even though it could be a very useful tool for doctors, it runs the risk of doctors giving it more credit than it is due.  If a doctor simply accepts a diagnosis from a machine, out of a belief that the machine is more intelligent than he is, we could indeed be in very "deep Kimchi."