• From reading the linked article, it sounds like one of the things AI / Deep Learning is doing (in the medical examples) is finding patterns in patient records that a person would likely never find.  Patterns in phrasing, in test results, etc.  As for why a person wouldn't find them: how long would it take you to look through the records for 700K patients?  Would you remember enough detail from patient #257's records to realize there's a similarity to patient #401567?  I wouldn't.  But a machine, a computer, won't forget.
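    To make the "a computer won't forget" point concrete, here's a toy sketch (records, fields, and IDs all invented for illustration, nothing from the article) of how a machine can compare one record against every other one it has ever seen and surface the most similar, patient #257 vs. #401567 style:

    ```python
    # Toy similarity search over free-text "records" using bag-of-words
    # cosine similarity. Entirely made-up data; real systems would use
    # learned representations, but the won't-forget property is the same.
    from collections import Counter
    from math import sqrt

    def vectorize(text):
        """Bag-of-words counts for a free-text record."""
        return Counter(text.lower().split())

    def cosine(a, b):
        """Cosine similarity between two count vectors (0.0 to 1.0)."""
        dot = sum(a[w] * b[w] for w in a)
        na = sqrt(sum(v * v for v in a.values()))
        nb = sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # Hypothetical patient notes keyed by record number.
    records = {
        257: "elevated troponin atypical chest pain resolved with rest",
        401567: "atypical chest pain elevated troponin discharged after rest",
        90210: "routine checkup no complaints normal labs",
    }

    def most_similar(target_id):
        """Compare the target against every other record; keep the best."""
        tv = vectorize(records[target_id])
        others = ((rid, cosine(tv, vectorize(txt)))
                  for rid, txt in records.items() if rid != target_id)
        return max(others, key=lambda p: p[1])

    rid, score = most_similar(257)  # finds 401567 as the closest match
    ```

    With three records a person does this by eye; with 700K, only the machine keeps every pairwise comparison in play.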

    As the article indicated, it's going to come down to a very nebulous thing.  Do we *TRUST* what these machines are telling us / doing behind the scenes?  Do you trust Siri's / Google's recommendation to go to that new Hawaiian / German fusion restaurant?  Do you trust the AI in your shiny new self-driving car to safely get you to work and home again?  Do you trust the AI that denied you a loan for a boat?  Especially when the system can't tell you *why* it did something.  Why did you get denied the loan when the human loan officer who looked over your paperwork said it all looked good and you were likely to be approved?  Why did your car suddenly slam on the brakes in the left lane of the freeway with no apparent traffic ahead?  Why did it suggest that restaurant when you've never had spam and pineapple in your life?

    Sure, they're working on methods to get some of the *why* out of these systems, but it sounds like the output often boils down to little more than a "because" answer.  Not enough detail to really get a handle on the reasons, but enough to give an inkling.
    Not sure I'd be happy with that little of an answer.
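    For what it's worth, one crude family of those *why* probes is permutation importance: shuffle one input at a time and watch how much the decision moves.  A minimal sketch (the loan model, features, and numbers are all made up for illustration, not whatever methods the article describes):

    ```python
    # Permutation-style importance probe on a made-up "black box" loan
    # scorer. Shuffling a feature that matters changes the scores a lot;
    # shuffling one that doesn't barely moves them.
    import random

    def loan_score(income, debt, years_employed):
        """Toy opaque scoring model standing in for a black box."""
        return 0.5 * income - 0.8 * debt + 0.2 * years_employed

    # Hypothetical applicants: (income, debt, years_employed).
    applicants = [(50, 30, 4), (80, 10, 10), (30, 25, 1), (60, 40, 7)]

    def importance(feature_idx, trials=200, seed=0):
        """Average absolute score change when one feature is shuffled."""
        rng = random.Random(seed)
        base = [loan_score(*a) for a in applicants]
        total = 0.0
        for _ in range(trials):
            col = [a[feature_idx] for a in applicants]
            rng.shuffle(col)
            for i, a in enumerate(applicants):
                perturbed = list(a)
                perturbed[feature_idx] = col[i]
                total += abs(loan_score(*perturbed) - base[i])
        return total / (trials * len(applicants))
    ```

    Even that only tells you *which* inputs mattered, not why they pushed the decision the way they did, which is about the "because" level of answer the article seems to describe.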