The Downsides of AI

Steve Jones
SSC Guru (224K reputation)
Group: Administrators
Points: 224284 Visits: 19634
Comments posted to this topic are about the item The Downsides of AI

Follow me on Twitter: @way0utwest
Forum Etiquette: How to post data/code on a forum to get the best help
My Blog: www.voiceofthedba.com
Kyrilluk
SSC-Addicted (496 reputation)
Group: General Forum Members
Points: 496 Visits: 369
Very nice article. In banking, for loan applications, you are not allowed to use algorithms such as neural networks that "hide" the processing of the data, which means you have to use algorithms that are easily interpretable, such as decision trees. The problem is that this leads to sub-optimal results. GDPR compliance needs to be taken into account as well: from next year, we won't be allowed to use such black-box algorithms on any client data either, unless we come up with software that "interprets" the results of these AIs.
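As a rough sketch of what "easily interpretable" means here (scikit-learn's DecisionTreeClassifier on invented loan features, not anything a bank actually runs), every learned split can be printed and explained to a regulator or a customer:

from sklearn.tree import DecisionTreeClassifier, export_text

# Invented features: income, prior-default flag, number of open loans
X = [[25000, 1, 2], [80000, 0, 0], [40000, 1, 5], [120000, 0, 1]]
y = [0, 1, 0, 1]  # 0 = declined, 1 = approved (made-up outcomes)

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Unlike a neural network, the learned rules can be read back out verbatim
print(export_text(model, feature_names=["income", "prior_default", "open_loans"]))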
Robert Sterbal-482516
SSC Eights! (903 reputation)
Group: General Forum Members
Points: 903 Visits: 292
Is there enough variance in the results for loans that it pays to do them both ways?
jasona.work
SSCoach (16K reputation)
Group: General Forum Members
Points: 16207 Visits: 13106
From reading the linked article, it sounds like one of the things AI / deep learning is potentially doing (using the medical examples) is finding patterns in the patient records that a person would likely never find. Patterns in phrasing, in test results, etc. As for why a person wouldn't find them: how long would it take you to look through all the records for 700K patients? Would you remember enough detail from patient #257's records to realize that there's a similarity to patient #401567? I wouldn't. But a machine, a computer, won't forget.

As the article indicated, it's going to come down to a very nebulous thing. Do we *TRUST* what these machines are telling us and doing behind the scenes? Do you trust Siri's or Google's recommendation to go to that new Hawaiian/German fusion restaurant? Do you trust the AI in your shiny new self-driving car to safely get you to work and home again? Do you trust the AI that denied you a loan for a boat? Especially when the system can't tell you *why* it did something. Why did you get denied the loan when the human loan officer who looked over your paperwork said it all looked good and you were likely to be approved? Why did your car suddenly slam on the brakes on the freeway in the left lane with no apparent traffic ahead? Why did it suggest that restaurant when you have never had spam and pineapple before in your life?

Sure, they're working on methods to get some of the *why* out of these systems, but it sounds like the output borders on boiling down to a "because" answer. Not enough detail to really get a handle on the reasons, but enough to give an inkling.
Not sure I'd be happy with that little of an answer.
chrisn-585491
SSCertifiable (5.3K reputation)
Group: General Forum Members
Points: 5293 Visits: 2608
jasona.work - Thursday, August 24, 2017 5:57 AM

I don't trust Google for multiple reasons, which is sad, because at one time I may have trusted them more than Microsoft. The last thing I need is an AI controlling anything that may be coded with someone's political or business biases, the same way their search engine is. Or the fact that smartphones aren't allowed in some meetings, not because of the distraction, but because they can covertly "listen in". Not to mention security lapses and poor code. In an era where cars, drones, and information are becoming more weaponized, I'm not happy with how AI can be misused.

xsevensinzx
SSCertifiable (7.3K reputation)
Group: General Forum Members
Points: 7278 Visits: 3316
Hrrm, I don't know how to feel about this article.

On one hand, you have the ability to look under the hood. On the other hand, you may not. Take any of the Windows applications we may use on a daily basis: they're not open source, and we don't know what's going on inside them. Why would AI or ML change the fact that we don't know what's going on with the applications we use to make our business thrive? We just trust them to work and make the magic happen, regardless of how they were programmed.

Though I do understand where most are coming from. Something is making a prediction, a recommendation, a decision, and we are sometimes wary that the outcome is skewed, biased, or just flat-out wrong because we can't see, or maybe don't know how to interpret, how it was reached.

Regardless, I work in the data science wing of advertising. There are certainly plenty of AI and ML services out there where we cannot look under the hood. But then again, we do have the ability to create those same services now. Good data professionals are able to fully understand what their unsupervised learning techniques are doing because the code they are using is open source. Really good data professionals are able to use the math not only to interpret what's going on for an end user who may be wary, but also to prove it.

That said, there are a lot of data professionals just running things with code alone, without any thought, proper testing, or real understanding of the math and algorithms that make everything work behind the scenes. Just throwing things out there because it looks right. Unfortunately, the end user, who doesn't know what's going on, may get the shaft in this case.
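For a small sketch of what "looking under the hood" of an open-source unsupervised technique can look like (invented customer data and scikit-learn's k-means, purely for illustration, not anything we actually run):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Invented features per customer: sessions per week, average spend
data = np.vstack([rng.normal([2, 10], 1, (50, 2)),
                  rng.normal([10, 80], 5, (50, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

# The centroids are plain numbers you can show a wary end user, e.g.
# "one group is roughly 2 sessions/week spending ~10, the other 10/week spending ~80"
print(km.cluster_centers_)
print(km.inertia_)  # within-cluster sum of squares, the quantity k-means minimizes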
patrickmcginnis59 10839
SSCertifiable (6.6K reputation)
Group: General Forum Members
Points: 6621 Visits: 6150

xsevensinzx - Thursday, August 24, 2017 6:55 AM
On one hand, you have the ability to look under the hood. On the other hand, you may not. Take any of the Windows applications we may use on a daily basis: they're not open source, and we don't know what's going on inside them. Why would AI or ML change the fact that we don't know what's going on with the applications we use to make our business thrive? We just trust them to work and make the magic happen, regardless of how they were programmed.

With closed-source apps, you can still log, set flags, and get dumps (ones that are useful, anyway). Neural nets aren't like this: the values given to the interconnections aren't specifically set, they're set implicitly via the learning process. With closed source, you can often duplicate issues that the vendor can then act upon (that is, if the bugs are deterministic enough; race conditions and the like are often difficult to recreate by their very non-deterministic nature). With neural nets, it's terrifically difficult to understand what's happening even when the individual nodes can dump the values of their interconnections (I'm probably not even using the correct terminology here). Maybe one path would be to log all inputs to the network and try to replay them, but what if the neural network is one that CONTINUES to update its learning while in use? The exact state of failure might not even be reproducible.
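A small sketch of that replay problem (invented names, with scikit-learn's SGDClassifier standing in for a model that keeps learning while in use):

import copy
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
model.partial_fit([[0.1, 0.2]], [0], classes=[0, 1])  # initial training

input_log = []        # what most systems log
state_snapshots = []  # what you would also need to truly replay a decision

for features, label in [([0.5, 0.9], 1), ([0.4, 0.1], 0), ([0.6, 0.8], 1)]:
    input_log.append(features)
    state_snapshots.append(copy.deepcopy(model))   # weights as they were at decision time
    decision = model.predict([features])           # decision made with the current weights...
    model.partial_fit([features], [label])         # ...which then keep moving

# Replaying input_log against the final model can give different answers than the
# snapshots that were live when each decision was actually made.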

to properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
jasona.work
SSCoach (16K reputation)
Group: General Forum Members
Points: 16207 Visits: 13106
xsevensinzx - Thursday, August 24, 2017 6:55 AM
Good data professionals are able to fully understand what their unsupervised learning techniques are doing because the code they are using is open source.

I think one of the points you missed in the article is that the sorts of systems it's talking about, for want of a better term, evolve themselves. So the programmer may know what the system is doing and why initially, but after a few thousand, or million, training runs, they won't be able to say "this is the portion of the code that made it do X" anymore. This isn't an "open source vs. closed source" sort of thing; it's more of an "I built a tool, and now the tool has learned how to do things I didn't originally build it to do."

You built a piano-playing robot that can be "told" by passersby whether they like what it's playing from a built-in library of tunes, and now it's creating its own piano concertos from whole cloth.
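To put that in concrete terms (a toy sketch with scikit-learn's LogisticRegression and invented data, not how any real system is built): the exact same code, given different training histories, ends up making opposite decisions, so the behavior lives in the learned weights rather than in any line of code you could point at.

from sklearn.linear_model import LogisticRegression

def build_model(X, y):
    # Identical "code" every time; only the training data differs
    return LogisticRegression().fit(X, y)

model_a = build_model([[0], [1], [2], [3]], [0, 0, 1, 1])
model_b = build_model([[0], [1], [2], [3]], [1, 1, 0, 0])

print(model_a.predict([[2.5]]))  # expected: class 1
print(model_b.predict([[2.5]]))  # expected: class 0
print(model_a.coef_, model_b.coef_)  # same code path, different learned weights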

Steve Jones
SSC Guru (224K reputation)
Group: Administrators
Points: 224284 Visits: 19634
patrickmcginnis59 10839 - Thursday, August 24, 2017 7:04 AM

The other issue is that with coded apps, if I don't upgrade, I can usually get a sense of the determinism that exists for that app and its behavior. I know when it will do X based on Y.

For ML/AI, the models can grow and change later, and because there are often multiple data inputs (features), I may see behavior changes over time that I can't explain, and that a data scientist might struggle to explain as well. It's not likely to be dramatically different, but you never know.

Follow me on Twitter: @way0utwest
Forum Etiquette: How to post data/code on a forum to get the best help
My Blog: www.voiceofthedba.com
Steve Jones
SSC Guru (224K reputation)
Group: Administrators
Points: 224284 Visits: 19634
One of the other issues with using this in some places, say a loan application, is that as humans, we produce data from past results. Those results are often based on inconsistent choices, and sometimes on deliberate prejudices. We are all prejudiced, and some of us don't know it, but ML algorithms reveal some of this when they act on the data.

Like children, the systems learn based on the inputs and on our feedback about whether things are moving in the right direction or not. That means we're teaching them to behave as we do, with our leanings on certain subjects.

Follow me on Twitter: @way0utwest
Forum Etiquette: How to post data/code on a forum to get the best help
My Blog: www.voiceofthedba.com