The Creepiness of AI

2019-03-05

Last year, I watched a keynote talk from Matthew Renze about AI. In his talk, there were examples of the amazing things that Artificial Intelligence can do, as well as some of the creepier things that have been developed. It was an interesting talk, one that gave me both hope for the benefits of better computer algorithms and concern about issues we may be unprepared to deal with as a society.

One of the more controversial recent developments in AI was the Google phone call demonstration, where a computer carries on a phone conversation with a human. What's disconcerting is that the person on the other end doesn't know they're talking to a computer, and the computer uses speech patterns, like "um" interspersed in its responses, that make the deception more convincing. While this certainly might be helpful in scheduling situations like the one shown in the call, there is a downside. Could you imagine artificial personas used in telephone scams or phishing schemes? A fake help desk that knows some of your information and then asks you to verify the rest?

There are perhaps greater concerns, such as the work done to imitate President Obama, with fake speeches generated entirely by computer. While movie studios might want fake actors to reduce labor costs, should we worry about the implications of a computer being able to convincingly imitate one of us in a video call?

The use of AI and ML against the large amounts of data an organization might have gathered could be good or bad, but it certainly opens the world to more problems than benefits if there isn't mandatory disclosure of where these techniques are used. Since there will always be criminal elements that don't obey the rules, this could be very scary.

There are certainly other issues, such as Target predicting a customer's pregnancy, which was the first really creepy piece of data analysis I saw. That one is a few years old and still bothers me: the prediction was accurate, but it was an unrefined use of the data. It's a good example of marketing groups being a bit too excited to use AI/ML technologies without thinking through the implications. Fortunately, this case seems to have dampened some of the enthusiasm for prediction in retailing.

Perhaps this last item is a bit funny, but it is also very worrisome to me. It's the case of an AI system playing video games. The system decided the best way to get the best score was to pause the game. Rather than compete and try to do better, the computer decided to just stop. That was a completely unexpected outcome, probably because the feedback and expectations weren't made explicit. Since humans quite often don't specify their requirements or expectations very well, I could imagine this becoming a very large issue as AI systems are used more widely. It could even be dangerous when a system does something we didn't anticipate and affects human health.

Most of us won't work with AI much as technicians, other than providing or managing some of the data. I do expect AI and ML systems to touch more and more of our lives, perhaps using our data for good, perhaps not. Hopefully we can help steer applications toward the former more than the latter.
