
How science can help us make AI more trustworthy

by Simone Stumpf
22 July 2016



Baxter throwing shade? Source: YouTube

Stories about racist Twitter accounts and crashing self-driving cars can make us think that artificial intelligence (AI) is still a work in progress. But while these headline-grabbing mistakes reveal the frontiers of AI, versions of the technology are already invisibly embedded in many systems that we use every day.

These everyday uses include everything from fraud detection systems that monitor credit card transactions to email filters that learn not to swamp your inbox with spam. You’ve probably already interacted with an AI system today without even knowing it, and you probably enjoyed the experience.

One increasingly common form of AI can be found in chatbots, a type of software that you interact with by holding a conversation. Apple’s iPhone assistant, Siri, is an obvious example. Microsoft’s experimental Twitter chatbot, Tay, which learned how to speak from other users and ended up spouting racist phrases, is another. But many websites and apps now use chatbots to let people order services or locate specific information – without descending into bigotry.

For example, Amy is an AI assistant that schedules meetings for you via email exchanges with your contacts. Very few of these chatbots could pass themselves off completely as a human, however, so their designers need to think carefully about how people react to AI if they want their creations to be accepted. Otherwise it ends up feeling like you’re talking to a really bad PA.

Teaching a machine

There are many different approaches to making these digital machines behave in an intelligent way that mimics human behaviour. But what they all have in common is that they base what they do on huge amounts of data gathered from their environment.

Chatbots are often “trained” by being fed months of Twitter traffic as examples, which are then analysed using statistical methods to find frequent patterns of behaviour. For example, “fine, thank you” is a frequent response to a question such as “how are you?”. Quite often, the AI will not truly understand what it is saying; it will simply repeat patterns it has seen.
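To make that idea concrete, here is a minimal Python sketch of frequency-based response selection. The toy corpus and the fallback line are invented for illustration; no real chatbot is this simple.

```python
from collections import Counter, defaultdict

# Invented toy corpus of (prompt, response) pairs standing in for real chat logs.
corpus = [
    ("how are you?", "fine, thank you"),
    ("how are you?", "fine, thank you"),
    ("how are you?", "not bad"),
    ("hello", "hi there"),
]

# Count how often each response has followed each prompt.
response_counts = defaultdict(Counter)
for prompt, response in corpus:
    response_counts[prompt.lower()][response] += 1

def reply(prompt):
    """Return the most frequent response seen for this prompt, with a fallback."""
    counts = response_counts.get(prompt.lower())
    if not counts:
        return "sorry, I don't understand"
    return counts.most_common(1)[0][0]

print(reply("How are you?"))  # prints "fine, thank you"
```

Real systems use far richer statistical models than a lookup table, but the core principle – echoing the most common pattern seen in the data, without understanding it – is the same.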

Having a conversation with another human is actually quite complex. You need to recognise the words in a sentence, know when it is your turn to answer, and then generate an appropriate response that relates to the point of the conversation. Several things can go wrong, from simply not knowing a word to misreading the intent of the conversation. Obviously, the more errors there are, the less well you feel the conversation is going, and in the worst case you might stop interacting altogether.
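As a rough illustration of where those failure points sit, here is a hypothetical three-stage pipeline; the vocabulary, intents and replies are all invented for this sketch.

```python
# Invented vocabulary, intents and replies for a toy three-stage dialogue pipeline.
KNOWN_WORDS = {"what", "time", "is", "it", "hello", "call", "mum"}
INTENTS = {
    frozenset({"what", "time"}): "ask_time",
    frozenset({"call", "mum"}): "call_contact",
}
REPLIES = {"ask_time": "It is 12:00.", "call_contact": "Calling mum..."}

def recognise(utterance):
    """Stage 1: recognise the words; an unknown word is the first failure point."""
    words = utterance.lower().strip("?!. ").split()
    unknown = [w for w in words if w not in KNOWN_WORDS]
    if unknown:
        raise ValueError("unknown words: " + ", ".join(unknown))
    return words

def interpret(words):
    """Stage 2: map words to an intent; misreading intent is the second failure point."""
    for keywords, intent in INTENTS.items():
        if keywords <= set(words):
            return intent
    raise ValueError("could not work out the intent")

def respond(intent):
    """Stage 3: generate a reply appropriate to the recognised intent."""
    return REPLIES[intent]

try:
    print(respond(interpret(recognise("What time is it?"))))  # It is 12:00.
except ValueError as err:
    print(f"Sorry, I didn't catch that ({err}).")
```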

We already know that people interact differently with a machine than with another human. They trust AI less, they do not engage with it as deeply, and they talk to it in a simpler way than they would with real humans. In fact, there is evidence that the more a machine tries to mimic a real human conversation, the more off-putting it is, similar to the “uncanny valley” effect that occurs as robots become more humanoid in appearance.

So how can we design an AI system that is more acceptable to people? First, it needs better and more numerous examples of correct behaviour so that it makes fewer errors. People need to start working hand-in-hand with machines to shape the behaviour of AI systems.

What also seems to matter is how well a user understands how a system works. For example, a recent study on conversational agents found that people wanted to know what the system could do, what it was doing, how it was doing it, and whether it was changing based on how they had interacted with it in the past. This point seems to apply to all kinds of AI, as the transparency of an AI system appears to have a positive impact on user satisfaction.

Make it less human

Obviously, people are less likely to trust error-prone systems. But they also don’t want AI to act by itself without any confirmation. For example, if you know a system often misunderstands you, then you would not want it to dial a phone number without first checking that it is correct. The system also needs to make clear to the user that it is a robot. It won’t be like talking to another human, and that’s quite OK.
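One common way to build that caution in is to ask for confirmation whenever the system’s confidence is low. The sketch below assumes a hypothetical recogniser that has already produced a guess and a confidence score; the threshold and phone number are placeholders.

```python
CONFIRM_THRESHOLD = 0.9  # hypothetical cut-off: act silently only when very sure

def dial(number):
    print(f"Dialling {number}...")

def handle_dial_request(recognised_number, confidence):
    """Dial straight away if confident; otherwise check with the user first."""
    if confidence >= CONFIRM_THRESHOLD:
        dial(recognised_number)
        return
    answer = input(f"Did you mean {recognised_number}? (y/n) ")
    if answer.strip().lower().startswith("y"):
        dial(recognised_number)
    else:
        print("OK, I won't call anyone.")

# A low-confidence recognition triggers a confirmation prompt.
handle_dial_request("01632 960123", confidence=0.62)
```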

We can expect to see AI systems become more accurate and more integrated into everyday life, but there will also be spectacular failures. Mostly, these systems work fine, but what do we do when they don’t? Since the dawn of science fiction, there have been questions about the ethics and laws of AI and how we can control it. These remain open research questions, along with where AI should and shouldn’t be used, and who is responsible for making decisions and ultimately answerable for mistakes.

In the meantime, more and more companies are starting to integrate AI into their systems and products, with some success. Google’s Nest Learning Thermostat – which memorises your schedule and adapts to how you use it – is one obvious example, but there are scores of start-ups that now leverage the power of AI to provide a personalised experience for consumers. And thanks to the rise of data science, which supplies the data that teaches these systems, there has never been a better time for firms to turn to AI.

This article was originally published on The Conversation. Read the original article.





Simone Stumpf is a Senior Lecturer in the Department of Computer Science, City University London.




