In automation we trust: senior thesis project examines concept of over-trusting robotic systems

26 May 2016



Serena Booth and her robot, Gaia, in its cookie-delivery disguise. (Photo by Adam Zewe/SEAS Communications.)

By Adam Zewe

If Hollywood is to be believed, there are two kinds of robots: the friendly and helpful BB-8s, and the sinister and deadly T-1000s. Few would suggest that “Star Wars: The Force Awakens” or “Terminator 2: Judgment Day” is scientifically accurate, but the two popular films raise the question, “Do humans place too much trust in robots?”

The answer to that question is as complex and multifaceted as robots themselves, according to the work of Harvard senior Serena Booth, a computer science concentrator at the John A. Paulson School of Engineering and Applied Sciences. For her senior thesis project, she examined the concept of over-trusting robotic systems by conducting a human-robot interaction study on the Harvard campus. Booth, who was advised by Radhika Nagpal, Fred Kavli Professor of Computer Science, received the Hoopes Prize, a prestigious annual award presented to Harvard College undergraduates for outstanding scholarly research.

During her month-long study, Booth placed a wheeled robot outside several Harvard residence houses. While she controlled the machine remotely and watched its interactions unfold through a camera, the robot approached individuals and groups of students and asked to be let into the keycard-access dorm buildings.

Booth’s robot, Gaia, waits outside the entrance to Quincy House. (Image courtesy of Serena Booth.)

When the robot approached lone individuals, they helped it enter the building in 19 percent of trials. When Booth placed the robot inside the building, and it approached individuals asking to be let outside, they complied with its request 40 percent of the time. Her results indicate that people may feel safety in numbers when interacting with robots, since the machine gained access to the building in 71 percent of cases when it approached groups.

“People were a little bit more likely to let the robot outside than inside, but it wasn’t statistically significant,” Booth said. “That was interesting, because I thought people would perceive the robot as a security threat.”

In fact, only one of the 108 study participants stopped to ask the robot if it had card access to the building.

But the human-robot interactions took on a decidedly friendlier character when Booth disguised the robot as a cookie-delivering agent of a fictional startup, “RobotGrub.” When approached by the cookie-delivery robot, individuals let it into the building 76 percent of the time.

“Everyone loved the robot when it was delivering cookies,” she said.

The cookie-delivery robot successfully gained entrance to the residence hall. (Image courtesy of Serena Booth.)

Whether they were enamored with the knee-high robot or terrified of it, people displayed a wide range of reactions during Booth’s 72 experimental trials. One individual, startled when the robot spoke, ran away and called security, while another gave the robot a wide berth, ignored its request, and entered the building through a different door.

Booth had expected that individuals who perceived the robot as dangerous wouldn’t let it inside, but follow-up interviews revealed that those who felt threatened by the robot were just as likely to help it enter the building.

“Another interesting result was that a lot of people stopped to take pictures of the robot,” she said. “In fact, in the follow-up interviews, one of the participants admitted that the reason she let it inside the building was for the Snapchat video.”

While Booth’s robot was harmless, she is troubled that only one person stopped to consider whether the machine was authorized to enter the dormitory. If the robot had been dangerous—a robotic bomb, for example—the effects of helping it enter the building could have been disastrous, she said.

A self-described robot enthusiast, Booth is excited about the many different ways robots could potentially benefit society, but she cautions that people must be careful not to put blind faith in the motivations and abilities of the machines.

“I’m worried that the results of this study indicate that we trust robots too much,” she said. “We are putting ourselves in a position where, as we allow robots to become more integrated into society, we could be setting ourselves up for some really bad outcomes.”

Harvard SEAS creates collaborative bridges across Harvard and educates the next generation of global leaders.




