Robohub.org
 

From literature to living rooms: Perceptions of robots in society

by Diana Marina Cooper
18 December 2014




As drones have become increasingly accessible, media outlets have been preoccupied with news stories that fuel our fears about the prospect of privacy invasion and physical harm. Although drones have only recently become mainstream, society has long been fixated on the need to regulate robots in order to protect itself from harm.

This dystopian view of robots originates in Golem literature and the Romantics. In 16th-century Jewish literature, Rabbi Loew of Prague created the Golem, a creature constructed from clay to protect the community from being expelled by the Holy Roman Emperor. Rabbi Loew would deactivate the Golem on Friday evenings in preparation for the Sabbath. One Friday, the Rabbi forgot to deactivate the Golem, and it became a violent monster that had to be destroyed. A similar theme emerged in Mary Shelley's Frankenstein, in which a man-made monster turns against its creator.

The blueprints outlined in Golem literature and the Romantics were further refined in the realm of science fiction. Writing just prior to the advent of the modern robotics industry, Asimov advanced three laws to negotiate the dangers associated with the introduction of robots into society proper. Asimov's Three Laws of Robotics provide that:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm;
  • A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law; and
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added a Zeroth Law that would supersede the Three Laws: a robot may not harm humanity, or by inaction, allow humanity to come to harm.

I posed the following question to Tony Dyson, designer of R2-D2, the brave and lovable droid whom many perceive as the true hero of Star Wars: As robots become increasingly autonomous, do you think we will need Asimov's laws? Here is what Dyson had to say:

I would love to say yes, all intelligent machines (autonomous robots) that are programmed to think for themselves, must also have an overriding ‘hard wired’ set of rules to work with. These should not be guidelines, but must be a set of laws, clearly defined by the ruling body. However the practical problem is, as Rodney Brooks, co-founder of iRobot has alluded to: ‘People ask me about whether our robots follow Asimov’s laws. There is a simple reason [they don’t]: I can’t build Asimov’s laws in them.’

 

So we ask the question, do we face any danger from robots without Asimov’s laws? I don’t see our AI research progressing into ‘Skynet Terminator’ anytime soon, but I may be just saying that, as part of my evil plan – there is a good reason why I share the same name as the ‘Head Robotic Scientist’ in the film Terminator.



Why do we fear robots? The term robot comes from the Czech word robota, which means forced labour. Simply put, we create robots to serve and fulfill our needs. However, advances in artificial intelligence are bringing us closer to achieving autonomous robotics. If and when robots become truly autonomous, we fear that they will no longer serve us, or worse, that they will turn against us and destroy us. The consequence of our fear of robots is that we will systematically resist technological advances that may prove beneficial. The debate has yet to be settled on whether robot surgeons will err less frequently than their human counterparts, or whether driverless cars will decrease the number of accidents on our roads. The point is that if we resist these advances, such questions will remain unanswered.

How can we move forward and change our perceptions about robots? In Japan, robots are highly integrated into society and this may have something to do with the different cultural outlook on human-robot interaction. For instance, in 2007, Japan’s Ministry of Foreign Affairs designated Astro Boy as the nation’s envoy for safe overseas travel. In North America, Hollywood could play an important role in shaping positive attitudes towards consumer drones and robots.

Earlier this year, Clive Thompson published an article in the Smithsonian titled "Why Do We Love R2-D2 and Not C-3PO?" Thompson explored how the design of robots shapes our reaction to them, arguing: "R2-D2 changed the mold. Roboticists now understand it's far more successful to make their contraptions look industrial—with just a touch of humanity. The room-cleaning Roomba looks like a big flat hockey puck, but its movements and beeps seem so 'smart' that people who own them give them names." And it appears that Hollywood does in fact inspire robot makers. iRobot co-founder Helen Greiner recently posted a note on Dyson's LinkedIn profile, stating: "Because of Tony's compelling emotive design, I fell in love with R2D2 when I was 11. This enabled my whole career in robotics from attending MIT to cofounding iRobot, the company that makes the Roomba vacuuming robot. I hope you see a little of R2D2 in your Roomba!"





Diana Marina Cooper is Vice President of Legal and Policy Affairs at PrecisionHawk.











©2024 - Association for the Understanding of Artificial Intelligence

