
Self-awareness, robot rights, and today’s robots

by AJung Moon
06 November 2012




This past weekend, I was a little preoccupied with the idea of self-awareness and robots. The above video is just for fun, of course. But this post isn’t really about the video and how entertaining it is (sorry if I disappointed you). Rather, it’s about the idea of self-aware robots and our use of the word ‘self-awareness’ (and other similar words) when it comes to talking about robots.

Let’s get started.

 

Last Friday, I was sitting in a seminar room reading an article that introduced me to a group called the American Society for the Prevention of Cruelty to Robots (ASPCR). The group’s position is very clear from its website, whose front page reads in large letters, “Robots are people too! Or at least they will be someday” and “Upholding Robotic Rights Since 1999”.

 

I was fascinated to find that such a group was formed when I was just finishing elementary school (i.e., quite a few years ago). Or maybe I should have expected similar groups to have formed years earlier, since 1999 is about 78 years after Karel Čapek‘s famous play R.U.R. (a.k.a. Rossum’s Universal Robots) premiered. I recommend that everyone interested in robots/roboethics read R.U.R. if you haven’t; it’s a fairly short and entertaining read, and it is the work that introduced the word ‘robot’ into our vocabulary (oh, and did I mention that an ebook version of it is free and online?).

 

For those of you who haven’t read it, it’s the story of a robot uprising, where the robots in the play are almost indistinguishable from humans in appearance. They are all manufactured by a company, R.U.R., located on a remote island. Only a handful of people live and work there; the rest of the company is run by robots.

 

Of the many memorable scenes in the play, reading about ASPCR reminded me of a particular moment in the introductory scene. It’s the part where one of the characters, Helena, the president’s daughter and a visitor to the island, decides to reveal the true reason why she came to see the robots at the factory.

Helena: Your position here. You are people just like we are, for God’s sake, just like anyone else in Europe, anyone else in the world! It’s a scandal, the way you have to live, it isn’t worthy of you!

Helena: Brothers, I haven’t come here on behalf of my father. I’m here on behalf of the League of Humanity. Brothers, the League of Humanity now has more than two thousand members. There are two thousand people who are standing up for you and want to help you.

Now, in this scene, Helena thinks she is talking to robots, but she is really talking to the handful of humans, the directors of various departments within the company. So she ends up mistakenly revealing her plan to the people who run the factory. But the idea is that Helena, a member of society at large and not an expert in the technology, has formed an organization to advocate for the rights of robots. She goes on to say that the League of Humanity aims to set robots free and have them be treated like people. Of course, the directors (humans) respond by having a good laugh and going on to outline the differences between humans and robots and why they don’t think the League’s goal is a worthwhile endeavor.

 

As I read through the article and thought about the similarities between Helena’s League of Humanity and ASPCR, I felt the need to take on the role of one of the directors of R.U.R. and point out that it will be very far into the future before we can legitimately consider robots to be self-aware, sentient beings that should have their own set of human-equivalent rights. But ASPCR isn’t necessarily saying that today’s robots are already in need of protection. Its position seems to be that if, hypothetically, we were to have robots with “genuine intelligence and self-awareness”, then we should give them the rights that “we take for granted as humans”.

 

I am very much open to such a discussion of robot rights if and when such machines are developed. I think advocating for robot rights would become necessary if robots were sentient beings equal to humans.

 

However, before we get to that point, maybe we should be asking why we would want such a machine in the first place.

 

Anyway, we are nowhere close to building robots that genuinely have human qualities such as self-awareness, morality, etc. There are engineering projects inspired by such concepts (e.g., Justin Hart’s project), but their efforts are to implement certain features of self-awareness, ethical behaviour, etc. on a robot via technical means.

 

But just because someone is inspired by these ideas does not necessarily mean that we will have robots with human-like self-awareness or other such qualities in the near future (near, as in, by the time someone finishes his/her PhD on the topic). Just as an FYI, only a few animals have passed the mirror test (a classic test of self-awareness) to date, and no robot has passed it yet.

 

It has been my assumption that most engineers understand this difference well, because they know the technical details of what an engineering product consists of. Just because I programmed a robot to move in a way that communicates a state of hesitation to people does not mean I built a robot that is capable of hesitating. It just means that I came up with human-like motion trajectories a robot can follow, and that people perceive those motions as hesitation. Mind you, the robot I used for my study definitely does not have any intentions or higher-order cognitive capability that would make it hesitant.
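To make the distinction concrete, here is a minimal sketch (with made-up waypoints, not the actual trajectories from my study) of how a ‘hesitant’ reach can be produced by nothing more than a scripted, timed sequence of positions:

```python
import time

# Minimal sketch: a "hesitant" reach as a scripted, timed sequence of
# waypoints. The numbers are invented for illustration; nothing here
# models an internal state of hesitation.
HESITANT_REACH = [
    (0.0, 0.40),  # (time in s, gripper extension in m): start reaching out
    (0.8, 0.25),  # pull back partway, as a person might when unsure
    (1.4, 0.25),  # hold still for a moment
    (2.4, 0.60),  # commit and complete the reach
]

def play_trajectory(waypoints, send_command=print):
    """Step through timed waypoints, sending each target to the robot."""
    start = time.time()
    for t, extension in waypoints:
        time.sleep(max(0.0, t - (time.time() - start)))
        send_command(f"move gripper to extension {extension:.2f} m")

play_trajectory(HESITANT_REACH)
```

The robot that runs this looks like it is hesitating; the code makes it obvious that nothing inside it actually is.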

 

Likewise, just because a robot is programmed with an algorithm that can select the highest-priority task among ‘attend to a crying baby’, ‘turn off the oven’, and ‘get water for the elderly’ does not mean the robot is suddenly equipped with a sense of intention to select the task appropriate for the situation. Today’s robots don’t have intentions. What it does mean is that the programmer’s intention, to build a robot that accurately selects a high-priority task and hopefully produces the best or a positive outcome, is reflected in the task selection algorithm.
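For illustration, a task selection algorithm of this kind can be as simple as the sketch below (the priority values, and the decision to use a fixed lookup table at all, are hypothetical):

```python
# Hypothetical priority table: the programmer's judgment about what
# matters most is encoded here, as plain numbers.
TASK_PRIORITIES = {
    "attend to a crying baby": 3,
    "turn off the oven": 2,
    "get water for the elderly": 1,
}

def select_task(pending_tasks):
    """Return the pending task with the highest programmer-assigned priority."""
    return max(pending_tasks, key=lambda task: TASK_PRIORITIES.get(task, 0))

print(select_task(["get water for the elderly", "attend to a crying baby"]))
# -> attend to a crying baby
```

Any ‘sense of priorities’ the robot appears to have lives entirely in that table, put there by a person.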

 

But as I pondered this, I was struck by the realization that this assumption is perhaps not true, and that engineers should really be careful about the way we present our robotics research.

 

During the seminar, a fellow mechanical engineering student started her presentation on implementing self-awareness for homecare robots.

 

“What a coincidence,” I thought. “I was just thinking about self-aware robots.”

 

It turns out that her use of the term ‘self-awareness’ is — somewhat disappointingly — very different from the commonly known definitions of the term:

“an awareness of one’s own personality or individuality” [Merriam-Webster], or “the capacity for introspection and the ability to reconcile oneself as an individual separate from the environment and other individuals” [Wikipedia].

Her model of self-awareness came down to a set of variables in her fuzzy logic system: she was using ‘self-awareness’ as a collective term for a programmed set of variables.
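As a rough sketch of this usage (the variable names below are my own invention, not hers), ‘self-awareness’ amounts to a labelled bundle of fuzzy membership values describing the robot’s own state:

```python
# Hypothetical sketch: "self-awareness" as a collective label for a few
# fuzzy variables about the robot's own state. The variables are invented
# for illustration, not taken from the presentation described above.
def fuzzy_low_battery(battery_fraction):
    """Membership in 'battery is low': 1.0 when empty, 0.0 at or above 50%."""
    return max(0.0, min(1.0, (0.5 - battery_fraction) / 0.5))

def self_awareness_state(battery_fraction, distance_to_base_m):
    """Bundle a few fuzzy variables under the umbrella term 'self-awareness'."""
    return {
        "battery_low": fuzzy_low_battery(battery_fraction),
        "far_from_base": max(0.0, min(1.0, distance_to_base_m / 10.0)),
    }

print(self_awareness_state(battery_fraction=0.2, distance_to_base_m=4.0))
# -> approximately {'battery_low': 0.6, 'far_from_base': 0.4}
```

A perfectly reasonable engineering abstraction, but a long way from introspection.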

 

As interesting as it was to listen to her presentation, I couldn’t help but feel concerned. Perhaps more technically trained individuals today are using vocabulary that could give a false impression of their research/work to a lay audience.

 

Using this kind of vocabulary makes for great headlines in magazine and newspaper articles. However, such headlines aren’t reflective of what society would consider an acceptable truth. Maybe this is a side effect of the way we, as engineers, try to present our work in a more public-friendly way. But that doesn’t excuse us from the responsibility of paying careful attention to whether the public/audience is misled by our statements (e.g., the audience believing that self-aware robots are a near-future issue).

 

Although we sometimes make fun of research and point out how far removed some of it seems to be from practical application, I think robotics research does make a difference in the world, and it is one of the research fields most closely connected to the products and applications of today. But the more this is true, the more we should strive to make positive contributions to society, not just through our technical work, but also by ensuring that our research doesn’t misguide the public about the realities of what robots are, and what they can do, today.

 

This article first appeared on Roboethics Info Database.





AJung Moon is an HRI researcher at McGill and publicity co-chair for the ICRA 2022 conference.




