Human-robot interaction (HRI) is a fascinating field of robotics research. It also happens to be the field most closely tied to many of the ethical concerns raised about interactive robots. Should HRI practitioners keep in mind things such as human dignity, psychological harm, and privacy? What about how robot design relates to racism and sexism?
We are moving closer to having driverless cars on roads everywhere, and naturally, people are starting to wonder what kinds of ethical challenges driverless cars will pose. One of those challenges is choosing how a driverless car should react when faced with an unavoidable crash scenario. Indeed, that topic has been featured in many of the major media outlets of late. Surprisingly little debate, however, has addressed who should decide how a driverless car should react in those scenarios. This "who" question is of critical importance if we are to design cars that are trustworthy and ethical.
In this episode, Ron Vanderkley speaks with Bill Reith, an engineer at Backyard Brains. The company develops RoboRoach, the world's first commercially available "cyborg," which was successfully funded on Kickstarter.
A large robot comes out of an office mailroom carrying a package marked “Urgent” to deliver to the boss upstairs. After navigating down the hall at maximum speed, it discovers someone is already waiting for the elevator. If they cannot both fit in the elevator, is it acceptable for the robot to ask this person to take the next elevator so it can fulfill its urgent delivery duty?
What does it mean to have giants like Google, Apple and Amazon investing in robotics? Since last December, Google alone has acquired a handful of companies in robotics, home automation and artificial intelligence. This can be pretty exciting for robotics. But what exactly is the internet giant planning to do with this technology? Is there something we should be worried about? If there is, what can we do about it?
We have reasons to feel both excited and uneasy about giant corporations’ investment in robotics.
It’s exciting for the robotics community that the giants (Google, Apple, and Amazon) are actively investing in robotics.
cy·borg | ˈsīˌbôrg | noun
a fictional or hypothetical person whose physical abilities are extended beyond normal human limitations by mechanical elements built into the body
This month we asked our Robotics by Invitation experts to tell us how they would use robotics to enhance themselves. Here's what they have to say …
As a researcher in robotics, I tend to cringe whenever someone asks how long it will take until people start to see Terminator-like robots on the streets. It's a fun question to think about, but it is often asked with all too much seriousness, as though a world with Terminators is the inevitable future that lies ahead of us.
But when I was asked this month's Robotics by Invitation question, I gladly put on my imagination hat without much hesitation or cringing. Part of that might be because no one will come after me and ask, "So, when do you think that kind of technology will be available?" I felt very much free to let my imagination do what it does best.
The first thing that crossed my mind was a vision Mr. John S. Canning of the Naval Surface Warfare Center Dahlgren Division had discussed many years ago (in 2009, I believe) in a talk he titled "A Concept of Operations for Armed Autonomous Systems". After thirty-something PowerPoint slides, he summarized the talk with "Let the machines target machines – not people". It's a compelling notion: building robots not as the ultimate killing machines, but as the ultimate weapon-neutralizing machines. Imagine that, instead of the targeted killing of humans, you could send robots for the targeted neutralization of weapons.
After coming across that summary, I remember thinking how useful it would be to have an expandable, hidden robotic device implanted in my forearm. If I ever needed to neutralize someone's weapon, or protect myself from an attacker (for whatever reason), the device would automatically activate, expand into a bullet-proof shield, and help me detect dangerous weapons in the area to neutralize. If it came with a mini jet-pack that let me fly, even better. I'd be the ultimate superwoman whose day job is robotics research, but with a side job of flying to random places and helping out with conflict situations. OK, that sounds like a plot from a comic book.
Some of you might think I'm dreaming of being a female version of Iron Man. But I am thinking of something more subtle (at least while the device isn't activated), like Inspector Gadget (for those of you who don't know him, Inspector Gadget was a cartoon character who could hide all of his cyborg gadgetry inside his trench coat). I would look just like a normal person, except that, when necessary, my 'implanted devices' would activate to serve whatever purposes I need.
That's only if you are asking me about implants. If you are asking me about robotic accessories, that's a whole different story. Wouldn't it be amazing if there were a foldable, light, pocket-sized device that you could carry with you while travelling (or grocery shopping), so that when you don't want to carry heavy things, you could just activate it and it would become a full-sized stair-climber and follow-bot? Such a device would have come in very handy during my trip to Europe, hopping between trains and planes with my luggage. I wouldn't want anything bigger or heavier than my purse for this, because that would defeat the purpose.
Anyone have one of these available for testing yet?
2013 was a year filled with talk of drones.
I'm not saying this just because I'm biased by the recent news reports on how large companies (Amazon, DHL, and UPS, to be exact) are exploring the use of drones as a new delivery mechanism. If this is news to you, don't worry: the robotics community came across it only a couple of weeks ago.
In this episode, AJung Moon talks to Julie Carpenter, a recent graduate of the University of Washington who interviewed 23 U.S. Military Explosive Ordnance Disposal personnel to find out how they interact with everyday field robots. Julie is currently writing a book on the topic that is scheduled to be published next year.
I've been talking about robot ethics for several years now, but that's mostly been about how we roboticists must be responsible and mindful of the societal impact of our creations. Two years ago, in my Very Short Introduction to Robotics, I wrote that robots cannot be ethical. Since then I've completely changed my mind*. I now think there is a way of making a robot that is at least minimally ethical. It's a huge technical challenge which, in turn, raises new ethical questions. For instance: if we can build ethical robots, should we? Must we? Would we have an ethical duty to do so, given that the alternative would be to build amoral robots? Or would building ethical robots create a new set of ethical problems: an ethical Pandora's box?
Posting on the Slate blog Future Tense, James Bessen takes issue with the notion that technology causes unemployment, illustrating his point by debunking a pair of frequently cited examples: textile workers in the early nineteenth century and telephone operators in the mid-twentieth century.
In a response titled "Luddites Are Almost Always Wrong: Technology Rarely Destroys Jobs" on TechDirt's Innovation blog, Bessen's thesis is roundly applauded, but he is taken to task for failing to connect the process that prevents net job destruction (the creation of new jobs) with reasonable access to intellectual property, currently endangered by non-practicing patent owners (a.k.a. "patent trolls").