We have reasons to feel both excited and uneasy about giant corporations’ investment in robotics.
It’s exciting for the robotics community that the giants (Google, Apple, and Amazon) are actively investing in robotics.
As a researcher in robotics, I tend to cringe whenever someone asks how long it will take until people start to see Terminator-like robots on the streets. It's a fun question to think about, but it is often asked with far too much seriousness, as though a world of Terminators is the inevitable future that lies ahead of us.
But when I was asked this month's Robotics by Invitation question, I gladly put on my imagination hat without much hesitation or cringing. Part of it might be that no one will come after me asking "so, when do you think that kind of technology will be available in the future?" So I felt free to let my imagination do what it does best.
The first thing that crossed my mind was a vision Mr. John S. Canning of the Naval Surface Warfare Center Dahlgren Division had discussed many years ago (in 2009, I believe) in a talk he titled "A Concept of Operations for Armed Autonomous Systems". After thirty-something PowerPoint slides, he summarized the talk with "Let the machines target machines – not people". It's a compelling notion: building robots not as the ultimate killing machines, but as the ultimate weapon-neutralizing machines. Imagine that, instead of sending robots for the targeted killing of humans, you sent them for the targeted neutralization of weapons.
After coming across that summary, I remember thinking how useful it would be to have an expandable, hidden robotic device implanted in my forearm, so that if I ever needed to neutralize someone's weapon, or protect myself from someone attacking me (for whatever reason), the device would automatically activate, expand into a bullet-proof shield, and help me detect dangerous weapons in the area to neutralize. If it came with a mini jet-pack that let me fly, even better. I'd be the ultimate superwoman whose day job is robotics research, but with a side job of flying to random places and helping out with conflict situations. Ok, that sounds like a plot from a comic book.
Some of you might think I'm dreaming of being a female version of Iron Man. But I am thinking of something more subtle (at least while the device isn't activated), like Inspector Gadget (for those of you who don't know him, Inspector Gadget was a cartoon character who could hide all of his cyborg gadgetry inside his trench coat). I would look just like a normal person, except that, when necessary, my 'implanted devices' would activate to serve whatever purposes I need.
That's only if you are asking me about implants. If you are asking me about robotic accessories, that's a whole different story. Wouldn't it be amazing if there were a foldable, light, pocket-sized device you could carry with you while travelling (or grocery shopping), so that when you don't want to carry heavy things, you could just activate it and it would become a full-sized stair-climber and follow-bot? Such a device would have come in very handy during my trip to Europe, hopping between trains and planes with my luggage. I wouldn't use anything bigger or heavier than my purse for this purpose, because that would defeat the purpose.
Anyone have one of these available for testing yet?
2013 was a year filled with talk of drones.
I'm not saying this just because I'm biased by the recent news reports on how large companies (Amazon, DHL, and UPS, to be exact) are exploring the use of drones as a new delivery mechanism. If this is news to you, don't worry. The robotics community came across it only a couple of weeks ago.
My bet is that by the time you buy your very first robotic butler, it will have a friendly head on it that moves. In fact, it would be a good idea to make robots with heads if they are intended to share spaces and objects with people. That's because the head is a highly expressive part of our body that we naturally use (a lot) to convey essential information to each other. Robots will need to do the same if they are going to hang out with us soft-tissued human beings at our homes and offices.
For example, when people are attending to something, they tend to look at the thing they are attending to. People also look in the direction they are headed when they walk, and make eye contact when they talk. People nod their heads when they want to show agreement with what is being said. Without these nonverbal cues from the head, interacting with each other would be much more difficult, because we wouldn't know what the other person was doing.
Rodney Brooks, a pioneer in robotics and now Chairman and CTO of Rethink Robotics, had this in mind when he built Baxter. Although Baxter's arms are as bulky-looking as those of its traditional industrial predecessors, one of its innovative features is a moving head that makes its interaction with not-so-trained users very intuitive.
If robots are to do meaningful things around us in a safe manner, it's essential that we know what a robot is attending to, where it is headed, and what it is about to do – much of which a robot head can help communicate. That way, we won't have to be roboticists to know when it is safe to be around a robot holding a giant knife to make us cucumber salad.
On April 8-9, Stanford Law School held the second annual robotics and law conference, We Robot. This year’s event focused on near-term policy issues in robotics and featured panels and papers by scholars, practitioners, and engineers on topics like intellectual property, tort liability, legal ethics, and privacy. The full program is here.
Looking at the two words together is enough to conjure up images of chaos and destruction – images all too familiar from the science fiction of writers such as Isaac Asimov and Arthur C. Clarke. It's also a concept many A.I. researchers will gladly tell you they've been plagued with at least once by friends or colleagues. But how much of a real ethical concern does it pose for society?
On April 10th, Robot Block Party 2013 took place right after the We Robot conference. I had an extra day to spend at Stanford University after the conference, so of course I couldn't miss out on the event.
The fun really began when I got there. I was greeted by a gigantic inflatable Keepon, followed by booth after booth of robots. Among them were Puzzlebox, a robot controlled using EEG, PR2 from Willow Garage, and a self-driving car demonstrating LIDAR technology from Velodyne. With a lot of help from Dr. Peter Asaro, an expert in roboethics and professor at The New School, and my labmate Mr. Ergun Calisgan from the CARIS lab (University of British Columbia), I captured some of the highlights from Robot Block Party on video.
Robot Futures is a new book written by Dr. Illah Nourbakhsh, a professor at Carnegie Mellon University who has been teaching roboethics at the university for many years. According to Dr. Noel Sharkey, this book is “[a]n exhilarating dash into the future of robotics from a scholar with the enthusiasm of a bag of monkeys. It is gripping from the start with little sci-fi stories in each chapter punching home points backed up forcefully by factual reality. This is an entertaining tour de force that will appeal to anyone with an interest in robots.”
This past weekend, I was a little preoccupied with the idea of self-awareness and robots. The above video is just for fun, of course. But this post isn't really about the video and how entertaining it is (sorry if I disappointed you). Rather, it's about the idea of self-aware robots and our use of the word 'self-awareness' (and other similar words) when it comes to talking about robots.
Let’s get started.