AJung Moon is a Ph.D. student in Mechanical Engineering at the University of British Columbia, Vancouver. She recently completed her Master’s in Applied Science at UBC, where she designed robots that ‘hesitate’ when they are about to collide with people. Prior to entering the world of research, she received her Honours Mechatronics Engineering degree with a minor in Philosophy from the University of Waterloo. Her interdisciplinary interests in exploring how robots affect people, and how this knowledge should inform interactive robot design, fuel her passion for human-robot interaction and roboethics.
She has been passionate about discussing roboethics issues since her undergraduate years, and has been blogging about social, legal, and ethical issues pertaining to robotics on the Roboethics Info Database.
I’m not saying this just because I’m biased by the recent news reporting on how large companies (Amazon, DHL, and UPS to be exact) are exploring the use of drones as a new delivery mechanism. If this is news to you, don’t worry. The robotics community came across this only a couple of weeks ago.
Are you curious about what your future robotic assistants will look like?
My bet is that by the time you buy your very first robotic butler, it will have a friendly head that moves. In fact, it is a good idea to give robots heads if they are intended to share spaces and objects with people. That’s because the head is a highly expressive part of our body that we naturally use (a lot) to convey essential information to each other. Robots will need to do the same if they are going to hang out with us soft-tissued human beings in our homes and offices.
For example, when people are attending to something, they tend to look at it. People also look in the direction they are headed when they walk, and make eye contact when they talk. They nod their heads to show agreement with what is being said. Without these nonverbal cues from the head, interacting with each other would be much more difficult, because we wouldn’t know what the other person was doing.
Rodney Brooks, a pioneer in robotics and now Chairman and CTO of Rethink Robotics, had this in mind when he built Baxter. Although Baxter’s arms are as bulky-looking as those of its traditional industrial robot predecessors, one of its innovative components is a moving head that makes its interaction with not-so-trained users very intuitive.
If robots are to do meaningful things around us in a safe manner, it’s essential that we know what the robot is attending to, where it is headed, and what it is about to do – a lot of which a robot head can help with. That way, you won’t have to be a roboticist to know when it is safe to be around a robot holding a giant knife to make your cucumber salad.
On April 8-9, Stanford Law School held the second annual robotics and law conference, We Robot. This year’s event focused on near-term policy issues in robotics and featured panels and papers by scholars, practitioners, and engineers on topics like intellectual property, tort liability, legal ethics, and privacy. The full program is here.
Looking at the two words together is enough to conjure up images of chaos and destruction. It’s an image all too familiar from science fiction settings such as those of Isaac Asimov or Arthur C. Clarke. It’s also a concept many A.I. researchers will gladly tell you they’ve been plagued with at least once by friends or colleagues. But how much of a real ethical concern does it pose for society?
On April 10th, Robot Block Party 2013 took place right after the We Robot conference.
I had an extra day to spend at Stanford University after the conference, so of course I couldn’t miss out on the event.
The fun really began when I got there. I was greeted by a gigantic inflatable Keepon, followed by booth after booth of robots. Among them were Puzzlebox, a robot controlled using EEG; PR2 from Willow Garage; and a self-driving car from Velodyne demonstrating LIDAR technology. With a lot of help from Dr. Peter Asaro, an expert in roboethics and professor at The New School, and my labmate Mr. Ergun Calisgan from the CARIS lab (University of British Columbia), I captured some of the highlights from Robot Block Party on video.
Robot Futures is a new book written by Dr. Illah Nourbakhsh, a professor at Carnegie Mellon University who has been teaching roboethics at the university for many years. According to Dr. Noel Sharkey, this book is “[a]n exhilarating dash into the future of robotics from a scholar with the enthusiasm of a bag of monkeys. It is gripping from the start with little sci-fi stories in each chapter punching home points backed up forcefully by factual reality. This is an entertaining tour de force that will appeal to anyone with an interest in robots.”
This past weekend I was a little preoccupied with the idea of self-awareness and robots. The above video is just for fun, of course. But this post isn’t really about the video and how entertaining it is (sorry if I disappointed you). Rather, it’s about the idea of self-aware robots and our use of the word ‘self-awareness’ (and other similar words) when it comes to talking about robots.