Robot selfies, and the road to self-recognition

09 June 2014


People take selfies with smartphones and digital cameras (or even with flying robots) and share them on social media, blogs, microblogs and image platforms for social purposes. Selfies may be just a passing trend, but they say a lot about people's narcissism and the zeitgeist of the media age.

But could selfies be used more productively? What does it mean for a robot to take a selfie? What good would that be? And is it even all that new?

Robot selfies in space

Space robots have been known to take selfies for some time. They are far away, alone, and what could be more important than taking a picture of one’s self and sending it to the carbon units back home? The web abounds with “space robot selfies” and “rover selfies”, such as the Top 10 Space Robot Selfies from Discovery News.

Arguably, the first robot space selfie was taken in 1976 by the Viking 2 lander on Mars. Lacking an arm-mounted camera, however, Viking 2 could capture only part of its own deck, the bottom of its high-gain antenna, the American flag, and a boulder-littered Martian landscape. Source: NASA, JPL

Space robot selfies are usually meant for engineers who want to check the status of instruments with their own eyes rather than rely on telemetry alone. Space scientists also welcome the views of the ground, mountains, and sky – reflections on a robot's surface can provide information about the light conditions and the atmosphere, while imprints can reveal the characteristics of the ground – but they could do without the vanity of robots, even if it makes conveying the robot's dimensions easier.

Just like Transformers

Humans are transformative beings by nature: we grow up and form ourselves, we wrinkle and decompose. The selfies we boast today will one day show us how old we have become and how much we have changed. Transformation will also likely become a key trait in robots. Software robots (bots) are already adept at it – in its original sense, an avatar is a guise a god dons to explore the world without drawing attention to his mission, and the computer god can choose any guise he wishes. We are likely to expect hardware robots to master transformation just as easily; think only of the popular toy characters marketed through animated films.

The top photo, taken in 2005, shows the shiny solar array of the Mars exploration rover Spirit two years after it had landed. The bottom photo, taken in 2011 by Spirit’s twin rover, Opportunity, shows the solar array covered in dust. Source: NASA, JPL

One would have to reach far to explain and substantiate this thesis. 3D printers would be needed to produce parts that can be combined in different ways. Robots would be needed to assemble, disassemble, or reassemble other robots. The future would have to be depicted as a permanently changing environment, to which modern machines adjust, all on their own or with the help of their fellow machines.

In a world where a robot may have to be small one day and tall the next, fast one hour and slow the next, or ugly one second and pretty the next, a selfie will allow a robot to remember who it is, whom it encountered with this appearance and what it did under this cover. Selfies will show a robot how old it has become and how much it has changed, and they will help it maintain its identity.
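As a toy illustration (not from the article, and far simpler than anything a real rover would run), a robot could quantify how much it has changed between two selfies by comparing downsampled grayscale frames – the kind of change visible between Spirit's clean and Opportunity's dust-covered solar arrays:

```python
def mean_abs_diff(img_a, img_b):
    """Mean absolute pixel difference between two equal-sized grayscale
    images, given as flat lists of 0-255 values. Higher means more change."""
    if len(img_a) != len(img_b):
        raise ValueError("images must have the same size")
    return sum(abs(a - b) for a, b in zip(img_a, img_b)) / len(img_a)

# Toy 4x4 frames: a bright, clean solar array vs. the same array under dust.
clean = [200] * 16
dusty = [120] * 16
print(mean_abs_diff(clean, dusty))  # 80.0 -> substantial change
print(mean_abs_diff(clean, clean))  # 0.0  -> nothing has changed
```

A real system would of course align the images and account for lighting before comparing, but the principle – selfies as a record of physical change – stays the same.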

Facebook for robots

Just when a trend seems to be wearing out, it can be transferred to another context where it causes a new stir.



Could selfies provide relevant information not just to engineers and scientists, but to the robots, too? What if they landed on a platform similar to Facebook, where the robots could network with each other, and share data and functions? Could they contribute to robot development and self-learning?

This is not far-fetched … already cloud robotics research platforms like RoboEarth, along with commercial services, are creating new ways for robots to exchange information and learn from the experience of other robots. One day robot selfies could be part of the information they share, perhaps allowing them to gain new knowledge about their own state and their immediate environment.
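Purely as a hypothetical sketch (the record format and every field name below are invented for illustration, not taken from RoboEarth or any real service), a shared robot selfie would likely bundle the image with machine-readable state metadata so that other robots could query it:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_selfie_record(robot_id, image_bytes, pose, battery_pct, notes=""):
    """Bundle a selfie with state metadata for sharing on a cloud platform.
    All field names here are illustrative, not part of any real API."""
    return {
        "robot_id": robot_id,
        "taken_at": datetime.now(timezone.utc).isoformat(),
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "pose": pose,                # e.g. {"x": ..., "y": ..., "theta": ...}
        "battery_pct": battery_pct,  # self-reported state at capture time
        "notes": notes,
    }

record = build_selfie_record(
    "rover-01",
    b"\x89PNG...",  # placeholder image bytes
    {"x": 1.2, "y": 3.4, "theta": 0.5},
    87,
    notes="dust visible on solar array",
)
payload = json.dumps(record)  # what might be uploaded to a shared platform
```

The point of such a record is that the selfie stops being just a picture: another robot (or an engineer) could filter shared selfies by pose, battery level, or notes, and learn from a peer's state without ever having observed it directly.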

On the road to self-recognition

The field of sociable robotics investigates how people and robots interact with each other, and is becoming increasingly important as robots enter our homes and workplaces. What if, by taking a selfie, a robot could interpret its own gestures, and reflect on and optimize its behavior? Or if it could study its selfie and learn to make its smile more credible? Could it gain "self-awareness" by recognizing its own reflection in the mirror?

Taken by a human at ICRA 2013, this photo is not a robot selfie, but it does point to a growing body of research around human-robot interaction, and the need for robots to be able to respond appropriately to human expressions. Source: Robohub archives.

A look in a mirror, one might object, should be enough for the standard robot, which can freeze the perceived reflection and evaluate it for as long as it wants. Yet with a selfie (taken by means of an arm or a mirror), it could do more than that. It could show other machines (as well as humans) what it looks like. It could draw attention to itself and advertise itself. It could make an impression and obtain feedback.

Once an android pulls a duckface and takes a selfie, any roboticist will know the breakthrough has been made.

If the robot selfie proliferates, it may be that we humans will get to know hundreds or thousands of artificial beings. We will look into their faces – provided they have faces – and we will learn something about their self-reflection.

And we might even find these images more exciting than the selfies taken by humans.


Oliver Bendel is a philosopher and a literary scholar with a doctor’s degree in business informatics.


©2021 - ROBOTS Association