Pepper, Jibo and Milo make up the first generation of social robots, the start of what promises to be a cohort with diverse capabilities and applications. But what is a social robot, and what should it be able to do? This article gives an overview of theories that can help us understand social robotics better.
The definition I like most describes social robots as robots for which social interaction plays a key role: the robot needs social skills in order to perform its function. A survey of socially interactive robots defines some key characteristics of this type of robot. A social robot should express emotions, converse at an advanced level, understand the mental models of its social partners, form social relationships, use natural communication cues, exhibit personality and learn social capabilities.
Understanding Social Robots offers another interesting perspective on what a social robot is: Social robot = robot + social interface
In this definition, the robot has its own purpose outside of the social aspect. Examples include care robots, cleaning robots in our homes, service robots at an airport or mall information desk, or chef robots in a cafeteria. The social interface is simply a familiar protocol that makes it easy for us to communicate effectively with the robot. Social cues can give us insight into the robot’s intentions: shifting its gaze towards a mop, for example, signals that it is about to change activity, even though it might not have eyes in the classical sense.
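As a loose illustration of the “robot + social interface” idea (my own sketch, not taken from any of the cited sources), the two parts can be thought of as separate components: the functional robot decides what to do, and the social layer only translates that intention into a familiar cue, like the gaze-towards-the-mop example above. All class and method names here are hypothetical.

```python
# Hypothetical sketch: a task robot composed with a social interface layer.
# None of these names come from a real robotics framework.

class CleaningRobot:
    """The functional core: it has a purpose independent of any social behaviour."""

    def next_action(self) -> str:
        # In a real system this would come from a task planner.
        return "mop_floor"

    def perform(self, action: str) -> None:
        print(f"[robot] performing: {action}")


class SocialInterface:
    """Translates the robot's intentions into familiar social cues."""

    CUES = {
        "mop_floor": "shift gaze towards the mop",
        "dock": "lower head and dim the lights",
    }

    def signal_intention(self, action: str) -> None:
        cue = self.CUES.get(action, "pause briefly and face the user")
        print(f"[social] cue: {cue}")


class SocialRobot:
    """Social robot = robot + social interface."""

    def __init__(self, robot: CleaningRobot, interface: SocialInterface):
        self.robot = robot
        self.interface = interface

    def step(self) -> None:
        action = self.robot.next_action()
        self.interface.signal_intention(action)  # cue the intention before acting
        self.robot.perform(action)


if __name__ == "__main__":
    SocialRobot(CleaningRobot(), SocialInterface()).step()
```

The point of the split is that the social layer can be bolted onto any functional robot without changing what the robot actually does.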
These indicators of social capability can be as useful as genuine social abilities and drives inside the robot. As An Ethical Evaluation of Human–Robot Relationships points out, children readily project social capabilities onto simple inanimate objects like a calculator, and a puppet becomes an animated social partner during play. In the same way, a robot only needs the appearance of sociability to be an effective communicator.
Masahiro Mori defined the Uncanny Valley theory in a 1970 paper, describing how a robot’s appearance and movement affect our affinity for it. In general, we seem to prefer robots that look more like humans and less like machines. But there is a point at which a robot looks both human-like and robot-like, and it becomes confusing for us to categorise it. This is the Uncanny Valley – the appearance is very human yet also slightly ‘wrong’, which makes us uncomfortable. If the appearance gets past that point and looks even more human, likeability goes up dramatically.
In Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley (2), we learn that robot appearance has a similar effect on how trustworthy a robot seems. Robots that showed more positive emotions were also more likeable. So it seems that more human-looking robots would lead to more trust and likeability.
Up to this point, we have assumed that social robots should look humanoid or robotic. But what other forms can robots take? At a minimum, a robot should have a face to give it an identity and make it an individual. A face also lets the robot indicate attention and imitate its social partner, which improves communication. Most non-verbal cues are relayed through the face, and a face creates an expectation of how to engage with the robot.
The appearance of a robot can also help set people’s expectations of what it should be capable of, and limit those expectations to a few focused functions that are more easily achieved. A bartender robot, for example, can be expected to hold a good conversation, serve drinks and take payment, but it is probably fine if it only speaks one language, as it only has to fit the context it’s in.
In Why Every Robot at CES Looks Alike, we learn that Jibo’s oversized, round head is designed to mimic the proportions of a young animal or human to make it more endearing. It has a single eye so that it does not look human and robotic at the same time, which would trigger the Uncanny Valley effect. Appearing too human-like also creates the impression that the robot will respond like a human, which robots are not yet capable of.
Another interesting example is Robin, a Nao robot used to teach children with diabetes how to manage their illness (6). The children are told that Robin is a toddler, and they use this role to explain away any imperfections in Robin’s speech.
A survey of socially interactive robots also offers some useful concepts for defining levels of social behaviour in robots, ranging from robots that merely evoke social responses to robots that are genuinely socially intelligent.
Clearly, social behaviour is nuanced and complex. But, to come back to the earlier point, social robots can still be effective without reaching the highest levels of social accomplishment.
To close, de Graaf poses a thought-provoking question (4):
How will we share our world with these new social technologies and how will a future robot society change who we are, how we act and interact—not only with robots but also with each other?
It seems that we will first and foremost shape robots according to our own human social patterns and needs. But we cannot help being changed, as individuals and as a society, when we eventually add a more sophisticated layer of robotic social partners.