Robohub.org
 

On Pepper, Aldebaran and emotional robotics: Interview with Bruno Maisonnier


by Mark Stephen Meadows
24 June 2014



Bruno Maisonnier at TEDxConcorde. Photo credit: Rodrigo Sepúlveda Schulz.

Last week I dropped by Aldebaran’s studio to get a glimpse of Pepper in action, and was pretty excited about this robot. But then I talked with Bruno Maisonnier, the CEO of Aldebaran. And then I got really excited: what Pepper represents is another iteration in the realization of the roboticists’ dream.

Aldebaran built Pepper for Japanese telecom company SoftBank, which will start selling the robot to consumers in Japan next February for about US $1,900. The robot will be produced by Taiwanese electronics manufacturer Foxconn (which manufactures the iPhone and other Apple products). So with the marketing presence, technical abilities, and hardware systems now at consumer-grade quality, we’re one step closer to Rosie the Robot. Or Her. Or the Cylon uprising. It remains to be seen.


Emotional interaction, according to Bruno, is “everything.” We spoke briefly on June 13, 2014 and I was able to ask him about some of this technology, its socio-cultural impacts, and the future of this work for Aldebaran. Several years ago I had been hanging around the Paris studio of Aldebaran and had seen Romeo (and wrote about it and NAO in my book on robots), so I had some sense of what was coming, but I was not expecting the emphasis on affective computing methods that Pepper embodies.

MAISONNIER: Pepper is a fantastic dream I’ve had for years – an interactive robot you can have at home that’s still large enough to be a companion and not just a toy or a pet. At Aldebaran we have a specific concept. We want to help people with robots. We want them to be companions. To help people grow, have fun, and connect them to others.

So first you need to have a cute, nice robot – not just something that makes you say, ‘Oh, that is nice,’ but something whose essence is that it lives in your home. This is not for geeks, so design is our first and most essential concern. We want a natural way to interact with the robot, not a 200-page manual that people need to read. So we need people to be comfortable and interact easily with the robot. We humans have evolved to interact with other humanoid shapes. Sometimes, even without words, you understand what the other person means, and yes or no can be clearly indicated with body language. We can see if someone is saying yes but meaning no, and if you say yes when you are sad, or no when you are happy, the message is totally different. So if I want the robot interaction to be natural, the robot has to understand that.

MEADOWS: How do you do that? That’s a tall order.

MAISONNIER: These drivers for both understanding and expressing emotions are key to our design. NAO was the first emotional robot, with many possibilities, but many limitations because of its size. Pepper is a larger robot, and one of the consequences of this design decision is that it can move faster. You can have the robot follow you, or if it has to go into your bedroom to look for something it won’t take 20 minutes, and it will be able to open doors. If you want to interact with emotional context, then size is important; [Pepper’s size offers] a more human interaction, rather than talking with something the size of a mouse.

MEADOWS: Ok, but an android is a bad design for a robot because the function follows the form.

MAISONNIER: But for Pepper the function is to interact with people. The majority of people in robotics come from industrial robotics; they’re building left-brain robots. I’m not interested in that. There are huge applications there, but that is not what I’m interested in. We want right-brain robots for friends and companions – robots that are more like Jiminy Cricket.

MEADOWS: Here at Geppetto we feel much the same way. Now Romeo and NAO have legs. Pepper’s got wheels. Why?

MAISONNIER: The main driver is that Pepper is an emotional robot, and we rarely use our legs for social interaction. While there are some drawbacks, like going up stairs, wheels also have benefits, such as giving the robot greater autonomy because it can carry a heavier battery. The goal of the robot is to help by teaching, being a health coach, monitoring, growing, etc. So it is more right-brain. Yes, the form follows function, because the interaction with humans is the function. So you don’t need legs.

MEADOWS: So how are emotions used to guide the robot?

MAISONNIER: Communication is emotion. You are an artist and know this. When you are speaking, only about 20–30% of what you say is in the words. The other 70–80% is not words – all the other parts are emotions. So communication is emotion.

MEADOWS: What are the methods you’ve developed to convey this?

MAISONNIER: We’ve invented an emotional engine that collects many different inputs and combines them in a multi-modal approach using facial analysis, tone of voice, body language, gesture, small noises like laughter, and the micro-signals your body sends when you talk with someone. All of that feeds a higher-level message, which might be that “she said yes but she means no,” for example. This serves as the input to our dialogue engine, and we have an expressive engine that responds with automatic body language. It has all the possibilities – hands, movements, sound, voice, etc. – to express the correct message.
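The pipeline Bruno describes – many noisy channels fused into one higher-level message, including the “said yes but means no” case – can be illustrated with a toy sketch. This is not Aldebaran’s code; every name and weight below is invented for illustration, with each channel reporting an agreement score and a confidence weight:

```python
from dataclasses import dataclass

# Hypothetical per-channel readings: an agreement estimate in [-1, 1]
# (-1 = clear "no", 1 = clear "yes") and a confidence weight in [0, 1].
@dataclass
class ChannelSignal:
    name: str         # e.g. "words", "tone", "facial", "gesture"
    agreement: float
    weight: float

def fuse_channels(signals):
    """Fuse channel readings into one higher-level message.

    Returns the weighted-average agreement, plus a conflict flag for
    the case where the words point one way but the non-verbal
    channels point the other ("she said yes but she means no").
    """
    total_w = sum(s.weight for s in signals) or 1.0
    fused = sum(s.agreement * s.weight for s in signals) / total_w

    words = [s for s in signals if s.name == "words"]
    nonverbal = [s for s in signals if s.name != "words"]
    conflict = False
    if words and nonverbal:
        nv_w = sum(s.weight for s in nonverbal) or 1.0
        nv = sum(s.agreement * s.weight for s in nonverbal) / nv_w
        # Opposite signs means verbal and non-verbal channels disagree.
        conflict = words[0].agreement * nv < 0
    return fused, conflict

signals = [
    ChannelSignal("words",  1.0, 0.25),   # she said "yes"
    ChannelSignal("tone",  -0.6, 0.35),   # but the tone is flat
    ChannelSignal("facial", -0.8, 0.40),  # and the face disagrees
]
fused, conflict = fuse_channels(signals)
print(round(fused, 2), conflict)  # → -0.28 True
```

A real engine would of course learn these weights from data rather than hand-set them, but the shape – per-channel estimates fused into a single message that feeds a dialogue engine – follows the description above.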

MEADOWS: Beautiful. What do you do with this platform?

MAISONNIER: We’ve been speaking about the robot itself, and all of that is part of our expertise platform; on top of it are the applications, so users or developers can build apps via our API. All of that goes to the developers, who can then use it as they want. There are many ways to use this. For example, if you are designing an application to play chess, you can use the API to deliver emotional messages from your chess program. The system can appear frustrated, or sense frustration at key moments in the game.
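To make the chess example concrete, here is a minimal sketch of how an app might drive an expressive engine through an API. The class and method names are invented stand-ins, not Aldebaran’s actual API (Pepper and NAO developers would use the NAOqi SDK):

```python
# Illustrative sketch only: ExpressiveRobot and its methods are
# hypothetical stand-ins for a robot's expressive engine API.
class ExpressiveRobot:
    """Stand-in for the robot-side expressive engine."""

    def express(self, emotion, line):
        # A real robot would combine speech, gesture, and posture;
        # here we just return the message it would deliver.
        return f"[{emotion}] {line}"

def on_chess_move(robot, evaluation_drop):
    """App-side logic: react to a move in the chess program.

    A large drop in the engine's evaluation reads as a blunder,
    so the robot appears frustrated at that key moment.
    """
    if evaluation_drop > 2.0:
        return robot.express("frustrated", "Oh no, I did not see that coming!")
    return robot.express("neutral", "Your move.")

robot = ExpressiveRobot()
print(on_chess_move(robot, 3.5))  # → [frustrated] Oh no, I did not see that coming!
print(on_chess_move(robot, 0.5))  # → [neutral] Your move.
```

The design point is the split the interview describes: the platform owns *how* emotions are expressed, while the app only decides *when* an emotional message is appropriate.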

The developers are core to our strategy. Aldebaran has an app store, and we verify the quality of each app; users can then download applications from the store free of charge. We also have an atelier [a French term somewhere between workshop, gallery, and studio] where everyone can come in, meet the robots, experiment with them, and play with our applications. This will be the home of the developers.

MEADOWS: I love this! Perfect. Are they only online or can I visit them?

MAISONNIER: We will open an Aldebaran Atelier in Paris on 26 June 2014, and two more will open in Tokyo in August. By 2015 we will have two in the US (New York and California).


MEADOWS: Changing the subject a little: you have a robot with some incredible surveillance capabilities. I think there’s a risk here, in that systems may not respect privacy, or may be used for ends that don’t align with the end-user’s interests. As I’ve written about on Robohub, where I ask whether surveillance is the new business model for consumer robotics, Andy Rubin is taking Google’s robotics effort in the same direction he took the Android effort. This not only changes the landscape for Aldebaran, it means you have to address questions of privacy. What is Google doing?

MAISONNIER: I don’t know. It is opening up the [robotics] market. They will need some time before they have robots ready, because the culture of software is not the culture of hardware. Google is a software company, and though they have bought these companies, this is not an immediate solution. Building a $1m robot is not the same as building a $1k robot – they will need time. Their business model, as you said, is to sell advertisements. Do you want to have a spying robot at home?

MEADOWS: Well, most of us don’t seem to mind. Many people still use Google. Why is spying on me with my email different from spying on me with my robot?

MAISONNIER: One is autonomous. If you send your mail, you don’t see it flying around your house checking your books and checking your clothes. The robot has to be friendly, but just as important, it needs to be a friend that can protect you. You have to be sure it is on your side.

MEADOWS: Autonomy and invisibility are quite different. And a robot can do things that a user may not perceive. After all, email could be called a spybot as easily as a robots.txt file.

MAISONNIER: This kind of thing will exist. But if robots like Aldebaran’s exist, then between a spying robot and a non-spying robot, people will choose the right one. People will choose, and we will settle on rules that we don’t want spying robots. I don’t want them, you don’t want them. We would put in a physical bridge, a switch, that blocks communication and ensures privacy. That way people can say, “I want to authorize my robot to communicate with the outside world,” or not.

MEADOWS: I hope so. Where will people be able to find Pepper?

MAISONNIER: Thousands of Peppers will be in SoftBank stores in Japan.

MEADOWS: I understand it will be about $2000, correct?

MAISONNIER: Yes, for Pepper standalone; after that you can get insurance, monitoring modules, repair modules, content, etc. These added components can be important. For me, the next step is to add content – applications in our application store – so a huge community of people will become interested in joining our adventure.

And what an adventure it will be. We look forward to seeing what great inventions Aldebaran turns out next.






Mark Stephen Meadows is President of BOTanic, a company that provides natural language interfaces for conversational avatars, robots, IoT appliances, and connected systems.




