Robohub.org
 

Emotive communication with things – EmoShape Founder Patrick Levy Rosenthal


by Daniel Faggella
14 January 2014



Photo credit: Image Agency

Why do people who use Facebook spend so much of their online time there? Why do people want to share, to comment?
Patrick Levy Rosenthal asked himself these questions and was drawn back again and again to one answer: emotion. A researcher and a Parisian, Patrick now lives in London, working on his startup, EmoShape.

Over the course of our conversation, he painted a bit of his vision for an emotionally attuned physical world, where objects and devices around us can adjust their function to have a desirable effect on our emotional state, or to simply “sync” with us at an emotional level. I spoke a bit about this same topic with RockPaperRobot’s Jessica Banks, but Patrick’s business concept – and life’s work – is creating the first user interface for emotionally calibrating devices. It starts with a small cube-shaped object “EmoSPARK,” which Patrick plans to have fitted with the technology to connect to a number of other household devices – beginning (most likely) with an mp3 music player.

The first question is: How will this machine detect its owner’s emotional state?

Patrick explains that this should be one of the simpler challenges of the device. “It tracks over 180 points on your face, but also the relation between those points, so that if you are smiling it will know that your lips will be stretched, and eyes made more narrow.” The machine will also be able to detect movement and voice tonality in order to discern the emotional state of its owner.
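The idea of reading emotion from the *relations* between facial points, rather than the points themselves, can be illustrated with a toy sketch. The landmark names, coordinates, and scoring formula below are purely hypothetical, not EmoSPARK’s actual method:

```python
# Toy illustration: inferring a smile from the geometric relations
# between facial landmarks, as described in the interview.
# All landmark names and thresholds here are invented for the example.

def smile_score(landmarks):
    """Higher score = more smile-like. Stretched lips widen the mouth;
    narrowed eyes reduce eye height, both relative to face width."""
    mouth_w = landmarks["mouth_right"][0] - landmarks["mouth_left"][0]
    eye_h = landmarks["eye_bottom"][1] - landmarks["eye_top"][1]
    face_w = landmarks["face_right"][0] - landmarks["face_left"][0]
    # Normalising by face width makes the score scale-invariant.
    return (mouth_w / face_w) - (eye_h / face_w)

neutral = {"mouth_left": (40, 70), "mouth_right": (60, 70),
           "eye_top": (45, 40), "eye_bottom": (45, 48),
           "face_left": (10, 50), "face_right": (90, 50)}
smiling = {"mouth_left": (35, 68), "mouth_right": (65, 68),
           "eye_top": (45, 41), "eye_bottom": (45, 46),
           "face_left": (10, 50), "face_right": (90, 50)}

assert smile_score(smiling) > smile_score(neutral)
```

A real system would track many more points (Patrick mentions over 180) and fuse them with movement and voice tonality, but the principle is the same: the signal lives in the ratios between points, not their absolute positions.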

In terms of the application of this data, Patrick started with the example of tying the emotional feedback to a basic music device. If you walk into a room and display plenty of signals of sadness (slumped shoulders, low voice tones, frown), the machine will aim to lift your emotional state a level via a piece of music. In doing so, the machine will aim to detect the effect that the music is having on your emotions, or it may simply ask you if you like the music, or what you’re in the mood for. It can then remember your responses to certain stimuli, and which stimuli tended to be most helpful for you in which situations, and so perform its job (improving your state / creating a desirable environment) better day by day. Patrick’s vision is for lighting, computers, and myriad other devices to have a similar kind of resonance with your emotions by attaching them to the cube.
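The loop Patrick describes — detect a mood, pick a stimulus, observe the effect, and remember what worked — is essentially a simple learning feedback loop. Here is a minimal hypothetical sketch of that idea; the class, track names, and scoring scheme are illustrative assumptions, not EmoSPARK’s implementation:

```python
import random
from collections import defaultdict

class MoodPlayer:
    """Toy feedback loop: pick music for a detected mood, observe how the
    listener's mood changed, and remember which tracks helped most."""

    def __init__(self, tracks):
        self.tracks = tracks
        # Running average of observed mood improvement per (mood, track).
        self.scores = defaultdict(lambda: defaultdict(float))
        self.counts = defaultdict(lambda: defaultdict(int))

    def choose(self, mood):
        known = self.scores[mood]
        if known:
            # Prefer the track that has lifted this mood the most so far.
            return max(self.tracks, key=lambda t: known[t])
        return random.choice(self.tracks)  # no history yet: explore

    def feedback(self, mood, track, mood_delta):
        # Incrementally update the running average for this (mood, track).
        self.counts[mood][track] += 1
        c = self.counts[mood][track]
        s = self.scores[mood]
        s[track] += (mood_delta - s[track]) / c

player = MoodPlayer(["upbeat_pop", "slow_jazz"])
player.feedback("sad", "upbeat_pop", +0.6)   # this track helped
player.feedback("sad", "slow_jazz", -0.1)    # this one did not
print(player.choose("sad"))  # -> upbeat_pop
```

Day by day, the accumulated feedback shifts the choices toward whatever has actually improved the owner’s state — the same principle Patrick describes extending to lighting and other devices attached to the cube.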

“For the last 20 years, I believe that robotics and artificial intelligence failed humans.” He admits that in many respects, robots are impressive in their modern feats, but that they are nowhere near the level of intelligence that many people supposed they might be a few decades ago. “We still see them as a bunch of silicon… we know that they don’t understand what we feel.” Patrick also mentions the uncanny valley, and how more realistic robots are often more threatening and troubling than ones which do not look like humans, and how a machine like Disney’s WALL-E appeals to us because it is not humanoid (read: creepy), and because WALL-E is clearly an emotional creature.

I asked Patrick a bit about the potential risks of an emotionally intelligent machine. It would seem that a machine capable of experiencing emotion itself might be capable of doing what humans do when they get emotional — namely, rash things, violent things, irrational or unpredictable things. Patrick explained that he will have laws programmed into his machines which do not permit them to, say, harm humans, or act with any kind of malicious intent (for those of you who are interested, his rules are modeled and extrapolated from Asimov’s robot laws). “Even if you hurt the feelings of the cube, it will always gear its actions towards making you feel better,” he says.

In the end, Patrick believes that emotional intelligence in machines will – on the aggregate – make us less likely to run into a “Terminator” situation in the future. For his sake and mine, I hope so, too.





Daniel Faggella is the founder of TechEmergence, an internet entrepreneur, and speaker.

TechEmergence is the only news and media site exclusively about innovation at the crossroads of technology and psychology.






 


©2025.05 - Association for the Understanding of Artificial Intelligence


 











