Emotive communication with things – EmoShape Founder Patrick Levy Rosenthal


by Daniel Faggella
14 January 2014



Image: holding a heart. Photo credit: Image Agency

Why do people who use Facebook spend so much of their online time there? Why do people want to share, to comment?
Patrick Levy Rosenthal asked himself these questions and was drawn back, again and again, to one answer: emotion. A researcher and Parisian, Patrick now lives in London, where he is working on his startup, EmoShape.

Over the course of our conversation, he painted a picture of his vision for an emotionally attuned physical world, where the objects and devices around us can adjust their function to have a desirable effect on our emotional state, or simply “sync” with us at an emotional level. I spoke a bit about this same topic with RockPaperRobot’s Jessica Banks, but Patrick’s business concept – and life’s work – is creating the first user interface for emotionally calibrating devices. It starts with a small cube-shaped device called “EmoSPARK,” which Patrick plans to have fitted with the technology to connect to a number of other household devices – beginning (most likely) with an mp3 music player.

The first question is: How will this machine detect its owner’s emotional state?

Patrick explains that this should be one of the simpler challenges of the device. “It tracks over 180 points on your face, but also the relation between those points, so that if you are smiling it will know that your lips will be stretched, and eyes made more narrow.” The machine will also be able to detect movement and voice tonality in order to discern the emotional state of its owner.
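To make this concrete, here is a minimal sketch of how the geometry between landmarks – rather than the points alone – can be turned into an emotion cue. It is purely illustrative: it assumes the widely used 68-point facial-landmark scheme rather than EmoSPARK’s 180+ points, and a crude lip width-to-height ratio as a smile signal; the indices and scoring are my assumptions, not Patrick’s implementation.

```python
# Illustrative sketch: inferring a "smile" cue from the relations between
# facial landmarks. Assumes the common 68-point annotation scheme (mouth
# corners at 48/54, lip midpoints at 51/57) -- NOT EmoSPARK's actual
# 180+ point tracker.
import numpy as np

def smile_score(landmarks: np.ndarray) -> float:
    """landmarks: (68, 2) array of (x, y) points from any face tracker."""
    left_corner, right_corner = landmarks[48], landmarks[54]  # mouth corners
    top_lip, bottom_lip = landmarks[51], landmarks[57]        # lip midpoints
    width = np.linalg.norm(right_corner - left_corner)   # lips stretch wide...
    height = np.linalg.norm(bottom_lip - top_lip)        # ...and flatten
    return float(width / max(height, 1e-6))              # higher = more smile-like

# Toy usage with random points; a real tracker would supply the landmarks.
print(smile_score(np.random.rand(68, 2)))
```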

In terms of applying this data, Patrick started with the example of tying the emotional feedback to a basic music device. If you walk into a room displaying plenty of signals of sadness (slumped shoulders, low voice tones, a frown), the machine will aim to lift your emotional state with a piece of music. As it plays, the machine will try to detect the effect the music is having on your emotions, or it may simply ask whether you like the music, or what you’re in the mood for. Over time it can remember your responses to certain stimuli – and which stimuli tended to be most helpful in which situations – and so perform its job (improving your state / creating a desirable environment) better day by day. Patrick’s vision is for lighting, computers, and myriad other devices to have a similar kind of resonance with your emotions by attaching them to the cube.
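The remember-and-improve loop Patrick describes maps naturally onto a simple learning scheme. The sketch below frames it as a per-mood multi-armed bandit: try a track, observe how the detected mood shifts, and gradually favour what worked. The mood labels, track names, and reward signal are all illustrative assumptions on my part, not EmoSPARK’s design.

```python
# Illustrative sketch of "remember which stimuli helped in which situations":
# a tiny per-mood bandit. All names below are hypothetical.
import random
from collections import defaultdict

class MoodDJ:
    def __init__(self, tracks):
        self.tracks = tracks
        self.value = defaultdict(lambda: defaultdict(float))  # avg mood lift
        self.plays = defaultdict(lambda: defaultdict(int))    # play counts

    def pick(self, mood, explore=0.1):
        # Occasionally try something new; otherwise play the best known track.
        if random.random() < explore:
            return random.choice(self.tracks)
        return max(self.tracks, key=lambda t: self.value[mood][t])

    def feedback(self, mood, track, delta):
        # delta: observed change in emotional state (e.g. a smile-score shift),
        # folded into a running average for this (mood, track) pair.
        self.plays[mood][track] += 1
        n = self.plays[mood][track]
        self.value[mood][track] += (delta - self.value[mood][track]) / n

dj = MoodDJ(["upbeat_pop", "calm_piano", "favourite_album"])
track = dj.pick("sad")
dj.feedback("sad", track, delta=0.4)  # the song helped; remember that
```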

“For the last 20 years, I believe that robotics and artificial intelligence failed humans.” He admits that in many respects modern robots are impressive in their feats, but that they are nowhere near the level of intelligence that many people supposed they might reach a few decades ago. “We still see them as a bunch of silicon… we know that they don’t understand what we feel.” Patrick also mentions the uncanny valley: more realistic robots are often more threatening and troubling than ones that do not look like humans. A machine like Disney’s WALL-E appeals to us precisely because it is not humanoid (read: creepy), and because WALL-E is clearly an emotional creature.

I asked Patrick a bit about the potential risks of an emotionally intelligent machine. It would seem that a machine which is capable of experiencing emotion itself might be capable of doing what humans do when they get emotional – namely: rash things, violent things, irrational or unpredictable things. Patrick explained that he will have laws programmed into his machines which do not permit them to – say – harm humans, or act with any kind of malicious intent (for those of you who are interested, his rules are modeled on and extrapolated from Asimov’s laws of robotics). “Even if you hurt the feelings of the cube, it will always gear its actions towards making you feel better,” he says.

In the end, Patrick believes that emotional intelligence in machines will – in the aggregate – make us less likely to run into a “Terminator” situation in the future. For his sake and mine, I hope so, too.





Daniel Faggella is the founder of TechEmergence, an internet entrepreneur, and speaker.

TechEmergence is the only news and media site exclusively about innovation at the crossroads of technology and psychology.






 
