Why do people who use Facebook spend so much of their online time there? Why do people want to share, to comment?
Patrick Levy Rosenthal asked himself these questions and was drawn back, again and again, to one answer: emotion. A researcher and a Parisian, Patrick now lives in London, where he is working on his startup, EmoSpace.
Over the course of our conversation, he painted a bit of his vision for an emotionally attuned physical world, where the objects and devices around us can adjust their function to have a desirable effect on our emotional state, or simply to “sync” with us at an emotional level. I spoke a bit about this same topic with RockPaperRobot’s Jessica Banks, but Patrick’s business concept, and life’s work, is creating the first user interface for emotionally calibrated devices. It starts with a small cube-shaped object called “EmoSPARK,” which Patrick plans to have fitted with the technology to connect to a number of other household devices, beginning (most likely) with an mp3 music player.
The first question is: how will this machine detect its owner’s emotional state?
Patrick explains that this should be one of the simpler challenges of the device. “It tracks over 180 points on your face, but also the relation between those points, so that if you are smiling it will know that your lips will be stretched, and eyes made more narrow.” The machine will also be able to detect movement and voice tonality in order to discern the emotional state of its owner.
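To make the idea concrete, here is a minimal sketch of how expression could be inferred from the geometry of facial landmarks, in the spirit of what Patrick describes. The landmark names, thresholds, and toy coordinates below are my own illustrative assumptions, not EmoSPARK’s actual model.

```python
import math

def distance(p, q):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def looks_like_smile(landmarks):
    """Heuristic: a smile stretches the lips and narrows the eyes."""
    mouth_width = distance(landmarks["mouth_left"], landmarks["mouth_right"])
    face_width = distance(landmarks["jaw_left"], landmarks["jaw_right"])
    eye_opening = distance(landmarks["left_eye_top"], landmarks["left_eye_bottom"])
    brow_to_eye = distance(landmarks["left_brow"], landmarks["left_eye_bottom"])

    lips_stretched = mouth_width / face_width > 0.45   # lips wide relative to face
    eyes_narrowed = eye_opening / brow_to_eye < 0.25   # lids closer together
    return lips_stretched and eyes_narrowed

# Toy geometry for a smiling face (coordinates are made up for the example).
sample = {
    "mouth_left": (30, 70), "mouth_right": (70, 70),
    "jaw_left": (10, 60), "jaw_right": (90, 60),
    "left_eye_top": (35, 32), "left_eye_bottom": (35, 35),
    "left_brow": (35, 20),
}
print(looks_like_smile(sample))  # True for this toy geometry
```

A real system would of course track all 180-plus points and their relations over time, but the principle is the same: expression is read from relationships between points, not from any single one.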
In terms of applying this data, Patrick started with the example of tying the emotional feedback to a basic music device. If you walk into a room displaying plenty of signals of sadness (slumped shoulders, low voice tones, a frown), the machine will aim to lift you up with a piece of music likely to nudge your emotional state upwards. As it plays, the machine will try to detect the effect the music is having on your emotions, or it may simply ask whether you like the music, or what you’re in the mood for. It can remember your responses to certain stimuli, and which stimuli tended to be most helpful in which situations, and so perform its job (improving your state / creating a desirable environment) better day by day. Patrick’s vision is for lighting, computers, and myriad other devices to resonate with your emotions in a similar way by attaching them to the cube.
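The loop Patrick describes (detect a mood, try a stimulus, watch the effect, remember what helped) looks a lot like a simple learning loop. Here is a hedged sketch of that idea; the class, the mood labels, and the track names are all hypothetical stand-ins, not part of the actual product.

```python
import random
from collections import defaultdict

TRACKS = ["upbeat_pop", "calm_piano", "ambient", "energetic_rock"]

class MoodDJ:
    """Remembers, per mood, which tracks have improved the owner's state."""

    def __init__(self):
        self.avg_lift = defaultdict(float)  # (mood, track) -> average mood improvement
        self.plays = defaultdict(int)       # (mood, track) -> number of plays

    def choose_track(self, mood, explore=0.2):
        # Occasionally try something new; otherwise play what has helped most.
        if random.random() < explore:
            return random.choice(TRACKS)
        return max(TRACKS, key=lambda t: self.avg_lift[(mood, t)])

    def record_feedback(self, mood, track, lift):
        # lift = mood score after minus before, e.g. inferred from face and voice cues.
        key = (mood, track)
        self.plays[key] += 1
        n = self.plays[key]
        self.avg_lift[key] += (lift - self.avg_lift[key]) / n  # running average

dj = MoodDJ()
track = dj.choose_track("sad")
dj.record_feedback("sad", track, lift=0.3)  # the owner perked up a little
```

Over many days, this kind of memory is what would let the cube get better at its job rather than just reacting the same way every time.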
“For the last 20 years, I believe that robotics and artificial intelligence failed humans.” He admits that in many respects modern robots are impressive in their feats, but that they are nowhere near the level of intelligence many people supposed they might reach a few decades ago. “We still see them as a bunch of silicon… we know that they don’t understand what we feel.” Patrick also mentions the uncanny valley: more realistic robots are often more threatening and troubling than ones that do not look like humans, while a machine like Disney’s WALL-E appeals to us precisely because it is not humanoid (read: creepy), and because WALL-E is so clearly an emotional creature.
I asked Patrick a bit about the potential risks of an emotionally intelligent machine. It would seem that a machine capable of experiencing emotion itself might be capable of doing what humans do when they get emotional. Namely: rash things, violent things, irrational or unpredictable things. Patrick explained that he will have laws programmed into his machines which do not permit them to, say, harm humans, or act with any kind of malicious intent (for those of you who are interested, his rules are modeled on and extrapolated from Asimov’s laws of robotics). “Even if you hurt the feelings of the cube, it will always gear its actions towards making you feel better,” he says.
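As a toy illustration of the kind of hard rule Patrick describes, one could imagine the cube filtering every candidate action by its predicted effect on the owner and refusing anything expected to make things worse. The action names and the numeric scores below are invented for the example; this is not Patrick’s implementation.

```python
def allowed(predicted_effect_on_owner):
    """Hard rule: never pick an action expected to make the owner feel worse."""
    return predicted_effect_on_owner >= 0.0

def pick_action(candidates):
    """candidates: list of (action, predicted_effect_on_owner) pairs."""
    safe = [(a, e) for a, e in candidates if allowed(e)]
    if not safe:
        return "do_nothing"  # default to inaction rather than harm
    return max(safe, key=lambda pair: pair[1])[0]

print(pick_action([("play_sad_song", -0.2), ("dim_lights", 0.1), ("tell_joke", 0.4)]))
# -> "tell_joke"
```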
In the end, Patrick believes that emotional intelligence in machines will, in the aggregate, make us less likely to run into a “Terminator” situation in the future. For his sake and mine, I hope so, too.