
The race for robot clairvoyance

by Oliver Mitchell
09 November 2018




This week a Harvard Business School student challenged me to name a startup capable of producing an intelligent robot – TODAY! At first I did not understand the question, as artificial intelligence (AI) is an implement like any other in a roboticist’s toolbox. The student persisted, demanding to know whether I thought the co-bots working in factories today could one day evolve to perceive the world like humans. It’s a good question that I didn’t fully appreciate at the time: even with deep learning systems, robots are best deployed for specific, repeatable tasks. Mortals, by contrast, comprehend their surroundings (and other organisms) using a sixth sense: intuition.


As an avid tennis player, I also enjoyed meeting Tennibot this week. The autonomous ball-gathering robot sweeps the court like a Roomba sucking up dust off a rug. To accomplish this task without knocking over players, it navigates around the cage using six cameras on each side. This is a perfect example of the type of job an unmanned system excels at performing, freeing athletes from wasting precious court time on tedious cleanup. Yet at the end of the day, Tennibot is a dumb appliance. While it gobbles up balls faster than any person, it is unable to discern the quality of the game or the health of the players.

No one expects Tennibot to save Roger Federer’s life, but what happens when a person has a heart attack inside a self-driving car on a two-hour journey? While autonomous vehicles are packed with sensors to identify and safely steer around cities and highways, few are able to perceive human intent. As Ann Cheng of Hyundai explains, “We [drivers] think about what that other person is doing or has the intent to do. We see a lot of AI companies working on more classical problems, like object detection [or] object classification. Perceptive is trying to go one layer deeper—what we do intuitively already.” Hyundai joined Jim Adler’s Toyota AI Ventures this month in investing in Perceptive Automata, an “intuitive self-driving system that is able to recognize, understand, and predict human behavior.”


As stated in Adler’s Medium post, Perceptive’s technology uses “behavioral science techniques to characterize the way human drivers understand the state-of-mind of other humans and then train deep learning models to acquire that human ability. These deep learning models are designed for integration into autonomous driving stacks and next-generation driver assistance systems, sandwiched between the perception and planning layers. These deep learning, predictive models provide real-time information on the intention, awareness, and other state-of-mind attributes of pedestrians, cyclists and other motorists.”
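To make the “sandwiched between the perception and planning layers” idea concrete, here is a minimal Python sketch of how a state-of-mind module could slot into a driving stack. Everything in it (the class names, fields, thresholds, and the intent_model interface) is my own illustrative assumption, not Perceptive Automata’s actual software.

```python
# Illustrative sketch only: where a "state-of-mind" model might sit between the
# perception and planning layers of a driving stack. Every class, field, and
# the intent_model interface here is a hypothetical example.
from dataclasses import dataclass
from typing import Any, List


@dataclass
class DetectedAgent:
    """Output of the perception layer for one pedestrian, cyclist, or motorist."""
    agent_id: int
    agent_type: str      # e.g. "pedestrian", "cyclist", "vehicle"
    position: tuple      # (x, y) in the ego vehicle's frame, in meters
    image_crop: Any      # camera crop of the agent, fed to the intent model


@dataclass
class StateOfMind:
    """Real-time state-of-mind attributes attached to a detected agent."""
    agent_id: int
    intends_to_cross: float     # probability the agent will enter the ego path
    is_aware_of_vehicle: float  # probability the agent has noticed the car


def annotate_agents(agents: List[DetectedAgent], intent_model: Any) -> List[StateOfMind]:
    """Run a (hypothetical) intent model on each perceived agent so the planner
    can weigh intention and awareness, not just position and velocity."""
    states = []
    for agent in agents:
        intent, awareness = intent_model.predict(agent.image_crop, agent.agent_type)
        states.append(StateOfMind(agent.agent_id, intent, awareness))
    return states


def plan_speed(base_speed_mps: float, states: List[StateOfMind]) -> float:
    """Toy planning rule: crawl when someone intends to cross but may not have
    seen the vehicle; otherwise keep the planned speed."""
    for s in states:
        if s.intends_to_cross > 0.5 and s.is_aware_of_vehicle < 0.5:
            return min(base_speed_mps, 2.0)
    return base_speed_mps
```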

While Perceptive Automata is creating “predictive models” for outside the vehicle, few companies are focused on the conditions inside the cabin. The closest implementations are a number of eye-tracking cameras that alert occupants to distracted driving. While these technologies observe the general conditions of passengers, they rely on direct eye contact to distinguish between states such as fatigue, excitability, and stress, which is impossible if one is passed out. Furthermore, none of these vision systems has the ability to predict human actions before they become catastrophic.

Isaac Litman, formerly of Mobileye, understands full well the dilemma computer vision systems face in delivering on the promise of autonomous travel. When I spoke with Litman this week about his newest venture, Neteera, he declared that in today’s automotive landscape “the only unknown variable is the human.” Unfortunately, the recent wave of Tesla and Uber autopilot crashes has glaringly illustrated the importance of tracking the attention of vehicle occupants when handing off between autopilot systems and human drivers. Litman further explains that Waymo and others are collecting data on occupant comfort, as AI-enabled drivers have reportedly led to high levels of nausea from driving too consistently. Litman describes this as the indigestion problem, clarifying that after eating a big meal one may want to drive more slowly than on an empty stomach. Litman professes that in the future autonomous cars will be marketed “not by the performance of their engines, but on the comfort of their rides.”


Litman’s view is further endorsed by the recent patent application filed this summer by Apple’s Project Titan team for developing “Comfort Profiles” for autonomous driving. According to AppleInsider, the application “describes how an autonomous driving and navigation system can move through an environment, with motion governed by a number of factors that are set indirectly by the passengers of the vehicle.” The Project Titan system would utilize a fusion of sensors (LIDAR, depth cameras, and infrared) to monitor the occupants’ “eye movements, body posture, gestures, pupil dilation, blinking, body temperature, heart beat, perspiration, and head position.” The application details how the data would be integrated into the vehicle’s systems to automatically adjust acceleration, turning rate, performance, suspension, traction control, and other factors to the personal preferences of the riders. While Project Titan is taking the first step toward developing an autonomous comfort system, Litman notes that it is limited by the inherent shortcomings of vision-based systems, which are susceptible to light, dust, line of sight, condensation, motion, resolution, and safety concerns.
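As a rough illustration of how such a comfort profile might work, the sketch below maps a handful of monitored occupant signals to ride-parameter limits. The signal names, thresholds, and adjustment rules are hypothetical examples of mine, not details from Apple’s filing.

```python
# Illustrative sketch: mapping monitored occupant signals to ride adjustments,
# in the spirit of the "Comfort Profile" idea described in the patent filing.
# Signal names, thresholds, and adjustment rules are hypothetical examples.
from dataclasses import dataclass


@dataclass
class OccupantSignals:
    heart_rate_bpm: float
    blink_rate_hz: float
    posture_shift_rate: float   # how often the occupant changes position
    skin_temp_c: float


@dataclass
class RideSettings:
    max_accel_mps2: float
    max_turn_rate_dps: float
    suspension_firmness: float  # 0.0 (soft) .. 1.0 (firm)


def adjust_for_comfort(signals: OccupantSignals, current: RideSettings) -> RideSettings:
    """Toy rule: if the occupant shows signs of discomfort (elevated heart rate,
    frequent posture shifts), soften the ride and limit acceleration."""
    uncomfortable = signals.heart_rate_bpm > 100 or signals.posture_shift_rate > 0.5
    if uncomfortable:
        return RideSettings(
            max_accel_mps2=min(current.max_accel_mps2, 1.5),
            max_turn_rate_dps=min(current.max_turn_rate_dps, 10.0),
            suspension_firmness=max(current.suspension_firmness - 0.2, 0.0),
        )
    return current
```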

Unlike vision sensors, Neteera is a cost-effective micro-radar on a chip that leverages its own network of proprietary algorithms to provide “the first contact free vital sign detection platform.” Its FDA-level accuracy is being utilized not only by the automotive sector but also by healthcare systems across the United States for monitoring such elusive conditions as sleep apnea and sudden infant death syndrome. To date, the challenge of monitoring vital signs through micro-skin motion in the automotive industry has been the displacement caused by a moving vehicle. However, Litman’s team has developed a patent-pending “motion compensation algorithm” that tracks “quasi-periodic signals in the presence of massive random motions,” providing near-perfect accuracy (see tables below).

[Tables: accuracy of motion-compensated vital-sign measurements]
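For intuition about what tracking a quasi-periodic signal in noise involves, here is a generic, textbook-style sketch that recovers a breathing rate from a noisy displacement trace via a spectral-peak search. It is only an illustration of the underlying signal-processing idea; it is not Neteera’s patent-pending motion-compensation algorithm, and the band limits and sampling rate are arbitrary example values.

```python
# Illustrative sketch only: recovering a quasi-periodic vital-sign rate (e.g.
# respiration) from a noisy displacement signal with a simple spectral-peak
# search. This is a generic approach, not Neteera's proprietary algorithm.
import numpy as np


def estimate_rate_hz(displacement: np.ndarray, fs_hz: float,
                     band=(0.1, 0.5)) -> float:
    """Return the dominant frequency within a physiological band.

    displacement: micro-motion samples (arbitrary units)
    fs_hz: sampling rate in Hz
    band: frequency band to search, e.g. 0.1-0.5 Hz for breathing
    """
    x = displacement - displacement.mean()          # remove DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs_hz)
    mask = (freqs >= band[0]) & (freqs <= band[1])  # restrict to the band
    return float(freqs[mask][np.argmax(spectrum[mask])])


# Example: a 0.25 Hz breathing motion buried in random noise
fs = 20.0
t = np.arange(0, 60, 1.0 / fs)
signal = 0.5 * np.sin(2 * np.pi * 0.25 * t) + np.random.randn(t.size)
print(estimate_rate_hz(signal, fs) * 60, "breaths per minute (approx.)")
```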

While the automotive industry races to launch fleets of autonomous vehicles, Litman estimates that the most successful players will be the ones that install empathic engines into their machines’ frameworks. Unlike the crowded field of AI and computer vision startups enabling robocars to safely navigate city streets, Neteera is probably one of the only mechatronic ventures whose “intuition on a chip” actually reports on the psychological state of drivers and passengers. Litman’s innovation has wider societal implications as social robots begin to augment humans in the workplace and support the infirm and elderly in coping with the fragility of life.

As scientists improve artificial intelligence, it is still unclear how ordinary people will react to such “emotional” robots. In the words of writer Adam Williams, “Emotion is something we reserve for ourselves: depth of feeling is what we use to justify the primacy of human life. If a machine is capable of feeling, that doesn’t make it dangerous in a Terminator-esque fashion, but in the abstract sense of impinging on what we think of as classically human.”




Oliver Mitchell is the Founding Partner of Autonomy Ventures, a New York-based venture capital firm focused on seed-stage investments in robotics.




