Robohub.org
 

Teaching a robot to ‘cook’ by showing it YouTube videos of cooking shows

by Yezhou Yang, Yi Li, Cornelia Fermüller and Yiannis Aloimonos
14 January 2015



University of Maryland computer scientist Yiannis Aloimonos (center) is developing robotic systems able to visually recognize objects and generate new behavior based on those observations. Photo credit: John T. Consoli


Most of us have at some point watched a video in order to learn how to do something. Our research shows that a robot can learn human actions by watching videos of humans performing those actions – in this case, learning to cook by watching cooking shows on YouTube.

While it might be easy for a human to learn from watching a video, the problem is very challenging for a robot. This is because humans perform a large variety of actions that can involve various objects, body parts and movements that all have to be visually recognized by the robot, even when hands and objects might obscure parts of the video scene, as happens in the video below:

Several research groups have looked at the problem of teaching robots actions, and most have concentrated on imitating or copying the actual movement. In contrast, we model actions by the goals they achieve, in order to accommodate the many different ways a manipulation can be performed. This introduces a set of basic constraints on the sequence of movements.

For example, if you want to cut a cucumber, first you must grasp the knife, then bring the knife to the cucumber, then perform the cutting action, and every cut separates a piece from the main cucumber. This set of rules can be expressed in the form of a context-free grammar (see Fig.1). This is a minimalist action grammar (a simple grammar with a small set of rules), and its symbols correspond to meaningful chunks of the observed video. Thus, interpreting the action in a video is like understanding a sentence that we read or hear.
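As a toy illustration, the grammar-checking step can be sketched in a few lines of code. The symbols and rules below are simplified stand-ins of our own invention, not the paper's actual grammar:

```python
# A minimal action grammar in the spirit of Fig. 1: an ACTION expands
# into a grasp, a tool, a movement, and an object. All symbol names
# here are hypothetical placeholders for illustration.
GRAMMAR = {
    "ACTION": [["GRASP", "TOOL", "MOVE", "OBJECT"]],
    "GRASP":  [["power-grasp"], ["precision-grasp"]],
    "TOOL":   [["knife"], ["spoon"]],
    "MOVE":   [["cut"], ["stir"]],
    "OBJECT": [["cucumber"], ["bowl"]],
}

def parses(symbol, tokens):
    """Yield the leftover tokens for every way `symbol` can match a prefix."""
    for production in GRAMMAR.get(symbol, []):
        remainders = [tokens]
        for part in production:
            remainders = [r for rem in remainders for r in parses(part, rem)]
        yield from remainders
    # Terminal symbol: consume one matching token.
    if symbol not in GRAMMAR and tokens and tokens[0] == symbol:
        yield tokens[1:]

def is_valid_action(tokens):
    """An action is valid if ACTION derives exactly this token sequence."""
    return any(rest == [] for rest in parses("ACTION", tokens))
```

With this sketch, the cucumber example parses as a valid action (grasp the knife, then cut the cucumber), while an out-of-order sequence such as cutting before grasping does not.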

Fig.1. A grammar encodes actions by representing the underlying goals. Its symbols are the objects, tools, movements, and grasp types, which are obtained from video.


In other words, to parse the video into the primitive actions that constitute complex tasks, we need to map chunks of the video to particular symbols involving objects, tools, and movement; i.e. we need to segment the video into meaningful parts.
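To give a flavour of what this segmentation involves, here is a minimal sketch that collapses a stream of per-frame labels into meaningful chunks. The labels are invented for illustration; a real system derives them from visual recognition:

```python
from itertools import groupby

def segment(frame_labels):
    """Collapse runs of identical per-frame labels into
    (label, start_frame, end_frame) segments."""
    segments, frame = [], 0
    for label, run in groupby(frame_labels):
        length = len(list(run))
        segments.append((label, frame, frame + length - 1))
        frame += length
    return segments

# Hypothetical per-frame recognition output for a short clip.
frames = ["idle"] * 3 + ["grasp-knife"] * 4 + ["cut-cucumber"] * 5
print(segment(frames))
# [('idle', 0, 2), ('grasp-knife', 3, 6), ('cut-cucumber', 7, 11)]
```

Each segment then corresponds to one symbol that the action grammar can consume.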

Our paper introduces a new idea that enables this segmentation of the visual data.

University of Maryland researcher Cornelia Fermuller (left) works with graduate student Yezhou Yang (right) on computer vision systems able to accurately identify and replicate intricate hand movements. Photo credit: John T. Consoli


Important locations in the video are the ones where contact occurs: for example, the hand grasps the knife, or the knife touches the cucumber. These contact points are hard to detect directly in video, but we can predict most of them from the particular pose of the hand, i.e. the grasp type.
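One simple way to picture this prediction step is a lookup from recognized grasp type to the contact event it implies. The grasp names and contact descriptions below are hypothetical placeholders, not the categories used in the paper:

```python
# Hypothetical mapping from grasp type to the contact event it predicts.
CONTACT_FROM_GRASP = {
    "power-cylindrical": "hand contacts tool handle",
    "precision-pinch":   "fingertips contact small object",
    "extension":         "palm contacts flat object",
}

def predict_contact(grasp_type):
    """Return the contact event suggested by a recognized grasp type."""
    return CONTACT_FROM_GRASP.get(grasp_type, "no contact predicted")
```

A detected change to a "power-cylindrical" grasp, for instance, suggests the hand has just made contact with a tool handle, marking a candidate segment boundary.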

To address the second challenge – the visual recognition from video – we use “deep learning” techniques. We learn to recognize different grasp types (see Fig. 2), objects, and tools by learning the image information that is most relevant for the recognition. By finding the objects and the grasp type, and the times when they change in the video, we can segment and parse the video into a set of primitive actions that the robot can then execute.
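The end result can be imagined as a short program for the robot, assembled from the parsed symbols. The command names below are hypothetical, not our robot's actual interface:

```python
# Sketch: turning parsed (grasp, tool, movement, object) tuples into a
# sequence of primitive robot commands (command names are hypothetical).
def to_commands(parsed_actions):
    commands = []
    for grasp, tool, move, obj in parsed_actions:
        commands.append(f"GRASP({tool}, {grasp})")    # pick up the tool
        commands.append(f"MOVE_TO({tool}, {obj})")    # bring tool to object
        commands.append(f"EXECUTE({move}, {obj})")    # perform the movement
    return commands

plan = to_commands([("power-grasp", "knife", "cut", "cucumber")])
# ['GRASP(knife, power-grasp)', 'MOVE_TO(knife, cucumber)',
#  'EXECUTE(cut, cucumber)']
```

Note how the command order mirrors the constraints encoded in the grammar: grasp the knife, bring it to the cucumber, then cut.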

In our experiments we used YouTube videos from cooking shows, and the result is that the robot “learns how to cook”. However, the approach generalizes: the same ideas can be used to learn other kinds of actions and manipulations, such as hammering, screwing, or assembling.

Fig.2. The different grasp types.

We are grateful to our research sponsors, specifically the MSEE DARPA Program, The European Union Cognitive Systems and Robotics Program and the NSF INSPIRE Program. The research is the result of a collaborative activity between the Computer Vision Laboratory (CVL) in the Institute for Advanced Computer Studies (UMIACS) and the Autonomy, Robotics and Cognition Laboratory (ARC) in the Institute for Systems Research, both at the University of Maryland, College Park. NICTA is funded by the Australian Government through the ICT Centre of Excellence program.

You can find a PDF of the original research paper here.






Yezhou Yang is a PhD student in Computer Science at the University of Maryland at College Park.

Yi Li is a senior researcher at NICTA in the Visual Processing group, where he develops algorithms for Computer Vision.

Cornelia Fermüller is a Research Scientist at the University of Maryland Institute for Advanced Computer Studies.

Yiannis Aloimonos is a Professor of Computational Vision and Intelligence at the Department of Computer Science of the University of Maryland at College Park and the Director of the Computer Vision Laboratory at the Institute for Advanced Computer Studies (UMIACS).






©2021 - ROBOTS Association


 











