Robohub.org
 

Peter K. Allen: Multi-Modal Geometric Learning for Grasping | CMU RI Seminar


by John Payne
01 December 2018




Link to video on YouTube

Abstract: “In this talk, we will describe methods to enable robots to grasp novel objects using multi-modal data and machine learning. The starting point is an architecture to enable robotic grasp planning via shape completion using a single occluded depth view of objects. Shape completion is accomplished through the use of a 3D CNN. The network is trained on our open source dataset of over 440,000 3D exemplars captured from varying viewpoints. At runtime, a pointcloud captured from a single point of view is fed into the CNN, which fills in the occluded regions of the scene, allowing grasps to be planned and executed on the completed object, which extends to novel objects as well. We have extended this network to incorporate both depth and tactile information. Offline, the network is provided with both simulated depth and tactile information and trained to predict the object’s geometry, thus filling in regions of occlusion. At runtime, the network is provided a partial view of an object and exploratory tactile information is acquired to augment the captured depth information. We demonstrate that even small amounts of additional tactile information can be incredibly helpful in reasoning about object geometry. We also provide experimental results comparing grasping success using our method.”
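To make the pipeline described in the abstract concrete, below is a minimal, hedged sketch (in PyTorch) of a voxel-based shape-completion network of the kind the talk describes: a partial occupancy grid built from the single depth view, optionally augmented with a sparse grid of tactile contacts, is passed through a 3D CNN that predicts per-voxel occupancy for the full object, after which a mesh can be extracted for grasp planning. The 40-cube grid size, two-channel input, and layer widths here are illustrative assumptions, not details taken from the talk or from the authors' released code.

# Hedged sketch (not the authors' implementation): a small 3D-CNN
# shape-completion model. Inputs are binary occupancy grids from the
# single depth view and from tactile contacts; output is a per-voxel
# occupancy probability for the completed object.
import torch
import torch.nn as nn

class ShapeCompletionCNN(nn.Module):
    """Predicts a completed occupancy grid from partial depth + tactile grids."""

    def __init__(self):
        super().__init__()
        # Two input channels: voxelized depth observation and sparse tactile contacts.
        self.encoder = nn.Sequential(
            nn.Conv3d(2, 32, kernel_size=4, stride=2, padding=1),   # 40^3 -> 20^3
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=4, stride=2, padding=1),  # 20^3 -> 10^3
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1),  # 10^3 -> 20^3
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(32, 1, kernel_size=4, stride=2, padding=1),   # 20^3 -> 40^3
        )

    def forward(self, partial_depth: torch.Tensor, tactile: torch.Tensor) -> torch.Tensor:
        # partial_depth, tactile: (batch, 1, 40, 40, 40) binary occupancy grids.
        x = torch.cat([partial_depth, tactile], dim=1)
        logits = self.decoder(self.encoder(x))
        return torch.sigmoid(logits)  # per-voxel occupancy probability

# Usage: complete a single 40^3 partial view plus a handful of tactile contacts;
# a surface mesh could then be extracted (e.g. via marching cubes) for grasp planning.
model = ShapeCompletionCNN()
depth_voxels = torch.zeros(1, 1, 40, 40, 40)
tactile_voxels = torch.zeros(1, 1, 40, 40, 40)
completed = model(depth_voxels, tactile_voxels)  # shape (1, 1, 40, 40, 40)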










 
