Robohub.org
 

Peter K. Allen: Multi-Modal Geometric Learning for Grasping | CMU RI Seminar


by John Payne
01 December 2018




Link to video on YouTube

Abstract: “In this talk, we will describe methods to enable robots to grasp novel objects using multi-modal data and machine learning. The starting point is an architecture to enable robotic grasp planning via shape completion using a single occluded depth view of objects. Shape completion is accomplished through the use of a 3D CNN. The network is trained on our open source dataset of over 440,000 3D exemplars captured from varying viewpoints. At runtime, a pointcloud captured from a single point of view is fed into the CNN, which fills in the occluded regions of the scene, allowing grasps to be planned and executed on the completed object; this extends to novel objects as well. We have extended this network to incorporate both depth and tactile information. Offline, the network is provided with both simulated depth and tactile information and trained to predict the object’s geometry, thus filling in regions of occlusion. At runtime, the network is provided a partial view of an object, and exploratory tactile information is acquired to augment the captured depth information. We demonstrate that even small amounts of additional tactile information can be incredibly helpful in reasoning about object geometry. We also provide experimental results comparing grasping success using our method.”
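A pipeline like the one described above typically voxelizes the captured point cloud into a binary occupancy grid before feeding it to the 3D CNN, which then predicts occupancy for the occluded cells. As a rough illustration of that input step only (not the authors' actual code; the 40³ grid resolution and the helper name `voxelize` are assumptions for this sketch):

```python
import numpy as np

def voxelize(points, grid_size=40):
    """Map a point cloud of shape (N, 3) into a binary occupancy grid.

    Cells touched by at least one point are marked occupied; a shape
    completion network would then fill in the occluded cells of a grid
    like this one.
    """
    points = np.asarray(points, dtype=float)
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    span = np.maximum(hi - lo, 1e-9)  # avoid division by zero on flat axes
    # Scale each point into integer cell indices in [0, grid_size - 1].
    idx = ((points - lo) / span * (grid_size - 1)).astype(int)
    grid = np.zeros((grid_size,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Example: three points from a single (hypothetical) depth view.
cloud = [[0.0, 0.0, 0.0], [0.1, 0.2, 0.3], [0.2, 0.2, 0.2]]
grid = voxelize(cloud)
print(grid.shape, int(grid.sum()))  # (40, 40, 40) 3
```

Tactile readings can be fused into the same representation by simply marking the cells touched by the fingertip contacts as occupied before running the network, which is one reason even a handful of tactile points can sharpen the predicted geometry.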




John Payne

            AUAI is supported by:



Subscribe to Robohub newsletter on substack



Related posts:

Ultralightweight sonar plus AI lets tiny drones navigate like bats

  29 Apr 2026
Researchers develop ultrasound-based perception system inspired by bat echolocation.

Gradient-based planning for world models at longer horizons

  28 Apr 2026
What were the problems that motivated this project and what was the approach to address them?

Robot Talk Episode 153 – Origami-inspired robots, with Chenying Liu

  24 Apr 2026
In the latest episode of the Robot Talk podcast, Claire chatted to Chenying Liu from University of Oxford about how a robot's physical form can actively contribute to sensing, processing, decision-making, and movement.

Sony AI table tennis robot outplays elite human players

  22 Apr 2026
New robot and AI system has beaten professional and elite table tennis players.

AI system learns to keep warehouse robot traffic running smoothly

  20 Apr 2026
This new approach adapts to decide which robots should get the right of way at every moment, avoiding congestion and increasing throughput.

Robot Talk Episode 152 – Dexterous robot hands, with Rich Walker

  17 Apr 2026
In the latest episode of the Robot Talk podcast, Claire chatted to Rich Walker from Shadow Robot Company about their advanced robotic hands for research and industry.

What I’ve learned from 25 years of automated science, and what the future holds: an interview with Ross King

  14 Apr 2026
Ross King created the first robot scientist back in 2009. He spoke to us about the nature of scientific discovery, the role AI has to play, and his recent work in DNA computing.

Robot Talk Episode 151 – Robots to study the ocean, with Simona Aracri

  10 Apr 2026
In the latest episode of the Robot Talk podcast, Claire chatted to Simona Aracri from National Research Council of Italy about innovative robot designs for oceanography and environmental monitoring.














©2026.02 - Association for the Understanding of Artificial Intelligence