ep258

DART: Noise injection for robust imitation learning, with Michael Laskey

April 14, 2018
Toyota HSR Trained with DART to Make a Bed.

In this episode, Audrow Nash speaks with Michael Laskey, PhD student at UC Berkeley, about a method for robust imitation learning called DART. Laskey discusses how DART relates to previous imitation learning methods, how this approach has been used for folding bed sheets, and the importance of robotics leveraging theory from other disciplines.
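The core idea behind DART is to inject noise into the supervisor's actions while collecting demonstrations, so the learner sees the off-distribution states that its own errors will later produce, while still training on the supervisor's clean action labels. The toy sketch below illustrates that idea under invented assumptions: a hypothetical linear supervisor on a 2-D system with additive dynamics, and a fixed Gaussian noise level rather than the optimized noise schedule the DART paper derives. It is not the authors' implementation.

```python
import numpy as np

def supervisor_action(state):
    # Hypothetical linear supervisor policy: steer the state toward the origin.
    return -0.5 * state

def collect_demonstrations(n_trajectories=10, horizon=20, noise_std=0.3, seed=0):
    """Roll out the supervisor while injecting Gaussian noise into the
    *executed* actions (the DART-style perturbation), but record the
    noise-free supervisor action as the training label."""
    rng = np.random.default_rng(seed)
    states, labels = [], []
    for _ in range(n_trajectories):
        s = rng.normal(size=2)                   # random start in a toy 2-D system
        for _ in range(horizon):
            a = supervisor_action(s)
            states.append(s.copy())
            labels.append(a.copy())              # label is the clean action
            noisy_a = a + rng.normal(scale=noise_std, size=a.shape)
            s = s + noisy_a                      # simple additive dynamics
    return np.array(states), np.array(labels)

def behavior_clone(states, labels):
    # Least-squares fit of a linear policy a = s @ W to the demonstrations.
    W, *_ = np.linalg.lstsq(states, labels, rcond=None)
    return W

S, A = collect_demonstrations()
W = behavior_clone(S, A)   # recovers roughly -0.5 * identity on this toy problem
```

Because the noise spreads the visited states beyond the supervisor's ideal trajectory, the cloned policy is fit on a wider state distribution, which is what makes it more robust to its own compounding errors at test time.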

To learn more, see this post on Robohub from the Berkeley Artificial Intelligence Research (BAIR) Lab.

Michael Laskey

Michael Laskey is a Ph.D. Candidate in EECS at UC Berkeley, advised by Prof. Ken Goldberg in the AUTOLAB (Automation Sciences). Michael's Ph.D. research develops new algorithms for deep learning of robust robot control policies and examines how to reliably apply recent deep learning advances to scalable robot learning in challenging unstructured environments. Michael received a B.S. in Electrical Engineering from the University of Michigan, Ann Arbor. His work has been nominated for multiple best paper awards at IEEE conferences such as ICRA and CASE, and has been featured in news outlets such as MIT Tech Review and Fast Company.



about Robohub Podcast:

Robohub Podcast is a non-profit robotics podcast where we interview experts in robotics, including researchers, entrepreneurs, policy makers, and venture capitalists. Our interviewers are researchers, entrepreneurs, and engineers involved in robotics. Our interviews are technical and often get into the details of the subject at hand, but we make an effort to keep them understandable to a general audience.

