Robohub Podcast, Episode 257

Learning Robot Objectives from Physical Human Interaction with Andrea Bajcsy and Dylan P. Losey


31 March 2018






In this interview, Audrow Nash speaks with Andrea Bajcsy and Dylan P. Losey about a method that allows robots to infer a human's objectives through physical interaction. They discuss their approach, the challenges of learning complex tasks, and their experience collaborating across different universities.

Examples of people physically interacting with a robot under the more typical impedance control (left) and with Bajcsy and Losey's learning method (right).


To learn more, see this post on Robohub from the Berkeley Artificial Intelligence Research (BAIR) Lab.
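The core idea described in the BAIR post is that a physical push from the human is not a disturbance to reject but evidence about what the human actually wants: the robot shifts its reward weights toward the features of the human-corrected trajectory and away from its original plan. A minimal sketch of that online update is below; the feature function, variable names, and step size are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

def features(trajectory):
    # Illustrative hand-picked features: mean lateral offset and total
    # path length. Real systems use task-specific features (e.g.,
    # distance of a cup from the table, proximity to the human).
    traj = np.asarray(trajectory, dtype=float)
    lateral_offset = np.mean(np.abs(traj[:, 1]))
    path_length = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    return np.array([lateral_offset, path_length])

def update_weights(theta, planned, corrected, alpha=0.1):
    """Online learning from a physical correction: move the reward
    weights toward the features of the trajectory the human deformed
    the robot onto, and away from the robot's original plan."""
    return theta + alpha * (features(corrected) - features(planned))

theta = np.array([1.0, 1.0])             # initial reward weights
planned = [[0, 0], [1, 0], [2, 0]]       # robot's planned path
corrected = [[0, 0], [1, 0.5], [2, 0]]   # path after the human's push
theta_new = update_weights(theta, planned, corrected)
```

After each correction the robot replans with the updated weights, so repeated small pushes steer its behavior toward the human's objective instead of being fought as disturbances.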

Andrea Bajcsy
Andrea Bajcsy is a Ph.D. student in Electrical Engineering and Computer Sciences at the University of California, Berkeley. She received her B.S. degree in Computer Science at the University of Maryland and was awarded the NSF Graduate Research Fellowship in 2016. At Berkeley, she works in the Interactive Autonomy and Collaborative Technologies Laboratory researching physical human-robot interaction.

Dylan P. Losey


Dylan P. Losey received the B.S. degree in mechanical engineering from Vanderbilt University, Nashville, TN, USA, in 2014, and the M.S. degree in mechanical engineering from Rice University, Houston, TX, USA, in 2016.

He is currently working toward the Ph.D. degree in mechanical engineering at Rice University, where he has been a member of the Mechatronics and Haptic Interfaces Laboratory since 2014. In addition, between May and August 2017, he was a visiting scholar in the Interactive Autonomy and Collaborative Technologies Laboratory at the University of California, Berkeley. He researches physical human-robot interaction; in particular, how robots can learn from and adapt to human corrections.

Mr. Losey received an NSF Graduate Research Fellowship in 2014, and the 2016 IEEE/ASME Transactions on Mechatronics Best Paper Award as a first author.



Audrow Nash is a Software Engineer at Open Robotics and the host of the Sense Think Act Podcast.






 
