Teleoperating robots with virtual reality


by Rachel Gordon | 12 October 2017



Consisting of a headset and hand controllers, CSAIL’s new VR system enables users to teleoperate a robot using an Oculus Rift headset.
Photo: Jason Dorfman/MIT CSAIL

Certain industries have traditionally not had the luxury of telecommuting. Many manufacturing jobs, for example, require a physical presence to operate machinery.

But what if such jobs could be done remotely? Last week researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) presented a virtual reality (VR) system that lets you teleoperate a robot using an Oculus Rift headset.

The system embeds the user in a VR control room with multiple sensor displays, making it feel like they’re inside the robot’s head. By using hand controllers, users can match their movements to the robot’s movements to complete various tasks.

“A system like this could eventually help humans supervise robots from a distance,” says CSAIL postdoc Jeffrey Lipton, who was the lead author on a related paper about the system. “By teleoperating robots from home, blue-collar workers would be able to tele-commute and benefit from the IT revolution just as white-collar workers do now.”

The researchers even imagine that such a system could help employ increasing numbers of jobless video-gamers by “gameifying” manufacturing positions.

The team used the Baxter humanoid robot from Rethink Robotics, but said the system can work with other robot platforms and is also compatible with the HTC Vive headset.

Lipton co-wrote the paper with CSAIL Director Daniela Rus and researcher Aidan Fay. They presented the paper at the recent IEEE/RSJ International Conference on Intelligent Robots and Systems in Vancouver.

There have traditionally been two main approaches to using VR for teleoperation.

In a direct model, the user’s vision is directly coupled to the robot’s state. With these systems, a delayed signal could lead to nausea and headaches, and the user’s viewpoint is limited to one perspective.

In a cyber-physical model, the user is separate from the robot and interacts with a virtual copy of the robot and its environment. This approach requires much more data, as well as specialized spaces.

The CSAIL team’s system is halfway between these two methods. It solves the delay problem, since the user is constantly receiving visual feedback from the virtual world. It also solves the cyber-physical issue of being distinct from the robot: Once a user puts on the headset and logs into the system, they’ll feel as if they’re inside Baxter’s head.

The system mimics the homunculus model of the mind — the idea that there’s a small human inside our brains controlling our actions, viewing the images we see, and understanding them for us. While it’s a peculiar idea for humans, for robots it fits: Inside the robot is a human in a virtual control room, seeing through its eyes and controlling its actions.

Using Oculus’ controllers, users can interact with controls that appear in the virtual space to open and close the hand grippers to pick up, move, and retrieve items. A user can plan movements based on the distance between the arm’s location marker and their hand while looking at the live display of the arm.

To make these movements possible, the human’s space is mapped into the virtual space, and the virtual space is then mapped into the robot space to provide a sense of co-location.
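The paper’s implementation isn’t reproduced here, but the chain of mappings can be sketched in a few lines. In the sketch below, the frame names and the two calibration transforms are illustrative assumptions rather than the authors’ code: a controller pose measured in the human’s tracking frame is first expressed in the virtual control room, then re-expressed in the robot’s base frame before being used as an end-effector target.

    import numpy as np

    def make_transform(rotation, translation):
        # Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector.
        T = np.eye(4)
        T[:3, :3] = rotation
        T[:3, 3] = translation
        return T

    # Illustrative calibration transforms (assumed values, not CSAIL's):
    # human tracking frame -> virtual control room, and virtual room -> robot base.
    T_virtual_from_human = make_transform(np.eye(3), np.array([0.0, 0.0, 1.2]))
    T_robot_from_virtual = make_transform(np.eye(3), np.array([0.5, 0.0, -0.3]))

    def hand_pose_to_robot_target(T_human_hand):
        # Compose the mappings: human space -> virtual space -> robot space.
        return T_robot_from_virtual @ T_virtual_from_human @ T_human_hand

    # Example: a controller held 40 cm in front of the user's tracking origin.
    T_hand = make_transform(np.eye(3), np.array([0.0, 0.4, 0.0]))
    print(hand_pose_to_robot_target(T_hand))

Composing fixed transforms this way is what gives the user a stable sense of co-location: moving a hand in the room corresponds to a predictable motion of the robot’s arm.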

The system is also more flexible than previous systems, which require many resources. Other systems might extract 2-D information from each camera, build out a full 3-D model of the environment, and then process and redisplay the data. In contrast, the CSAIL team’s approach bypasses all of that by simply taking the 2-D images that are displayed to each eye. (The human brain does the rest by automatically inferring the 3-D information.)
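As a rough illustration of that pass-through idea (the camera and display objects below are hypothetical stand-ins, not the system’s actual interfaces), each eye’s view is simply the corresponding camera frame, with no depth reconstruction in between.

    import numpy as np

    class DummyCamera:
        # Stand-in for one of the robot's two cameras; yields blank 640x480 frames.
        def grab_frame(self):
            return np.zeros((480, 640, 3), dtype=np.uint8)

    def compose_stereo_view(left_cam, right_cam):
        # Each eye receives its camera's raw 2-D image; no 3-D model is built.
        return left_cam.grab_frame(), right_cam.grab_frame()

    left_img, right_img = compose_stereo_view(DummyCamera(), DummyCamera())
    print(left_img.shape, right_img.shape)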

To test the system, the team first teleoperated Baxter to do simple tasks like picking up screws or stapling wires. They then had the test users teleoperate the robot to pick up and stack blocks.

Users successfully completed the tasks at a much higher rate compared to the direct model. Unsurprisingly, users with gaming experience had much more ease with the system.

Tested against current state-of-the-art systems, CSAIL’s system was better at grasping objects 95 percent of the time and 57 percent faster at doing tasks. The team also showed that the system could pilot the robot from hundreds of miles away; testing included controlling Baxter at MIT from a hotel’s wireless network in Washington.

“This contribution represents a major milestone in the effort to connect the user with the robot’s space in an intuitive, natural, and effective manner,” says Oussama Khatib, a computer science professor at Stanford University who was not involved in the paper.

The team eventually wants to focus on making the system more scalable, supporting many users and different types of robots, and making it compatible with current automation technologies.

The project was funded, in part, by the Boeing Company and the National Science Foundation.




MIT News




