Robohub.org
 

MIT robot helps nurses schedule tasks on labor floor


11 July 2016




Robot from Computer Science and Artificial Intelligence Lab suggests where to move patients and who should do C-sections.

By: Adam Conner-Simons

Today’s robots are awkward co-workers because they are often unable to predict what humans need. In hospitals, robots are employed to perform simple tasks like delivering supplies and medications, but they have to be explicitly told what to do.

A team from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) thinks that this will soon change, and that robots might be most effective by helping humans perform one of the most complex tasks of all: scheduling.

In a pair of new papers, CSAIL researchers demonstrate a robot that, by learning from human workers, can help assign and schedule tasks in fields ranging from medicine to the military.

In one paper, the team demonstrated a robot that assisted nurses in a labor ward, where it made recommendations on everything from where to move a patient to which nurse to assign to a C-section.

“The aim of the work was to develop artificial intelligence that can learn from people about how the labor and delivery unit works, so that robots can better anticipate how to be helpful or when to stay out of the way – and maybe even help by collaborating in making challenging decisions,” says MIT professor Julie Shah, the senior author on both papers.

In a second paper, the same system was put to the test in a videogame that simulates missile-defense scenarios. In the game, which was developed by Lincoln Laboratory researchers and involves using decoy missiles to ward off enemy attacks, the system even occasionally outperformed human experts at reducing both the number of missile attacks and the overall cost of decoys.

The labor-ward paper was presented at the recent Robotics: Science and Systems (RSS) Conference and was co-written by PhD student Matthew Gombolay, CSAIL postdocs Xi Jessie Yang and Brad Hayes, and Dr. Neel Shah and Toni Golen from Beth Israel Deaconess Medical Center, which is where the study took place.

The Navy-simulation paper is being presented at this week’s International Joint Conference on Artificial Intelligence (IJCAI), and was co-written by Gombolay and Lincoln Laboratory’s Reed Jensen, Jessica Stigile, and Sung-Hyun Son.

Right on schedule

From visiting hospitals and factories, Shah and Gombolay found that a subset of workers are extremely strong schedulers, but can’t easily transfer that knowledge to colleagues.

“Figuring out what makes certain people good at this often seems like a mystery,” Gombolay says. “Being able to automate the task of learning from experts – and to then generalize it across industries – could help make many businesses run more efficiently.”

Hospitals are a particularly tough environment for scheduling. A labor ward’s head nurses have to predict when a woman in labor will arrive, how long her labor will take, and which patients will become sick enough to require C-sections or other procedures.

They are deluged with an endless stream of challenging split-second decisions that include assigning nurses to patients, patients to beds, and technicians to surgeries. At Beth Israel, the head nurse has to coordinate 10 nurses, 20 patients and 20 rooms at the same time, meaning that the number of distinct scheduling possibilities adds up to a staggering 2 to the one millionth power, which is more than the number of atoms in the universe.
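To get a feel for that scale, even the static pairings alone compound multiplicatively. Below is a back-of-the-envelope count using the article’s numbers (10 nurses, 20 patients, 20 rooms); the counting scheme is an illustration, and the full time-sequenced scheduling problem the article describes is vastly larger still:

```python
import math

# Illustrative back-of-the-envelope count of static assignment
# combinations, using the article's figures (10 nurses, 20 patients,
# 20 rooms). The real problem also sequences decisions over time,
# which is what pushes the count toward 2^1,000,000.
nurses, patients, rooms = 10, 20, 20

# If each patient could in principle be paired with any nurse and any
# room, the per-patient choices multiply together.
pairings = (nurses * rooms) ** patients

print(f"{pairings:.3e} static nurse/room pairings")
print(f"roughly 2^{math.log2(pairings):.0f}")
```

Even this simplified count (about 10^46 combinations) rules out exhaustive search, which is why a policy learned from expert intuition is attractive.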

“We thought a complex environment like a labor ward would be a good place to try to automate scheduling and take this significant burden off of workers,” says Gombolay.

How it works

Like many AI systems, the team’s robot was trained via “learning from demonstration,” which involves observing humans as they perform tasks. But Gombolay says that researchers had never been able to apply this technique to scheduling, because of the complexity of coordinating multiple actions that are highly dependent on one another.

To overcome this, the team trained its system to look at each action a human scheduler takes and compare it against all the possible actions that were available, but not taken, at that moment. From there, it developed a scheduling policy that can respond dynamically to new situations it has not seen before.

“Rather than considering actions in isolation of each other, we crafted a model that understands why one action is better than the alternatives,” says Shah. “By considering all such comparisons, you can learn to recommend which action will be most helpful.”
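The "compare the chosen action to its alternatives" idea can be sketched as a simple ranking learner: adjust a scoring function until the expert’s choice outranks every alternative at each decision point. Everything below — the feature names, the toy data, and the perceptron-style update — is an illustrative stand-in, not the CSAIL team’s actual model:

```python
# Minimal sketch of learning a scheduling policy from pairwise
# comparisons (a ranking-perceptron update). Features and data are
# hypothetical; this is not the paper's actual model.

def featurize(action):
    # Hypothetical features of a scheduling action: the assigned
    # nurse's current workload, room availability, patient urgency.
    return [action["workload"], action["room_free"], action["urgency"]]

def score(w, action):
    return sum(wi * xi for wi, xi in zip(w, featurize(action)))

def train(demonstrations, epochs=50, lr=0.1):
    # Each demonstration pairs the expert's chosen action with the
    # alternatives that were available but not taken at that moment.
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for chosen, alternatives in demonstrations:
            for alt in alternatives:
                # If an alternative scores at least as high as the
                # expert's choice, nudge the weights toward the choice.
                if score(w, alt) >= score(w, chosen):
                    fc, fa = featurize(chosen), featurize(alt)
                    w = [wi + lr * (c - a) for wi, c, a in zip(w, fc, fa)]
    return w

demonstrations = [
    ({"workload": 0.2, "room_free": 1.0, "urgency": 0.9},   # expert's pick
     [{"workload": 0.8, "room_free": 0.0, "urgency": 0.9},  # not taken
      {"workload": 0.5, "room_free": 1.0, "urgency": 0.1}]),
]

w = train(demonstrations)
# After training, the expert's choice outranks both alternatives.
```

The learned weights then score any new, unseen action — which is how a policy trained this way can generalize to situations that never appeared in the demonstrations.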

The policy is “model-free,” meaning that the nurses do not have to train the robot by painstakingly ranking each possible action in each possible scenario by hand.

“You can put the robot on the labor floor, and, just by watching humans doing the different tasks, it will understand how to coordinate an efficient schedule,” says Gombolay.

The results

With this framework, the system – which the team has dubbed “apprenticeship scheduling” – can anticipate room assignments and suggest which nurses to assign to patients for C-sections and other procedures.

The approach was evaluated through experiments in which a robot provided decision-support to nurses and doctors as they made decisions on a labor floor.

Running the system on a Nao robot, the team found that nurses accepted the robot’s recommendations 90 percent of the time. The team also demonstrated that the subjects weren’t just blindly accepting advice: when the robot deliberately offered bad recommendations, nurses rejected them at that same 90 percent rate, evidence that they were evaluating the advice on its merits rather than deferring to the robot.

Nurses had almost uniformly positive feedback about the robot. One said that it would “allow for a more even dispersion of workload,” while another said that it would be particularly helpful for “new nurses [who] may not understand the constraints and complexities of the role.”

“A great potential of this technology is that good solutions can be spread more quickly to many hospitals and workplaces,” says Dana Kulic, an associate professor of computer engineering at the University of Waterloo. “For example, innovative improvements can be distributed rapidly from research hospitals to regional health centres.”

Shah says that the new techniques have many uses, from turning robots into better collaborators to helping train new nurses, but the goal is not to develop robots that fully make decisions on their own.

“These initial results show there is tremendous potential for machines to collaborate with us in rich ways that will enhance many sectors of the economy,” says Shah. “The awkward robots of the past will be replaced by valued team members.”

The RSS paper was supported by the National Science Foundation, CRICO Harvard Risk Management Foundation, and Aldebaran Robotics Inc. The IJCAI paper was supported by the National Science Foundation and the U.S. Navy.

Read the paper here. 





CSAIL MIT: The Computer Science and Artificial Intelligence Laboratory – known as CSAIL – is the largest research laboratory at MIT and one of the world’s most important centers of information technology research.




