Dr Rustam Stolkin and robots that learn: Nuclear robotics meets machine learning

27 April 2017


Heavy Water Components Test Reactor decommissioning.

How can we create robots that can carry out important tasks in dangerous environments? Machine learning is supporting advances in the field of robotics. To find out more, we talked to Dr Rustam Stolkin, Royal Society Industry Fellow for Nuclear Robotics, Professor of Robotics at the University of Birmingham, and Director at A.R.M Robotics Ltd, about his work combining machine learning and robotics to create practical solutions to nuclear problems.

What drew you to engineering and robotics?

There are many definitions of engineering, but the one I like is “the creation of artefacts for the benefit of mankind”. Engineering is a way of using science to be creative and to create novel technologies which can bring major societal and economic benefit.

Dr Rustam Stolkin. Source: University of Birmingham/YouTube

Robotics, in particular, attracted me because robots are inherently inter-disciplinary: they use mechanics and electronics, as well as computational elements, such as machine learning and artificial intelligence. This has allowed me to become an “expert generalist”; I specialise in being extremely cross-disciplinary. At the University of Birmingham, I have research collaborations spanning eight different schools across three colleges on the campus.

Tell us about your research.

My work focuses on developing advanced robotics technologies for nuclear decommissioning: demolishing legacy nuclear sites, and safely packaging, storing and monitoring any radiological or contaminated materials. The UK contains 4.9 million tonnes of legacy nuclear waste, mainly by-products of weapons production during the Cold War, alongside a smaller contribution from civilian nuclear electricity generation. Cleaning up the UK’s legacy waste is the biggest environmental remediation project in Europe; it could take over 100 years and cost up to £220 billion.

Factories are highly structured environments, where robots can be pre-programmed to perform repetitive motions. This has underpinned the revolution in industrial robotics since the 1980s. In contrast, there has been surprisingly little use of robots in the nuclear industry, aside from a small number of robotic vehicles deployed after the Fukushima disaster, and a few bespoke robotic manipulators controlled in rudimentary ways by joystick teleoperation.

This is in part because legacy nuclear sites, some of them over 60 years old, are highly complex and unknown environments, such as contaminated buildings that have been sealed off for decades. They have an enormous diversity of scenes, materials, and objects that nuclear decommissioning robots must interact with in complex ways, such as by grasping and cutting objects.

We have only recently discovered how to use state-of-the-art machine learning methods to help robots learn adaptive and transferable skills which they can apply in new environments. This allows them to perceive and handle new kinds of objects which they have not seen before. For example, using machine learning, a robot can be trained how to grasp and pick up one object, and then use this knowledge to adapt the grasp for a new object with a different shape.
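The idea of transferring a learned grasp to a new shape can be illustrated with a toy sketch. The `Grasp` class and the scaling rule below are hypothetical stand-ins, assuming grasp parameters can be adjusted from simple geometric features; real systems learn far richer mappings, but the principle of reusing a known grasp for new geometry is the same.

```python
from dataclasses import dataclass

@dataclass
class Grasp:
    width: float   # gripper opening in metres
    approach: str  # approach direction, e.g. "top"

def adapt_grasp(known_grasp: Grasp, known_width: float, new_width: float) -> Grasp:
    """Scale a grasp learned on one object to an object of a different width.

    Hypothetical sketch: real grasp-transfer systems regress grasp
    parameters from learned shape features rather than a single ratio.
    """
    scale = new_width / known_width
    return Grasp(width=known_grasp.width * scale, approach=known_grasp.approach)

# A grasp learned on a 6 cm object, adapted to a 9 cm object:
learned = Grasp(width=0.07, approach="top")
adapted = adapt_grasp(learned, known_width=0.06, new_width=0.09)
print(round(adapted.width, 3))  # 0.105
```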

Recent “deep learning” approaches may offer part of the solution to navigating these complex environments. However, such approaches rely on large amounts of “training data” – labelled examples, such as annotated images of objects – used to train the robot so that it can recognise similar objects in future. It is still unclear how to generate sufficient training data to enable robots to learn useful behaviours in complex real environments.
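One common response to scarce training data is augmentation: generating extra labelled examples by transforming the ones you have. The minimal sketch below (flips and quarter-turn rotations of a tiny image represented as a list of rows) is a hypothetical illustration, not any particular system's pipeline.

```python
import random

def augment(image, label, n=4, seed=0):
    """Generate n augmented copies of a labelled example using random
    horizontal flips and 90-degree rotations. The label is unchanged,
    since flipping or rotating a photo of metal still shows metal."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        img = image
        if rng.random() < 0.5:
            img = [row[::-1] for row in img]          # horizontal flip
        for _ in range(rng.randrange(4)):             # 0-3 quarter turns
            img = [list(r) for r in zip(*img[::-1])]  # rotate 90 degrees
        out.append((img, label))
    return out

tiny = [[0, 1], [2, 3]]
samples = augment(tiny, "metal")
print(len(samples))  # 4
```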

Why do we need robots to carry out nuclear decommissioning?

Contaminated materials are currently decommissioned by human workers in air-fed plastic suits, which is not only unsustainable but can be dangerous. Using robots means humans do not need to enter highly hazardous work environments, and means that zones that humans cannot enter (even in protective suits) due to high gamma radiation can still be decommissioned. Using robots also reduces “secondary nuclear waste”; for every one container filled with actual primary nuclear waste, more than ten containers become filled with contaminated plastic suits, respirators, rubber gloves, and other “secondary waste” from human entries.

How have you used machine learning to develop robots capable of nuclear decommissioning?

My lab has used robots to perform fully autonomous grasping and manipulation of nuclear waste-like objects and grasping of moving objects.

We also use state-of-the-art deep-learning neural networks to do real-time 3D reconstruction of a room, and simultaneously recognise and label materials such as concrete, metal, wood and fabric. This helps us develop ways for robots to characterise nuclear environments and nuclear waste objects in real-time. We are extending these methods to also combine visual camera information with other kinds of data from radiation, thermal and other sensors.
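Combining camera-derived material labels with radiation readings can be pictured as a simple late-fusion step. The function below is a hypothetical sketch, assuming a vision classifier outputs per-patch material probabilities and a detector reports counts per second for the same patch; a real characterisation system would fuse calibrated sensor models rather than apply a hard threshold.

```python
def fuse(material_probs, radiation_cps, threshold=100.0):
    """Late fusion of vision and radiation data for surface patches.

    material_probs: list of {material: probability} dicts, one per patch.
    radiation_cps:  list of radiation readings (counts/second), one per patch.
    Returns a (material, contaminated) pair per patch.
    """
    fused = []
    for probs, cps in zip(material_probs, radiation_cps):
        material = max(probs, key=probs.get)       # most likely material
        fused.append((material, cps > threshold))  # flag "hot" patches
    return fused

patches = [{"concrete": 0.8, "metal": 0.2}, {"metal": 0.9, "wood": 0.1}]
readings = [20.0, 350.0]
print(fuse(patches, readings))
# [('concrete', False), ('metal', True)]
```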

Where do you see machine learning going within your field of research?

Robotics engineers often argue that machine learning has only been used on relatively simple problems so far. Achieving machine learning of real robotic actions, with real objects in real scenes, is a significantly more complicated challenge that is only just starting to be attempted.

Innovative new approaches will emerge in the next few years for overcoming these challenges in enabling useful robotic behaviours in real environments. The next few years will see an interesting fusion of conventional approaches, such as hand-crafted robot control algorithms, with new approaches, like machine learning of robot behaviours.

Human-robot interaction, and human-AI collaborative working, will also become increasingly important. What we are working towards now is “shared control”, in which artificial intelligence and humans collaborate to control a remote robot. For example, recent research from the French institute CNRS shows human operators controlling the motion of a robot arm and gripper relative to an object, while the AI automatically aligns the gripper with the object. Simultaneously, the AI controls a second robot arm to move a camera so that it always offers the best viewpoint. Thus, human and AI collaborate to control a pair of arms for a remote handling task. In the future, this kind of human-machine collaboration could transform how we interact with nuclear and hazardous environments.
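The shared-control idea can be sketched as a linear blend between the operator's command and the AI's correction. This is a hypothetical illustration (the `alpha` weighting and velocity-vector interface are assumptions, not the CNRS system); real shared-control arbitration is richer, but the core idea of mixing the two inputs is the same.

```python
def shared_control(human_cmd, ai_correction, alpha=0.7):
    """Blend a human operator's velocity command with an AI correction.

    alpha weights the human's input; (1 - alpha) weights the AI's.
    Hypothetical linear-blend sketch of shared control.
    """
    return [alpha * h + (1 - alpha) * a
            for h, a in zip(human_cmd, ai_correction)]

# Operator drives the gripper forward; AI nudges it sideways to align.
cmd = shared_control([1.0, 0.0, 0.0], [0.8, 0.5, 0.0])
print([round(v, 2) for v in cmd])  # [0.94, 0.15, 0.0]
```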

The Royal Society is currently carrying out a major policy project on machine learning, and its impact for the UK economy and society. To find out more please visit our project page.

This post was originally published by the Royal Society.


The Royal Society The Royal Society is a Fellowship of many of the world's most eminent scientists and is the oldest scientific academy in continuous existence.

Susannah Odell works as a policy adviser at the Royal Society...


©2021 - ROBOTS Association