
Teaching a brain-controlled robotic prosthetic to learn from its mistakes

by NCCR Robotics
01 October 2015



Using a BCI, the robot was able to find targets that the person could see but the robot could not (Photo: Iturrate et al., 2015).

Brain-Machine Interfaces (BMIs) — where brain waves captured by electrodes on the skin are used to control external devices such as a robotic prosthetic — are a promising tool for helping people who have lost motor control due to injury or illness. However, learning to operate a BMI can be very time consuming. In a paper published in Nature Scientific Reports, a group from CNBI, EPFL and NCCR Robotics show how their new feedback system can speed up the training process by detecting error messages from the brain and adapting accordingly.

One issue that bars the use of BMIs in everyday life for those with disabilities is the amount of time required to train users, who must learn to modulate their thought processes before their brain signals are clear enough to control an external machine. For example, to move a robotic prosthetic arm, a person must actively think about moving their arm — a thought process that uses significantly more brainpower than the subconscious thought required to move a natural arm. Furthermore, even with extensive training, users are often not able to perform complex movements.

It has been observed, however, that the brain emits very different waves when it experiences success at controlling a BMI than when it experiences failure. With this in mind, the research team developed a new feedback system that records error signals from the brain (called ‘error-related potentials’, or ErrPs) and uses these to evaluate whether or not the correct movement has been achieved. The system then adapts the movement until it finds the correct one, becoming more accurate the longer it is in use.
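
The closed loop described above can be pictured with a short sketch. The snippet below is purely illustrative and uses assumed names (ACTIONS, simulated_errp_detector, adapt_until_accepted); in the real system the error signal comes from a classifier applied to the EEG epoch recorded after each movement, not from any knowledge of the intended target.

```python
import random

# Candidate discrete movements the robot can try at each step; the names
# and the simulated detector below are assumptions for illustration only.
ACTIONS = ["left", "right", "up", "down"]
TARGET = "up"   # the movement the user wants; unknown to the robot itself

def simulated_errp_detector(action):
    """Stand-in for a trained single-trial ErrP classifier: it flags an
    error whenever the executed movement is not the intended one. A real
    detector would classify the EEG epoch recorded after the movement."""
    return action != TARGET

def adapt_until_accepted(detect_errp):
    """Discard candidate movements that evoke an error-related potential
    until one is accepted (i.e. produces no error signal)."""
    candidates = list(ACTIONS)
    while candidates:
        action = random.choice(candidates)
        print(f"Robot tries: {action}")
        if detect_errp(action):          # brain signals "that was wrong"
            candidates.remove(action)    # rule it out and try another
        else:
            return action                # no error signal: keep this one
    return None

if __name__ == "__main__":
    chosen = adapt_until_accepted(simulated_errp_detector)
    print(f"Accepted movement: {chosen}")
```

The key point is that the robot is never told the correct movement explicitly; discarding movements that evoke an error-related potential is enough for it to converge on the one the user intended, and the more it is used, the more reliable that convergence becomes.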

Schematic diagram of the new system

In order to determine each subject's ErrP, twelve subjects were asked to watch a machine perform 350 separate movements, with the machine programmed to make the wrong movement in 20% of cases. This calibration step took an average of 25 minutes. After this first training stage, each subject performed three experiments in which they attempted to locate a specific target using the robotic arm. As expected, the time taken to locate a target decreased as the experiments progressed.
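
As a rough sketch of what such a calibration stage could feed into, the snippet below trains a linear classifier to separate error trials from correct ones. The data here are random placeholders mimicking the protocol above (350 trials, roughly 20% erroneous), and the feature dimension and classifier choice are assumptions for illustration rather than the authors' actual pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Placeholder calibration data: 350 observation trials, ~20% of which are
# deliberately erroneous movements, as in the protocol described above.
# Each row stands in for features extracted from an EEG epoch time-locked
# to one movement; real features would come from the recorded EEG.
rng = np.random.default_rng(0)
n_trials, n_features = 350, 64                 # feature dimension is assumed
X = rng.standard_normal((n_trials, n_features))
y = (rng.random(n_trials) < 0.2).astype(int)   # 1 = erroneous movement

# A linear classifier is a common choice for single-trial ErrP detection.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated error/no-error accuracy: {scores.mean():.2f}")

# Fit on all calibration trials; the fitted model would then be used
# online to flag movements that the user's brain registers as errors.
clf.fit(X, y)
```

On real EEG epochs, the cross-validated score gives a sense of how reliably single-trial ErrPs can be detected before the classifier is put to work online.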

Experimental scheme: three experiments showed that the robot improved its ability to find the position of a fixed point using error-related brain activity (Iturrate et al., 2015).

 

https://youtu.be/jAtcVlTqxeA

This new approach has clear applications in the field of neuroprosthetics, particularly for people with degenerative neurological conditions whose requirements change over time. The system also has the potential to adapt itself automatically, without the need for retraining or reprogramming.

Reference

I. Iturrate, R. Chavarriaga, L. Montesano, J. Minguez and J. del R. Millán, “Teaching brain-machine interfaces as an alternative paradigm to neuroprosthetics control,” Scientific Reports, vol. 5, Article number 13893, 2015. doi:10.1038/srep13893




