Can a brain-computer interface convert your thoughts to text?


01 November 2016




By Srividya Sundaresan, Frontiers Science Writer

Ever wonder what it would be like if a device could decode your thoughts into actual speech or written words? While such a device might enhance existing speech interfaces, it could be a game-changer for people with speech pathologies, and even more so for “locked-in” patients who lack any speech or motor function.

“So instead of saying ‘Siri, what is the weather like today’ or ‘Ok Google, where can I go for lunch?’ I just imagine saying these things,” explains Christian Herff, author of a review recently published in the journal Frontiers in Human Neuroscience.

ECoG and audio data are recorded at the same time. Speech decoding software is then used to determine the timing of vowels and consonants in the acoustic data. ECoG models are then trained for each phone individually by calculating the mean and covariance of all segments associated with that particular phone. Courtesy of Christian Herff
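The caption above compresses the training step into a sentence, so here is a minimal sketch of what "calculating the mean and covariance of all segments associated with that particular phone" could look like in Python. It assumes the ECoG segments have already been extracted as fixed-length feature vectors and aligned to phone labels; the array shapes and function name are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

def train_phone_models(segments, labels):
    """Fit one Gaussian model (mean + covariance) per phone.

    segments: (n_segments, n_features) array of ECoG feature vectors,
              each time-aligned to a phone via the simultaneous audio.
    labels:   length-n_segments sequence of phone labels (e.g. "AA", "K").
    Returns a dict mapping each phone to its fitted mean and covariance.
    """
    labels = np.asarray(labels)
    models = {}
    for phone in np.unique(labels):
        feats = segments[labels == phone]
        models[phone] = {
            "mean": feats.mean(axis=0),
            # rowvar=False: rows are observations, columns are features
            "cov": np.cov(feats, rowvar=False),
        }
    return models
```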

While reading one’s thoughts might still belong to the realms of science fiction, scientists are already decoding speech from signals generated in our brains when we speak or listen to speech.

In their review, Herff and co-author Dr. Tanja Schultz compare the pros and cons of various brain imaging techniques for capturing neural signals and decoding them to text.

The technologies range from functional MRI and near-infrared imaging, which detect neural signals based on the metabolic activity of neurons, to methods such as EEG and magnetoencephalography (MEG), which detect the electromagnetic activity of neurons responding to speech. One method in particular, electrocorticography (ECoG), showed promise in Herff’s study.

The review presents the Brain-to-Text system, in which epilepsy patients who already had electrode grids implanted for treatment of their condition read texts aloud from a screen while their brain activity was recorded. These recordings formed a database of neural signal patterns that could be matched to speech elements, or “phones”.

When the researchers also included language and dictionary models in their algorithms, they were able to decode neural signals to text with a high degree of accuracy. “For the first time, we could show that brain activity can be decoded specifically enough to use automatic speech recognition (ASR) technology on brain signals,” says Herff. “However, the current need for implanted electrodes renders it far from usable in day-to-day life.”
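To give a flavor of how language and dictionary models constrain the decoding, the hedged sketch below builds on the per-phone Gaussians from the earlier snippet: it scores one candidate word by summing per-phone ECoG log-likelihoods and adding the word’s language-model log-probability. The function name and interfaces are illustrative assumptions, a simplification of ASR-style decoding rather than the authors’ implementation.

```python
from scipy.stats import multivariate_normal

def score_word(word_phones, feature_segments, models, word_log_prior):
    """Combine ECoG evidence with a language-model prior for one word.

    word_phones:      phones spelling the word (from the dictionary model)
    feature_segments: one ECoG feature vector per phone position
    models:           per-phone Gaussians from train_phone_models()
    word_log_prior:   log-probability of the word under the language model
    """
    log_likelihood = 0.0
    for segment, phone in zip(feature_segments, word_phones):
        m = models[phone]
        # Gaussian log-likelihood of this ECoG segment under the phone model
        log_likelihood += multivariate_normal.logpdf(
            segment, mean=m["mean"], cov=m["cov"]
        )
    return log_likelihood + word_log_prior
```

In a full decoder, scores like this would feed a search over candidate word sequences, with the highest-scoring sequence emitted as text.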

So, where does the field go from here to a functioning thought detection device? “A first milestone would be to actually decode imagined phrases from brain activity, but a lot of technical issues need to be solved for that,” concedes Herff.

Their study results, while exciting, are still only a preliminary step towards this type of brain-computer interface.



See all the latest robotics news on Robohub, or sign up for our weekly newsletter.





Frontiers in Computational Neuroscience is devoted to promoting theoretical modeling of brain function and fostering interdisciplinary interactions between theoretical and experimental neuroscience...






 
