Robohub.org
 

Signal processing technology to extract required information from image, sound, and biometric signals


26 May 2014




At Keio University, the Mitsukura Laboratory in the Department of System Design Engineering researches how to extract required information from biometric, image, and audio signal data. To achieve this, the researchers use technologies such as signal processing, machine learning, pattern recognition, artificial intelligence, and statistical processing.

“Our research comprises three projects: image processing, audio signal processing, and biometric signal processing. In image processing, we’re researching how to extract motion and embed it in animations. In audio signal processing, we build systems that automatically transcribe audio into musical scores. And in biometric signal processing, we determine the meaning of brain waves (EEG) and convert thoughts to text.”

In image signal processing research, the Mitsukura Lab works on position matching between real and virtual space, the basis of AR technology. The researchers are developing a method for estimating the orientation of a face at high speed and with high precision, to link the motion of an actual person with an animation. Another research topic at the Mitsukura Lab is expression recognition, which uses information about changes in the eyebrows and the corners of the mouth. The Lab is also researching how to overlay virtual data on the real world through head-mounted displays.
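The article does not detail the Lab's face-orientation algorithm, but as a deliberately rough illustration of the idea, a crude yaw estimate can be read off the horizontal asymmetry of facial landmarks: as the head turns, the nose tip drifts toward one eye. Every name and coordinate below is hypothetical:

```python
def estimate_yaw(left_eye, right_eye, nose_tip):
    """Rough yaw estimate from the horizontal asymmetry of facial landmarks.

    Landmarks are (x, y) pixel coordinates. Returns a value in [-1, 1]:
    0 for a roughly frontal face, positive when the head turns so the
    nose tip moves toward the right eye.
    """
    d_left = abs(nose_tip[0] - left_eye[0])    # nose-to-left-eye gap
    d_right = abs(right_eye[0] - nose_tip[0])  # nose-to-right-eye gap
    total = d_left + d_right
    if total == 0:
        return 0.0
    return (d_left - d_right) / total


# Frontal face: nose tip centered between the eyes -> yaw 0.
print(estimate_yaw((40, 50), (80, 50), (60, 70)))  # 0.0
# Head turned: nose tip shifted toward the right eye -> positive yaw.
print(estimate_yaw((40, 50), (80, 50), (70, 70)))  # 0.5
```

Real systems fit a 3D head model to many landmarks rather than three points, but the asymmetry cue is the same.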

In audio signal processing research, the Lab studies how to automatically determine which direction a sound is coming from, using signals from multiple microphones. The researchers are developing a system that automatically points the microphone toward whoever is talking in a videoconference, and a robot that tracks a talking person. The Lab is also working on a system for inputting information by whistling.
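A common way to estimate sound direction from a microphone pair, offered here only as a generic sketch and not necessarily the Lab's method, is to find the time delay of arrival (TDOA) between the two signals via cross-correlation, then convert that delay to an angle using the speed of sound and the microphone spacing:

```python
import math


def estimate_delay(sig_a, sig_b, max_lag):
    """Return the lag (in samples) that maximizes the cross-correlation
    of sig_b against sig_a; positive means sig_b lags sig_a."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(
            sig_a[i] * sig_b[i + lag]
            for i in range(len(sig_a))
            if 0 <= i + lag < len(sig_b)
        )
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag


def direction_of_arrival(delay_samples, fs, mic_spacing, c=343.0):
    """Convert an inter-mic delay to an angle (radians) from broadside."""
    delay_s = delay_samples / fs
    # Clamp to arcsin's domain to tolerate noisy delay estimates.
    x = max(-1.0, min(1.0, c * delay_s / mic_spacing))
    return math.asin(x)


# A pulse that reaches mic B three samples after mic A.
mic_a = [0.0] * 10 + [1.0] + [0.0] * 10
mic_b = [0.0] * 13 + [1.0] + [0.0] * 7
lag = estimate_delay(mic_a, mic_b, max_lag=5)   # -> 3
angle = direction_of_arrival(lag, fs=48000, mic_spacing=0.1)
```

Production systems use frequency-domain methods such as GCC-PHAT for robustness, but the delay-to-angle geometry is the same.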

In research on biometric signal processing, the Lab is developing systems to operate computers and devices by reading brain waves (EEG) and eye movements (EOG).

In brain-wave (EEG) signal processing, the Lab pursues a wide range of R&D. One project is a system that determines the user’s likes and dislikes and converts them to text form. Another evaluates sound quality from brain waves (EEG).
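The article does not describe the Lab's classifier, but a like/dislike decision from EEG is typically framed as classifying a feature, such as band power, extracted from the signal. A deliberately minimal sketch with entirely hypothetical data and names:

```python
def band_power(samples):
    """Mean squared amplitude: a crude stand-in for EEG band power."""
    return sum(s * s for s in samples) / len(samples)


class NearestCentroid:
    """Tiny nearest-centroid classifier over scalar features."""

    def fit(self, features, labels):
        sums, counts = {}, {}
        for f, y in zip(features, labels):
            sums[y] = sums.get(y, 0.0) + f
            counts[y] = counts.get(y, 0) + 1
        self.centroids = {y: sums[y] / counts[y] for y in sums}
        return self

    def predict(self, feature):
        # Pick the class whose centroid is closest to the feature.
        return min(self.centroids, key=lambda y: abs(feature - self.centroids[y]))


# Hypothetical training features (band powers) for two classes.
clf = NearestCentroid().fit(
    [1.0, 1.2, 5.0, 5.3],
    ["dislike", "dislike", "like", "like"],
)
print(clf.predict(4.8))  # "like"
```

Real EEG pipelines use multi-channel spectral features and stronger classifiers, but the extract-feature-then-classify structure is the same.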

Signal processing technology using EOG signals, recorded around the eyes, is being applied in R&D on steering a wheelchair by winking.
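Wink-based steering can be sketched, purely for illustration, as threshold detection on the EOG channel near each eye, with a refractory period so a single wink fires only once; the command mapping below is hypothetical, not the Lab's actual control scheme:

```python
def detect_winks(eog, threshold, refractory):
    """Return sample indices where the EOG signal crosses the threshold.

    A refractory period (in samples) suppresses duplicate detections
    arising from one wink's sustained deflection.
    """
    events, last = [], -refractory
    for i, v in enumerate(eog):
        if v > threshold and i - last >= refractory:
            events.append(i)
            last = i
    return events


def steer(events_left, events_right):
    """Map wink detections on each eye's channel to a wheelchair command."""
    if events_left and not events_right:
        return "turn_left"
    if events_right and not events_left:
        return "turn_right"
    return "straight"


# Two winks on the left-eye channel; the 6-at-index-3 sample is
# suppressed by the refractory period as part of the first wink.
left = detect_winks([0, 0, 5, 6, 0, 0, 0, 5, 0], threshold=3, refractory=3)
print(left)                # [2, 7]
print(steer(left, []))     # "turn_left"
```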

Signal processing technology is expected to have a diverse range of applications, including healthcare, entertainment, and wearable devices.

In the Mitsukura Lab, the researchers are using the many algorithms they’ve developed so far to advance such R&D further, in ways that contribute to society.

“At the present stage, for example, it’s said that Google Glass will become very popular, but with that, you have to wear glasses. Wearing glasses itself is unnatural for people. The same is true for brain-wave (EEG) systems; if something’s unnatural, people won’t do it for long. The first thing we have to do is create a system that can read brain waves (EEG) in a natural manner. Next, in conjunction with that, we need a system that can communicate emotions between people, in a way that resembles using the telephone to communicate thoughts. I think we might be able to communicate emotions even without a telephone.”





DigInfo TV is a Tokyo-based online video news platform dedicated to producing original coverage of cutting edge technology, research and products from Japan.




