Robohub.org
 

Signal processing technology to extract required information from image, sound, and biometric signals


26 May 2014




At Keio University, the Mitsukura Laboratory in the Department of System Design Engineering researches how to extract required information from biometric, image, and audio signal data. To achieve this, the researchers use technologies such as signal processing, machine learning, pattern recognition, artificial intelligence, and statistical processing.

“Our research comprises three projects: image processing, audio signal processing, and biometric signal processing. In image processing, we’re researching how to embed and extract motion. In audio signal processing, we build systems that automatically transcribe audio into musical scores. And in biometric signal processing, we determine the meaning of brain waves (EEG) and convert thoughts to text.”

In image signal processing research, the Mitsukura Lab works on position matching between real and virtual space, which is the basis for AR technology. The researchers are developing a method for estimating the orientation of a face quickly and precisely, in order to link the motion of an actual person with an animation. Another research topic at the Mitsukura Lab is expression recognition, using information about changes in the eyebrows and the corners of the mouth. The Lab is also researching how to overlay virtual data onto the real world in head-mounted displays.

In audio signal processing research, the Lab has studied how to automatically determine which direction a sound is coming from, using signals from multiple microphones. The researchers are developing a system that automatically points a microphone at the person talking in a videoconference, and a robot that tracks a talking person. The Lab is also working on a system for inputting information by whistling.
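The article does not detail the Lab's localization method, but a common baseline for finding a sound's direction from multiple microphones is time-difference-of-arrival (TDOA) estimation via cross-correlation. Below is a minimal two-microphone sketch; the sampling rate, mic spacing, and test signal are all illustrative, not the Lab's setup.

```python
import numpy as np

def estimate_direction(sig_left, sig_right, fs, mic_distance, c=343.0):
    """Estimate a sound source's azimuth from two equal-length mic signals.

    Cross-correlate the channels to find the time-difference-of-arrival
    (TDOA), then convert the delay to an angle with far-field geometry.
    With this ordering, negative angles mean the source is nearer the
    left microphone.
    """
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = np.argmax(corr) - (len(sig_right) - 1)   # delay in samples
    tdoa = lag / fs                                # delay in seconds
    # Clamp to the physically possible range before taking arcsin.
    sin_theta = np.clip(tdoa * c / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Synthetic check: a noise burst that reaches the left mic 5 samples
# before the right one (source on the left).
fs = 16000
src = np.random.default_rng(0).standard_normal(1024)
delay = 5
left = np.concatenate([src, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), src])
angle = estimate_direction(left, right, fs, mic_distance=0.2)
```

With more than two microphones, pairwise TDOAs can be combined to resolve direction in two or three dimensions, which is what a mic-steering videoconference system would need.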

In research on biometric signal processing, the Lab is developing systems to operate computers and devices by reading brain waves (EEG) and electrooculograms (EOG).

In brain-wave (EEG) signal processing, the Lab is doing a wide range of R&D. One research project is a system that determines the user’s likes and dislikes and converts them to text form. Another project evaluates sound quality from brain waves (EEG).
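The article does not say how the Lab classifies likes and dislikes, but a standard starting point in EEG work is to extract band-power features (theta, alpha, beta) and feed them to a classifier. A minimal feature-extraction sketch, with conventional band edges and a synthetic signal in place of real EEG:

```python
import numpy as np

def bandpower(signal, fs, band):
    """Average power of `signal` within the (lo, hi) Hz band,
    estimated from a windowed FFT (a crude periodogram)."""
    win = np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(signal * win)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return spectrum[mask].mean()

# Classic EEG bands; the edges are conventional values, not the Lab's.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

fs = 256
t = np.arange(fs * 2) / fs             # two seconds of synthetic "EEG"
eeg = (np.sin(2 * np.pi * 10 * t)      # a dominant 10 Hz (alpha) rhythm
       + 0.1 * np.random.default_rng(1).standard_normal(len(t)))
features = {name: bandpower(eeg, fs, b) for name, b in BANDS.items()}
# The 10 Hz tone dominates, so alpha power exceeds theta and beta.
```

In a real system, feature vectors like this, gathered over many trials, would train a classifier (e.g. LDA or an SVM) to map brain activity to "like" or "dislike" labels.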

Signal processing of EOG signals, measured around the eyes, is being applied in R&D on steering a wheelchair by winking.
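A wink produces a large, brief deflection in the EOG trace, so a simple way to turn winks into commands is threshold crossing with a refractory period. The following sketch is illustrative only; the amplitude threshold, refractory time, and synthetic trace are assumptions, not the Lab's parameters.

```python
import numpy as np

def detect_winks(eog, fs, threshold=150e-6, refractory=0.3):
    """Return sample indices where |eog| first crosses `threshold` volts,
    ignoring crossings within `refractory` seconds of the previous one."""
    events, last = [], -np.inf
    above = np.abs(eog) > threshold
    for i in range(1, len(eog)):
        if above[i] and not above[i - 1] and (i - last) / fs >= refractory:
            events.append(i)
            last = i
    return events

# Synthetic trace: 2 s of ~20 uV baseline noise with two wink-like
# 300 uV deflections, at 0.5 s and 1.5 s.
fs = 500
eog = 20e-6 * np.random.default_rng(2).standard_normal(fs * 2)
for start in (250, 750):
    eog[start:start + 50] += 300e-6
events = detect_winks(eog, fs)
```

A wheelchair controller could then map detections to commands, for example a left-eye wink to turn left and a right-eye wink (from a second electrode pair) to turn right.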

Signal processing technology is expected to have a diverse range of applications, including healthcare, entertainment, and wearable devices.

In the Mitsukura Lab, the researchers are using the many algorithms they’ve developed so far to advance such R&D further, in ways that contribute to society.

“At the present stage, for example, it’s said that Google Glass will become very popular, but with that, you have to wear glasses. Wearing glasses is itself unnatural for people. The same is true for brain-wave (EEG) systems; if something’s unnatural, people won’t do it for long. The first thing we have to do is create a system that can read brain waves (EEG) in a natural manner. Next, in conjunction with that, we need a system that can communicate emotions between people, in a way that resembles using the telephone to communicate thoughts. I think we might be able to communicate emotions even without a telephone.”





DigInfo TV is a Tokyo-based online video news platform dedicated to producing original coverage of cutting edge technology, research and products from Japan.





