
Targeting a robotic brain capable of thoughtful communication

by DigInfo TV
11 August 2014




The Hagiwara Lab in the Department of Information and Computer Science at Keio University's Faculty of Science and Technology is trying to realize a robotic brain that can carry on a conversation: one that understands images and words and can communicate thoughtfully with humans.

“Even now, significant progress is being made with robots, and tremendous advancements are being made in the control parts. However, we feel that R&D on the brain has lagged significantly behind. When we think about what functions such a brain needs, the first thing we humans do is visual information processing: the brain must be able to process what it sees. The next is the language information processing that we humans perform. Through language, humans carry out extremely advanced intellectual processing. However, even if a robotic brain can process what it sees and use words, it still lacks one thing: feelings and emotions. Therefore, as a third pillar, we are conducting research on what is called Kansei Engineering, or affective information processing.”

The Hagiwara Lab has adopted an approach of learning from the information processing of the human brain. The team is trying to construct a robotic brain built on three elements: visual information processing, language information processing, and affective information processing. An even more important point is the integration of these three elements.

“With regard to visual information processing, we use neural networks to recognize objects through mechanisms based on experience and intuition, much as humans do, without relying on three-dimensional models or complicated mathematical processing. In the conventional object recognition field, patterns from the recognition results are merely converted into symbols. By adding language processing to those results, however, we can comprehensively use knowledge to build a better visual image. For example, when an object is recognized as a robot, knowledge such as ‘a robot has a human form’ or ‘it has arms and legs’ can also be used. Next comes language information processing, because the processing of language functions is becoming extremely important. For example, the next step for a robot would be to recognize something as cute, not cute, mechanical, or having some other characteristic. Humans naturally have this kind of emotional capability, but current robotics research has paid little attention to this direction. Therefore, at our lab, we are pursuing research that enables robots to understand what they see, to use language information processing to turn what they see into knowledge, and then to comprehensively apply feelings and emotions like those of humans.”
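As a concrete illustration of the pipeline described above, here is a minimal sketch of how the three strands might attach to a single percept. Every class, function, and dictionary name here is a hypothetical stand-in, not the Hagiwara Lab's actual implementation.

    # Minimal sketch of the three-pillar integration described above.
    # All names are hypothetical illustrations, not the lab's real system.
    from dataclasses import dataclass, field

    @dataclass
    class Percept:
        label: str                                               # what the vision module recognized
        knowledge: list[str] = field(default_factory=list)       # facts retrieved via language processing
        affect: dict[str, float] = field(default_factory=dict)   # Kansei-style affective ratings

    def visual_module(image) -> str:
        """Stand-in for a neural-network recognizer based on experience and intuition."""
        return "robot"  # pretend the network recognized a robot

    def language_module(label: str) -> list[str]:
        """Stand-in for knowledge retrieval about the recognized label."""
        knowledge_base = {"robot": ["has a human form", "has arms and legs"]}
        return knowledge_base.get(label, [])

    def affective_module(label: str) -> dict[str, float]:
        """Stand-in for affective appraisal (cute, mechanical, ...)."""
        ratings = {"robot": {"cute": 0.4, "mechanical": 0.9}}
        return ratings.get(label, {})

    def integrate(image) -> Percept:
        # The key point: knowledge and affect are attached to the same percept
        # the vision module produced, not computed by disconnected systems.
        label = visual_module(image)
        return Percept(label, language_module(label), affective_module(label))

    print(integrate(image=None))
    # Percept(label='robot', knowledge=['has a human form', 'has arms and legs'],
    #         affect={'cute': 0.4, 'mechanical': 0.9})

The design choice this sketch tries to capture is the one the lab emphasizes: recognition, knowledge, and affect end up in one shared representation rather than being handled in isolation.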

The robotic brain targeted by the Hagiwara Lab is not merely smart. The lab is aiming for a robotic brain with emotions, feelings, and spirit, enabling it to interact skillfully with humans and its environment. To achieve this, the lab conducts a broad range of research, from the fundamentals of Kansei Engineering to applications in fields such as entertainment, design, and healing.

“Most robots so far move exactly as they are programmed to. However, within the next ten years, and perhaps even sooner, I believe robots will steadily be introduced into the home. When that happens, the interface with the human users will be extremely important. For example, if a robot can make a variety of movements, rather than sitting motionless like this one, and if among those movements there is motion with natural fluctuation, then communication takes place through that movement. And as the time spent with the robot grows, the robot should come to understand even the user's feelings and personality, and respond and act accordingly. We're trying to build a robot that is capable of that kind of attentiveness.”

 





DigInfo TV is a Tokyo-based online video news platform dedicated to producing original coverage of cutting edge technology, research and products from Japan.





