
Targeting a robotic brain capable of thoughtful communication

by DigInfo TV
11 August 2014

The Hagiwara Lab in the Department of Information and Computer Science of Keio University's Faculty of Science and Technology is trying to realize a robotic brain that can carry on a conversation: in other words, a brain that understands images and words and can communicate thoughtfully with humans.

“Even now, significant progress is being made with robots, and tremendous advances are being made in the control parts. However, we feel that R&D on the brain has lagged significantly behind. When we think about what functions the brain needs, the first thing we as humans do is visual information processing: the brain needs to be able to process what is seen. The next is the language information processing that we as humans implement. By using language, humans are able to perform extremely advanced intellectual processing. However, even if a robotic brain can process what it sees and use words, it is still lacking one thing: feelings and emotions. Therefore, as a third pillar, we’re conducting research on what is called Kansei Engineering, or affective information processing.”

The Hagiwara Lab has adopted an approach of learning from the information processing of the human brain. The team is trying to construct a robotic brain around three elements: visual information processing, language information processing, and affective information processing. An even more important point is the integration of these three elements.
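
The three-pillar integration can be pictured as three modules writing into one shared representation that downstream dialogue can draw on. The following is a minimal Python sketch of that idea only; every class and function name here is a hypothetical illustration, not the lab's actual software.

```python
from dataclasses import dataclass

# Hypothetical shared representation the three modules write into;
# invented for illustration, not the lab's actual architecture.
@dataclass
class Percept:
    labels: list[str]         # visual module: recognized objects
    description: str          # language module: verbalized scene
    affect: dict[str, float]  # affective (Kansei) module: emotion scores

def visual_module(image) -> list[str]:
    # Placeholder for a neural-network recognizer.
    return ["robot"]

def language_module(labels: list[str]) -> str:
    # Turns recognized symbols into a sentence, pulling in background knowledge.
    knowledge = {"robot": "has a human form with arms and legs"}
    facts = "; ".join(knowledge.get(label, "") for label in labels)
    return f"I see a {', '.join(labels)} ({facts})."

def affective_module(labels: list[str]) -> dict[str, float]:
    # Placeholder Kansei scores, e.g. how "cute" or "mechanical" the object seems.
    return {"cute": 0.7, "mechanical": 0.9}

def integrate(image) -> Percept:
    # The integration step the lab emphasizes: all three outputs in one place.
    labels = visual_module(image)
    return Percept(labels, language_module(labels), affective_module(labels))
```

The point of the sketch is the `integrate` step: each pillar alone yields symbols, sentences, or scores, but only the combined `Percept` supports the kind of thoughtful reply the lab describes.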

“With regard to visual information processing, by using a neural network we’re trying to recognize objects through mechanisms based on experience and intuition, in the same manner as humans, without having to model three-dimensional structures or perform complicated mathematical processing. In the conventional object-recognition field, recognized patterns are merely converted to symbols. However, by adding language processing to those recognized results, we can comprehensively utilize knowledge to get a better visual image. For example, once an object is recognized as a robot, knowledge such as ‘a robot has a human form’ or ‘it has arms and legs’ can also be used. Language information processing comes next, because processing of language functions is becoming extremely important. Beyond that, the next step is for a robot to judge something as being cute, not cute, mechanical, or some other characteristic. Humans naturally have this kind of emotional capability, but in current robotics research this direction is not being pursued much. Therefore, at our lab, we’re conducting research that enables robots to understand what they see, to use language information processing to turn what they saw into knowledge, and then to comprehensively bring in the perspective of feelings and emotions like those of humans.”
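
One concrete reading of “using knowledge to get a better visual image” is to re-score recognition hypotheses against known part-whole relations, so that detecting arms and legs strengthens the “robot” hypothesis. This is a hedged sketch of that reading under our own assumptions; the knowledge table and weight are invented, not taken from the lab's work.

```python
# Hedged sketch: re-scoring visual hypotheses with part-whole knowledge.
# The knowledge table and the 0.5 weight are invented for illustration.
KNOWN_PARTS = {
    "robot": {"arms", "legs", "head"},
    "car": {"wheels", "doors"},
}

def rescore(hypotheses: dict[str, float], detected_parts: set[str]) -> dict[str, float]:
    """Boost each label's score by the fraction of its expected parts seen."""
    rescored = {}
    for label, score in hypotheses.items():
        expected = KNOWN_PARTS.get(label, set())
        support = len(expected & detected_parts) / len(expected) if expected else 0.0
        rescored[label] = score + 0.5 * support  # 0.5: arbitrary knowledge weight
    return rescored

# Example: "robot" overtakes "mannequin" once arms and legs are detected.
print(rescore({"robot": 0.4, "mannequin": 0.5}, {"arms", "legs"}))
# {'robot': 0.7333..., 'mannequin': 0.5}
```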

The robotic brain targeted by the Hagiwara Lab is not merely smart. The lab is aiming for a robotic brain with emotions, feelings, and spirit that will enable it to interact skillfully with humans and its environment. To achieve this, the lab conducts a broad range of research, from the fundamentals of Kansei Engineering to applications in fields such as entertainment, design, and healing.

“Most robots thus far move exactly as they are programmed to. However, within the next 10 years, and perhaps even sooner, I believe robots will steadily be introduced into the home. When that happens, the interface with humans, the users, will be extremely important. For example, imagine a robot that can make a variety of movements rather than one that doesn’t move at all; if some of those movements show natural fluctuation, communication emerges through that movement. And as contact time with the robot grows longer, the robot will come to understand even the user’s feelings and personality, and it can then respond and act accordingly. We’re trying to build a robot that is capable of that kind of attentiveness.”

 





DigInfo TV is a Tokyo-based online video news platform dedicated to producing original coverage of cutting edge technology, research and products from Japan.