SOINN artificial brain can now use the internet to learn new things


by DigInfo TV
01 May 2013




A group at Tokyo Institute of Technology, led by Dr. Osamu Hasegawa, has succeeded in making further advances with SOINN (the Self-Organizing Incremental Neural Network), their machine learning algorithm, which can now use the internet to learn how to perform new tasks.

“Image searching technology is quite practical now. So, by linking our algorithm to that, we’ve enabled the system to identify which characteristics are important by itself, and to remember what kind of thing the subject is.”

These are pictures of rickshaws, taken in India by the group. When one of these pictures is loaded, the system hasn’t yet learned what a rickshaw is, so it recognizes the subject as a “car,” a category it has already learned. The system is then given the keyword “rickshaw.” From the internet, it picks out the main characteristics of pictures related to rickshaws and learns by itself what a rickshaw is. After learning, even if a different picture of a rickshaw is loaded, the system recognizes it as a rickshaw.
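
As a rough Python sketch of that loop, where `system`, `image_search`, and `extract_common_features` are hypothetical stand-ins rather than SOINN’s actual interface:

```python
def learn_new_class(system, image, keyword, image_search, extract_common_features):
    """Illustrative sketch of the learn-from-the-web loop described above.

    All parameters are hypothetical stand-ins, not SOINN's real API:
    `system` knows classify/add_class, `image_search` fetches example
    pictures for a keyword, `extract_common_features` distils them.
    """
    # The unknown picture is first matched against classes the system
    # already knows, so the rickshaw is initially labelled "car".
    first_guess = system.classify(image)

    # A human supplies the correct keyword; the system gathers example
    # pictures for it from the internet.
    examples = image_search(keyword, limit=100)

    # Features shared by many of those pictures define the new class.
    system.add_class(keyword, extract_common_features(examples))

    # A different picture of a rickshaw should now be recognized.
    return first_guess, system.classify(image)
```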

“In the case of a rickshaw, there may be other things in the picture, or people may be riding in the rickshaw, but the system picks out only those features common to many cases, such as large wheels, a platform above the wheels, and a roof, and it learns that what people call a rickshaw includes these features. So, even with an object it hasn’t seen before, if the object has those features, the system can recognize it.”
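
One simple way to realize “picks out only those features common to many cases” is a frequency vote over the retrieved examples. The sketch below assumes each picture has already been reduced to a set of discrete features; SOINN itself grows a network of feature nodes incrementally, so this is only an approximation of the idea:

```python
from collections import Counter

def common_features(feature_sets, min_fraction=0.5):
    """Keep features appearing in at least `min_fraction` of examples.

    `feature_sets` holds one set of features per retrieved picture;
    the threshold is an illustrative assumption, not a SOINN parameter.
    """
    counts = Counter(f for fs in feature_sets for f in fs)
    n = len(feature_sets)
    return {f for f, c in counts.items() if c / n >= min_fraction}

# Wheels, platform, and roof recur; riders and backgrounds do not.
pictures = [
    {"large wheels", "platform", "roof", "person riding"},
    {"large wheels", "platform", "roof", "street scene"},
    {"large wheels", "platform", "roof"},
]
print(common_features(pictures))  # -> {'large wheels', 'platform', 'roof'}
```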

“With previous methods, such as face recognition in digital cameras, it’s necessary to teach the system quite a lot about faces. When subjects become diverse, it’s very difficult for people to tell the system what sort of characteristics they have, and how many features are sufficient to recognize things. SOINN can pick those features out for itself. It doesn’t need models, which is a very big advantage.”

The group is also developing ways to transfer learned characteristic data to other objects. For example, the system has already learned knives and pens, and possesses the characteristic data that they are “pointed objects” and “stick-shaped objects” respectively. To make the system recognize box cutters, it is made to look at the similarities between box cutters and the knives and pens it has already learned, and to transfer the basic characteristics of being pointed and stick-shaped. If characteristic data for box cutters can be obtained from other systems, SOINN can infer from the transferred data that the objects are box cutters.
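
A toy version of that transfer step might look like this; the class names and features are illustrative, not the group’s actual data:

```python
def transfer_features(known_classes, similar_to):
    """Seed a new class with the features of known classes judged similar.

    `known_classes` maps a class name to its feature set; `similar_to`
    lists the already-learned classes the new object resembles.
    Purely illustrative: SOINN's real transfer works on learned
    characteristic data, not hand-written sets like these.
    """
    seeded = set()
    for name in similar_to:
        seeded |= known_classes[name]
    return seeded

known = {"knife": {"pointed"}, "pen": {"stick-shaped"}}
# A box cutter resembles both, so it inherits both basic characteristics.
print(transfer_features(known, similar_to=["knife", "pen"]))
# -> {'pointed', 'stick-shaped'}
```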

“Here, you’ve seen how this works for pictures. But SOINN can handle other types of information flexibly. For example, we think we could teach it to pick out features from audio or video data. Then, it could also utilize data from robot sensors.”

“With previous pet robots, such as AIBO, training involved patterns that were decided in advance. Once those possibilities are exhausted, the robot can’t do anything more, so people come to understand what it’s going to do and get bored with it. But SOINN can keep accumulating new knowledge. So, in principle, it can develop without a scripted scenario.”





DigInfo TV is a Tokyo-based online video news platform dedicated to producing original coverage of cutting-edge technology, research and products from Japan.




