SOINN artificial brain can now use the internet to learn new things


01 May 2013




A group at Tokyo Institute of Technology, led by Dr. Osamu Hasegawa, has succeeded in making further advances with SOINN, their machine learning algorithm, which can now use the internet to learn how to perform new tasks.

“Image search technology is quite practical now. So, by linking our algorithm to that, we’ve enabled the system to identify which characteristics are important by itself, and to remember what kind of thing the subject is.”

These are pictures of rickshaws, taken in India by the Group. When one of these pictures is loaded, the system hasn’t yet learned what a rickshaw is, so it recognizes the subject as a “car,” which it has already learned. The system is then given the keyword “rickshaw.” From the internet, the system picks out the main characteristics of pictures related to rickshaws and learns by itself what a rickshaw is. After learning, even if a different picture of a rickshaw is loaded, the system recognizes it as a rickshaw.

“In the case of a rickshaw, there may be other things in the picture, or people may be riding in the rickshaw, but the system picks out only those features common to many cases, such as large wheels, a platform above the wheels, and a roof, and it learns that what people call a rickshaw includes these features. So, even with an object it hasn’t seen before, if the object has those features, the system can recognize it.”
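
As a rough illustration of this idea (and not the Group’s actual implementation, which is built on the SOINN incremental neural network), the sketch below treats each image as a set of detected feature labels, learns a new class from the features that recur across most of the search results, and then classifies new pictures by feature overlap. All class names, feature labels and thresholds here are hypothetical.

```python
from collections import Counter

# A hypothetical, simplified illustration -- not the Group's SOINN code.
# Each "image" is abstracted as a set of feature labels; the real system
# learns numeric visual features rather than hand-written strings.

learned_classes = {
    "car": {"wheels", "body", "windshield", "doors"},
}

def classify(features, classes):
    """Return the known class whose feature set overlaps the input most."""
    return max(classes, key=lambda c: len(classes[c] & features))

def learn_from_examples(examples, min_fraction=0.6):
    """Keep only the features that occur in most example images
    (e.g. large wheels, a platform, a roof), ignoring incidental
    content such as riders or background clutter."""
    counts = Counter(f for ex in examples for f in ex)
    threshold = min_fraction * len(examples)
    return {f for f, n in counts.items() if n >= threshold}

# An unknown picture is first matched to the closest class the system knows.
photo = {"large_wheels", "platform", "roof", "person", "street"}
print(classify(photo, learned_classes))        # -> car

# The keyword "rickshaw" then triggers a (simulated) image search, and the
# features common to the results define the new class.
search_results = [
    {"large_wheels", "platform", "roof", "person"},
    {"large_wheels", "platform", "roof", "market"},
    {"large_wheels", "platform", "driver"},
]
learned_classes["rickshaw"] = learn_from_examples(search_results)
print(classify(photo, learned_classes))        # -> rickshaw
```

The point of the threshold is the one made in the quote above: features that appear in only a few pictures, such as a passenger or a market stall, are discarded, while the wheels, platform and roof that recur across examples become the learned definition of “rickshaw.”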

“With previous methods, for example, face recognition by digital cameras, it’s necessary to teach the system quite a lot of things about faces. When subjects become diverse, it’s very difficult for people to tell the system what sort of characteristics they have, and how many features are sufficient to recognize things. SOINN can pick those features out for itself. It doesn’t need models, which is a very big advantage.”

The Group is also developing ways to transfer learned characteristic data to other things. For example, the system has already learned knives and pens, and possesses the characteristic data that they are “pointed objects” and “stick-shaped objects” respectively. To make the system recognize box cutters, it is shown the similarities between box cutters and the knives and pens it has already learned, and it transfers the basic characteristics of being stick-shaped and pointed. If characteristic data for box cutters can then be obtained from other systems, SOINN can use the transferred data to infer that those objects are box cutters.
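
Purely as an illustration of this transfer step, and again not the Group’s code, the sketch below seeds a new “box cutter” class with the basic characteristics already attached to knives and pens before any box-cutter data has been seen. The feature labels and the similarity step are hypothetical stand-ins for SOINN’s learned characteristic data.

```python
# A hypothetical sketch of transferring learned characteristics -- not the
# Group's implementation. Feature labels stand in for the numeric
# characteristic data that SOINN actually stores.

learned = {
    "knife": {"pointed", "metal_blade", "handle"},
    "pen":   {"stick_shaped", "ink_tip", "clip"},
}

# Basic, transferable characteristics, as opposed to class-specific detail.
basic = {"pointed", "stick_shaped"}

def transfer(source_classes, learned_classes, basic_features):
    """Seed a new class with the basic characteristics of the classes it is
    said to resemble, leaving the specifics to be filled in later."""
    pooled = set().union(*(learned_classes[c] for c in source_classes))
    return pooled & basic_features

# A box cutter is declared similar to knives and pens, so it inherits
# "pointed" and "stick_shaped" before any box-cutter image has been seen.
learned["box_cutter"] = transfer(["knife", "pen"], learned, basic)
print(learned["box_cutter"])                   # -> {'pointed', 'stick_shaped'}

# Characteristic data obtained later (e.g. from another system) can then be
# matched against this transferred seed to recognize box cutters.
observation = {"pointed", "stick_shaped", "snap_blade"}
print(learned["box_cutter"] <= observation)    # -> True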

“Here, you’ve seen how this works for pictures. But SOINN can handle other types of information flexibly. For example, we think we could teach it to pick out features from audio or video data. Then, it could also utilize data from robot sensors.”

“With previous pet robots, such as AIBO, training involved patterns that were decided in advance. When those possibilities are exhausted, the robot can’t do any more, so people come to understand what it’s going to do and get bored with it. But SOINN can keep accumulating new knowledge as things change. So, in principle, it can develop without a scripted scenario.”





DigInfo TV is a Tokyo-based online video news platform dedicated to producing original coverage of cutting edge technology, research and products from Japan.

