Robohub.org
 

SOINN artificial brain can now use the internet to learn new things


by DigInfo TV
01 May 2013




A group at Tokyo Institute of Technology, led by Dr. Osamu Hasegawa, has made further advances with SOINN (Self-Organizing Incremental Neural Network), its machine learning algorithm, which can now use the internet to learn how to perform new tasks.
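The article doesn't spell out the algorithm itself, but SOINN's core idea is documented in the group's publications: it clusters inputs incrementally, adding a new node whenever an input is too far from everything it has seen before. The Python below is a heavily simplified sketch of that idea; the full algorithm also maintains neighborhood edges, edge aging, and noise removal, and all names here are illustrative.

```python
import numpy as np

class SOINNSketch:
    """Heavily simplified sketch of SOINN-style incremental learning.

    Each node is a prototype vector. An input either reinforces the
    nearest existing node or, if it is too far from everything seen
    so far, becomes a new node; the network grows on its own, with
    no predefined model of its classes.
    """

    def __init__(self):
        self.nodes = []   # prototype vectors
        self.wins = []    # how often each node has "won"

    def _threshold(self, i):
        # Similarity threshold for node i: distance to its nearest
        # neighbor. (The full algorithm derives this from a graph of
        # neighborhood edges; this is a simplification.)
        return min(np.linalg.norm(self.nodes[i] - n)
                   for j, n in enumerate(self.nodes) if j != i)

    def learn(self, x):
        x = np.asarray(x, dtype=float)
        if len(self.nodes) < 2:
            self.nodes.append(x)
            self.wins.append(1)
            return
        # Find the nearest existing prototype (the "winner").
        dists = [np.linalg.norm(x - n) for n in self.nodes]
        i = int(np.argmin(dists))
        if dists[i] > self._threshold(i):
            # Novel input: insert it as a new node.
            self.nodes.append(x)
            self.wins.append(1)
        else:
            # Familiar input: pull the winner a little toward it.
            self.wins[i] += 1
            self.nodes[i] = self.nodes[i] + (x - self.nodes[i]) / self.wins[i]
```

This open-ended growth is what lets the system keep absorbing new categories, like the rickshaw example below, without being retrained from scratch.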

“Image searching technology is quite practical now. So, by linking our algorithm to that, we’ve enabled the system to identify which characteristics are important by itself, and to remember what kind of thing the subject is.”

These are pictures of rickshaws, taken in India by the group. When one of these pictures is loaded, the system, which hasn’t yet learned what a rickshaw is, recognizes the subject as a “car,” a category it has already learned. The system is then given the keyword “rickshaw.” From the internet, the system picks out the main characteristics of pictures related to rickshaws and learns by itself what a rickshaw is. After learning, even when a different picture of a rickshaw is loaded, the system recognizes it as a rickshaw.
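In code, the learning loop just described might look roughly like the sketch below. The helpers search_images and extract_features, and the prototype-averaging rule, are assumptions made for illustration; they stand in for whatever image-search API and feature descriptors the actual system uses.

```python
import numpy as np

# Hypothetical helpers; they stand in for whatever image-search API
# and feature descriptors the real system uses.
def search_images(keyword, n=100):
    """Fetch up to n images matching the keyword from a web search."""
    raise NotImplementedError

def extract_features(image):
    """Return a feature vector describing the image."""
    raise NotImplementedError

def learn_keyword(classes, keyword):
    """Teach the system a new label from web image-search results.

    Features that recur across many results dominate the averaged
    prototype; one-off clutter (riders, backgrounds) washes out.
    """
    feats = np.array([extract_features(img) for img in search_images(keyword)])
    classes[keyword] = feats.mean(axis=0)

def recognize(classes, image):
    """Label an image by its nearest learned class prototype."""
    f = extract_features(image)
    return min(classes, key=lambda k: np.linalg.norm(f - classes[k]))

# A system that only knows "car" first mislabels a rickshaw photo,
# learns the new keyword from the web, then gets it right:
#   classes = {"car": car_prototype}
#   learn_keyword(classes, "rickshaw")
#   recognize(classes, rickshaw_photo)   # -> "rickshaw"
```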

“In the case of a rickshaw, there may be other things in the picture, or people may be riding in the rickshaw, but the system picks out only those features common to many cases, such as large wheels, a platform above the wheels, and a roof, and it learns that what people call a rickshaw includes these features. So, even with an object it hasn’t seen before, if the object has those features, the system can recognize it.”

“With previous methods, for example, face recognition by digital cameras, it’s necessary to teach the system quite a lot of things about faces. When subjects become diverse, it’s very difficult for people to tell the system what sort of characteristics they have, and how many features are sufficient to recognize things. SOINN can pick those features out for itself. It doesn’t need models, which is a very big advantage.”

The group is also developing ways to transfer learned characteristic data to other objects. For example, the system has already learned knives and pens, and possesses the characteristic data that they are “pointed objects” and “stick-shaped objects” respectively. To make the system recognize box cutters, it is shown the similarities between box cutters and the knives and pens it has already learned, and it transfers the basic characteristics of being stick-shaped and pointed. If characteristic data for box cutters can then be obtained from other systems, SOINN can use it together with the transferred data to work out that such objects are box cutters.
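As a toy illustration of this kind of transfer, each learned class could be represented as a set of named attributes, with a new class seeded by the attributes its known relatives share. The attribute names and the intersection rule below are illustrative assumptions, not the group's actual representation.

```python
# Known classes and their learned characteristic data (illustrative).
known = {
    "knife": {"stick-shaped", "pointed", "has blade"},
    "pen":   {"stick-shaped", "pointed", "writes"},
}

def transfer_attributes(new_class, similar_to, known):
    """Seed a new class with the attributes its relatives share."""
    shared = set.intersection(*(known[c] for c in similar_to))
    known[new_class] = set(shared)
    return known[new_class]

transfer_attributes("box cutter", ["knife", "pen"], known)
# -> {"stick-shaped", "pointed"}; characteristic data obtained from
# other systems can then refine this seed into a full description.
```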

“Here, you’ve seen how this works for pictures. But SOINN can handle other types of information flexibly. For example, we think we could teach it to pick out features from audio or video data. Then, it could also utilize data from robot sensors.”

“With previous pet robots, such as AIBO, training involved patterns that were decided in advance. When those possibilities are exhausted, the robot can’t do any more, so people come to understand what it’s going to do and get bored with it. But SOINN can keep remembering new things as its environment changes. So, in principle, it can develop without a scripted scenario.”





DigInfo TV is a Tokyo-based online video news platform dedicated to producing original coverage of cutting edge technology, research and products from Japan.






 
