Interview: Billion-dollar brain with Professor Alois Knoll

23 March 2016



Professor Alois Knoll, chair of real-time systems and robotics, stands between two tendon-driven robots developed as part of the EU project Eccerobot at the Technical University in Munich-Garching-Hochbrueck, Germany, 28 January 2013. Knoll coordinates the neuro-robotics division of the EU flagship Human Brain Project (HBP). Photo: FRANK LEONHARDT

By: Abdul Montaqim
Professor Knoll, one of the most influential roboticists in Europe, is currently the coordinator of the European Clearing House for Open Robotics Development (Echord) and one of the key scientists involved in the $1.5 billion Human Brain Project. In this interview he gives his views on the state of robotics today.



The transcript has been edited for clarity.

What are your current areas of research?

Professor Knoll: We’ve been researching many areas, but there’s one invariant: human-robot interaction, multimodality, and closing the loop between the robot, its environment, and human behaviour. Human-robot interaction is the basic theme I always come back to; it also seems to be the most interesting and attractive topic, and the one mentioned most often.

What, in your opinion, are the most important areas for research?

Professor Knoll: Well, there are quite a few problems in mechatronics. People like myself seem to be a lot more interested in sensors, artificial intelligence, data processing, sensor information fusion, and things like that.

For the advancement of the whole field, I think it would be beneficial if more people – unlike myself – were more interested in mechatronics, and control of bodies and things. I would like to see that. The development of the hardware in terms of the mechanics is rather slow. Whereas development in terms of cameras, cheap sensors, and things you can reuse from smartphones – very cheap, but high quality – is something that fascinates people. The same goes for AI.

Robotics as a sector appears to be growing fast. Would you not agree?

Professor Knoll: We will see continuous growth. The growth rates may even increase. But there will be no sharp drop, as with, say, the internet bubble, where you had rapid development with growth rates of 100 per cent a year and then only one or two companies left that ended up conquering the market. That is not what robotics will be like. There will be a large number of vendors with very specialised solutions. They will excel more and more because there is greater need, and this will be a positive feedback cycle. But don’t expect growth rates of 50 or 100 per cent in robotics – that’s just not doable.

But of course the need – the demand – is obvious. Take, for example, the assistance systems we have for autonomous cars; an autonomous car is itself a specialised type of robot. So the potential of the sector is huge, but it’s not an easy sector to navigate.

What are the important trends in robotics?

Professor Knoll: In principle, I see one trend. The devices and appliances we have at home and in factories are likely to become more intelligent. They will be equipped with more sensors and more computing power, and people will learn how to use and how to program them.

Here, of course, is one of the decisive factors: the interface between the human and the robot. That’s one thing that’s important. But there is also another interface, between the system on the robot and the environment, and mastering that will also be important.

Making a robot navigate within a room is something that we have learned how to do, but if the room is difficult to describe, or if it’s a completely new building, then it’s quite difficult. Equipping the robot with basic skills is not trivial. Nevertheless, this will happen over time, and over the next couple of years we will certainly see interesting developments.

Is deep learning, or machine learning, not the answer to the problem of robots not knowing how to navigate unfamiliar surroundings?

Professor Knoll: You have to make a distinction between an embodied intelligence and a disembodied intelligence. A disembodied intelligence is a computer sitting somewhere. You put in a big chunk of data, the computer adapts to it by machine learning, and then it outputs another chunk of data, which is presented to you.

But here we are talking about a computer, an artificial intelligence, built into a body, which is a totally different ball game, because what you expect from an embodied intelligence is that it reacts to changes in the environment, it reacts to users’ instructions, and it can develop some independence in moving around, in doing something, in assembling something, in playing an assistance role in a hospital, and so on. That is much more difficult.
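To make the contrast concrete, here is a minimal sketch using a toy robot interface invented for illustration (none of these names come from the interview): the disembodied system maps one chunk of data to another and stops, while the embodied one runs a closed perception-action loop.

```python
# A minimal sketch of the contrast described above. All classes and
# method names are hypothetical placeholders, not real robot APIs.

def disembodied(data: list[float]) -> list[float]:
    """Batch pipeline: a big chunk of data in, another chunk out.
    No body, no environment, no deadlines."""
    mean = sum(data) / len(data)         # 'adapt' to the data
    return [x - mean for x in data]      # present the result and stop

class Robot:
    """Stand-in for a body with sensors and actuators."""
    def is_active(self) -> bool:
        return False                     # stop condition for this demo
    def sense(self) -> float:
        return 0.0                       # perceive the environment
    def act(self, command: float) -> None:
        pass                             # drive wheels, arms, grippers...

def embodied(robot: Robot) -> None:
    """Closed loop: perceive, decide, act - continuously, in real time."""
    while robot.is_active():
        observation = robot.sense()      # react to changes in the environment
        command = -0.5 * observation     # toy controller standing in for 'decide'
        robot.act(command)
```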

Can today’s colossal data processing power – the cloud, parallel and cluster computing and so on – not help the robot navigate any surroundings, no matter how unfamiliar?

Professor Knoll: There has to be a piece of hardware, a robot if you will, that can use a big computer, whether the computer is built into it or connected by WiFi; that doesn’t really matter. The important point is that we have a body, a piece of hardware with wheels or legs or arms, that perceives its environment and has to respond in real time to changes in that environment.

So, what you’re saying is that it’s a problem with the mechanics?

Professor Knoll: It’s a problem of the real-time capabilities of these algorithms, which are not normally there. Take Go, for example, where the computer takes half a minute or a minute to make a move.
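One way to read this point: a controller has a fixed time budget per cycle, and any algorithm that cannot answer within that budget needs a cheap fallback. The sketch below is an illustration only, with an assumed 20 ms cycle and placeholder functions; a real controller would run the planner asynchronously rather than inline.

```python
import time

CYCLE_BUDGET_S = 0.02   # assume a 20 ms control cycle (illustrative figure)

def slow_planner() -> float:
    """Stand-in for a deliberative algorithm (like game-tree search)
    that can think far longer than one control cycle."""
    time.sleep(0.1)                      # simulate heavy computation
    return 1.0

def safe_fallback() -> float:
    """Cheap reflex that always meets the deadline (e.g. slow down)."""
    return 0.0

def control_step() -> float:
    start = time.monotonic()
    command = slow_planner()             # try the clever answer
    if time.monotonic() - start > CYCLE_BUDGET_S:
        command = safe_fallback()        # deadline missed: use the reflex
    return command

print(control_step())   # prints 0.0 - the planner blew the 20 ms budget
```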

How long before the machines are as responsive to their surroundings and as capable of navigating their environments as humans?

Professor Knoll: There is a group in Manchester, UK, run by Steve Furber. As part of the Human Brain Project, he is now building a neuromorphic computing platform that connects a million processor cores, simulating a very large number of neurons plus an even bigger number of synapses, just like in the human brain. It’s basically a neural network like the one we have in our brain.

And he says that this machine with 1 million cores – when it’s finished – will have the intellectual capability of 1 per cent of one human brain.
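As a rough back-of-envelope check of that figure (the neuron count below is a commonly cited estimate, assumed here rather than taken from the interview): one per cent of a brain of roughly 86 billion neurons is about 860 million neurons, or on the order of a thousand neurons per core.

```python
# Back-of-envelope check of the '1 per cent' figure. The neuron count is
# an assumed, commonly cited estimate, not a number from the interview.
NEURONS_IN_HUMAN_BRAIN = 86e9            # roughly 86 billion neurons
CORES = 1_000_000

neurons_to_simulate = 0.01 * NEURONS_IN_HUMAN_BRAIN   # 1% of one brain
print(neurons_to_simulate / CORES)       # ~860 neurons per core
```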

How much of the human brain have we understood?

Professor Knoll: That’s a very difficult question, which nobody can answer even now because you really have to differentiate between the individual layers, from the molecular level to the topology. This is an open-ended story and it’s difficult to say when it will end.

What we can say is that we have understood enough to be able to map some of what we know about the brain onto the technical systems we build: onto hardware, onto computer architectures, and onto algorithms of a new quality. And let us hope we will achieve some… I’m a bit reluctant to use the word ‘intelligence’, because the connotation is always that it’s human-like intelligence. But let’s say we can map our knowledge of the human brain onto smarter machines, smarter devices, smarter appliances that sooner or later we will be able to buy.

What distinction are you making between human intelligence and machine intelligence?

Professor Knoll: The distinction I’m making is that human intelligence will only work in a human body, because it only develops in our body as we grow. It takes years to form and shape as the individual develops, and of course it formed along with the development of the human species.

Whereas if a machine is far from having such a body, and far from perceiving the environment as we do with our special sensors, our human senses, it is very unlikely that we will see a similar development.

Is that why people want to develop humanoid robots – so they can treat them as slaves, or machines?

Professor Knoll: But that’s another question, right? When you have a robot that is like a human, maybe such robots would be like pets, or animals, and people would demand rights for them.

Do you think robots should have rights?

Professor Knoll: I don’t think so. And actually, we are already dealing with these ethics issues, not only in the Human Brain Project but also with autonomous cars, and with the question of virtual robots.

It’s really difficult. There are many arguments I could list, and there are of course people who think about the ethics of robotics full time, but it’s still only speculation.

Nevertheless, these questions are important, because if we don’t answer them in a satisfactory way, it will be a major obstacle to the development of these cars as commercial products.

Would you feel safe in an autonomous car?

Professor Knoll: Yes, I would feel comfortable in an autonomous car. I see no reason why I should trust a driverless car less than an autonomous aeroplane.

What is the difference? The environment is much more complicated in the case of the car; that’s basically the only difference. But when it comes to decision-making, sensing, control and actuation, there’s not much of a difference between a car and an aeroplane.

What are the projects you have planned for the future?

Professor Knoll: We will be focusing on the Human Brain Project. We are working on the basic principles of the functions of the human brain at various levels, and we are trying to use that data, together with data from our own institute and from Echord, and what is available around the world, to develop brain-derived controllers for robots.

And at the same time, we are developing a simulation system so that we can virtualise our research, which means we can run the same experiments with robots in the real world and on computers.
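One plausible way to run ‘the same experiments’ in both worlds is to write the experiment against a single robot interface with interchangeable simulated and physical backends. The sketch below is only an assumption about how such virtualisation could be structured, with invented names; it is not the project’s actual API.

```python
from abc import ABC, abstractmethod

class RobotBackend(ABC):
    """One interface, two worlds: the experiment code does not care
    whether it is driving a simulator or physical hardware."""
    @abstractmethod
    def read_sensors(self) -> list[float]: ...
    @abstractmethod
    def send_motor_commands(self, commands: list[float]) -> None: ...

class SimulatedRobot(RobotBackend):
    def read_sensors(self) -> list[float]:
        return [0.0, 0.0]                 # values from a physics engine
    def send_motor_commands(self, commands: list[float]) -> None:
        pass                              # update the simulated body

class PhysicalRobot(RobotBackend):
    def read_sensors(self) -> list[float]:
        return [0.0, 0.0]                 # values from real hardware
    def send_motor_commands(self, commands: list[float]) -> None:
        pass                              # drive real actuators

def run_experiment(robot: RobotBackend) -> None:
    """The same experiment runs unchanged against either backend."""
    sensors = robot.read_sensors()
    robot.send_motor_commands([-s for s in sensors])

run_experiment(SimulatedRobot())   # virtualised run
# run_experiment(PhysicalRobot())  # identical code on the real robot
```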

This post appeared first on Robotics & Automation News.


