

November 30, 2016


The common, and recurring, view of the latest breakthroughs in artificial intelligence research is that sentient and intelligent machines are just on the horizon. Machines understand verbal commands, distinguish pictures, drive cars and play games better than we do. How much longer can it be before they walk among us?

The new White House report on artificial intelligence takes an appropriately skeptical view of that dream. It says the next 20 years likely won’t see machines “exhibit broadly-applicable intelligence comparable to or exceeding that of humans,” though it does go on to say that in the coming years, “machines will reach and exceed human performance on more and more tasks.” But its assumptions about how those capabilities will develop miss some important points.

As an AI researcher, I’ll admit it was nice to have my own field highlighted at the highest level of American government, but the report focused almost exclusively on what I call “the boring kind of AI.” It dismissed in half a sentence my branch of AI research, into how evolution can help develop ever-improving AI systems, and how computational models can help us understand how our human intelligence evolved.

The report focuses on what might be called mainstream AI tools: machine learning and deep learning. These are the sorts of technologies that have been able to play “Jeopardy!” well, and beat human Go masters at one of the most complicated games ever invented. These current intelligent systems are able to handle huge amounts of data and make complex calculations very quickly. But they lack an element that will be key to building the sentient machines we picture having in the future.

We need to do more than teach machines to learn. We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us – and us from them.

Type I AI: Reactive machines

The most basic types of AI systems are purely reactive and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in 1997, is the perfect example of this type of machine.

Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the best move from among the possibilities.

But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same position three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.
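To make that concrete, here is a minimal sketch of a purely reactive agent in Python. It is illustrative only, not IBM’s actual code; the legal_moves, apply_move and evaluate helpers are hypothetical stand-ins supplied by the caller.

def choose_move(board, legal_moves, apply_move, evaluate):
    # Purely reactive: look only at the present board, score each legal
    # move by the position it leads to, and return the best one.
    # Nothing is stored, so the next call starts from scratch.
    return max(legal_moves(board),
               key=lambda move: evaluate(apply_move(board, move)))

# Toy usage: the "board" is a number, a move shifts it by +1 or -1, and
# positions closer to 10 score higher.
print(choose_move(
    board=7,
    legal_moves=lambda b: [-1, +1],
    apply_move=lambda b, m: b + m,
    evaluate=lambda b: -abs(b - 10),
))  # prints 1, the move toward 10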

This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world.

The current intelligent machines we marvel at either have no such concept of the world, or have a very limited and specialized one for their particular duties. The innovation in Deep Blue’s design was not to broaden the range of possible moves the computer considered. Rather, the developers found a way to narrow its view, to stop pursuing some potential future moves, based on how it rated their outcome. Without this ability, Deep Blue would have needed to be an even more powerful computer to actually beat Kasparov.
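One standard way of narrowing a game search in this fashion is alpha-beta pruning, which abandons any branch once its rated outcome shows it cannot affect the final choice. The article does not name Deep Blue’s exact technique, so the sketch below is only an illustration of the general idea; children and evaluate are hypothetical caller-supplied helpers.

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    # Minimax search that stops pursuing a branch as soon as its rated
    # outcome shows the final decision could never pass through it.
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the opponent would never allow this line
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:
                break  # prune: we would never choose into this line
        return value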

Similarly, Google’s AlphaGo, which has beaten top human Go experts, can’t evaluate all potential future moves either. Its analysis method is more sophisticated than Deep Blue’s, using a neural network to evaluate game developments.

These methods do improve the ability of AI systems to play specific games, but they can’t be easily changed or applied to other situations. These computerized imaginations have no concept of the wider world – meaning they can’t function beyond the specific tasks they’re assigned and are easily fooled.

They can’t interactively participate in the world, the way we imagine AI systems one day might. Instead, these machines will behave exactly the same way every time they encounter the same situation. This can be very good for ensuring an AI system is trustworthy: You want your autonomous car to be a reliable driver. But it’s bad if we want machines to truly engage with, and respond to, the world. These simplest AI systems won’t ever be bored, or interested, or sad.

Type II AI: Limited memory

This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment; rather, it requires identifying specific objects and monitoring them over time.

These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.

But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel.
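As a sketch of that limited-memory idea (purely illustrative, not any real self-driving stack), one might keep a short sliding window of recent observations of another car, estimate its speed from them, and let everything older silently fall away:

from collections import deque

class LimitedMemoryTracker:
    def __init__(self, window=5):
        # Only the last `window` observations survive; older entries are
        # discarded automatically -- transient data, not lasting experience.
        self.history = deque(maxlen=window)

    def observe(self, time_s, position_m):
        self.history.append((time_s, position_m))

    def estimated_speed(self):
        # Average speed across the window; None until there is a past.
        if len(self.history) < 2:
            return None
        (t0, p0), (t1, p1) = self.history[0], self.history[-1]
        return (p1 - p0) / (t1 - t0)

tracker = LimitedMemoryTracker()
for t, p in [(0.0, 0.0), (0.5, 7.0), (1.0, 14.0)]:
    tracker.observe(t, p)
print(tracker.estimated_speed())  # 14.0 metres per second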

So how can we build AI systems that build full representations, remember their experiences and learn how to handle new situations? Brooks was right in that it is very difficult to do this. My own research into methods inspired by Darwinian evolution can start to make up for human shortcomings by letting the machines build their own representations.

Type III AI: Theory of mind

We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific, and to discuss the types of representations machines need to form, and what they need to be about.

Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.

This is crucial to how we humans formed societies, because it is what allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.

If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly.

Type IV AI: Self-awareness

The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it.

This is, in a sense, an extension of the “theory of mind” possessed by Type III artificial intelligences. Consciousness is also called “self-awareness” for a reason. (“I want that item” is a very different statement from “I know I want that item.”) Conscious beings are aware of themselves, know about their internal states, and are able to predict the feelings of others. We assume someone honking behind us in traffic is angry or impatient, because that’s how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences.

While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences. This is an important step in understanding human intelligence in its own right. And it is crucial if we want to design or evolve machines that are more than exceptional at classifying what they see in front of them.

This article was originally published on The Conversation. Read the original article.


November 29, 2016


Dear Readers,
For #GivingTueday, we would like to ask you, our loyal readers, to consider donating to Robohub. From the beginning, our mission has been to help demystify robotics by hearing straight from the experts. Robohub isn’t like most news websites. We’re a community. We’re a forum. Much of our content is written directly by the experts in academia, businesses, and industry. That means you get to learn about the latest research and business news, events and opinions, directly from the experts, unfiltered, with no media bias. Our goal is to keep you engaged and interested in robotics that may not necessarily be covered by top news agencies.

November 29, 2016
Roboethics panel in Amsterdam.

What ethical issues do we face in providing robot care for the elderly? Is the public becoming more accepting of such robots? What should we be mindful of when designing human-robot interactions?

At the #ERW2016 central event, held in Amsterdam 18-22 November, these questions (and more) were discussed and debated by expert panellists hailing from research, industry, academia, and government, as well as by insightful members of the community. All were welcome at ‘Robots at Your Service’, a multi-track event featuring panel discussions on robotics regulation and assistive living technologies, as well as sessions aimed at attracting more youth, and especially girls, into science, technology, engineering, arts and maths (STEAM). The event hosted workshops and featured a 48-hour hackathon for designers, makers, coders, engineers, and anyone else who believes healthy ageing is a societal challenge worth tackling.

November 24, 2016


We’re delighted to announce that UniExo is the winner of the “Robohub Readers’ Pick” in the Robot Launch 2016 global startup competition. Are Robohub readers on the same wavelength as our panel of VCs, investors and expert judges? We’ll find out next week when we announce the overall winners of the Robot Launch 2016 competition!

November 17, 2016


For the next three weeks, Robohub readers can vote for their “Readers’ Pick” startup from the Robot Launch competition. Each week, we’ll be publishing 10 videos. Our ultimate Robohub Readers’ Favorites, along with lots of other prizes, will be announced at the end of November. Every week we’ll showcase different aspects of robotics startups and their business models: from agricultural to humanoid, from consumer to industrial, and from hardware to robotics software. Make sure you vote for your favorite – below – by 18:00 UTC, Wednesday 23 November, and spread the word through social media using #robotlaunch2016.

November 15, 2016


The 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) took place in October, in Daejeon, Korea. IROS is an annual robotics conference that seeks to “explore the frontier of science and technology in intelligent robots and smart machines, and to stimulate innovative ideas, exchange technological perspectives and assess future directions in the field of intelligent robots and smart machines with a view to promote progress and prosperity for all nations.”

All images by Robots Podcast interviewer MeiXing Dong.

November 15, 2016
Air, land and sea robots will meet in Piombino, Italy, for the ERL Emergency Robots major tournament. Photos: Joshua Hayes Davidson / euRathlon

The European Robotics League (ERL) is pleased to announce the dates for the major tournament of the ERL Emergency Robots competition. The 2017 challenge will be held in Piombino, Italy, from 15-23 September 2017. The call for participation is now open.

November 10, 2016


For the next three weeks, Robohub readers can vote for their “Readers’ Pick” startup from the Robot Launch competition. Each week, we’ll be publishing 10 videos. Our ultimate Robohub Readers’ Favorites, along with lots of other prizes, will be announced at the end of November. Every week we’ll showcase different aspects of robotics startups and their business models: from agricultural to humanoid, from consumer to industrial, and from hardware to robotics software. Make sure you vote for your favorite – below – by 18:00 UTC, Wednesday 16 November, spread the word through social media using #robotlaunch2016, and come back next week for the next 10!

November 8, 2016
Banksy robot and barcode graffiti in New York, USA.

Let’s stop talking about bad robots and start talking about what makes a robot good.

November 7, 2016
Source: euRobotics

European Robotics Week 2016 (#ERW2016) is kicking off 18-22 November in Amsterdam. The central event, ‘Robots at your Service’, focuses on robotic technologies that help ageing populations live healthier, more active, and more independent lives. European Robotics Week celebrates Europe as a world leader in robotics, typically with hundreds of events across the continent.

November 7, 2016


The 13th International Symposium on Distributed Autonomous Robotic Systems (DARS) in London this week brings together the very best researchers working on multi-robot systems.

November 2, 2016


For the next three weeks, Robohub readers can vote for their “Readers’ Pick” startup from the Robot Launch competition. Each week, we’ll be publishing 10 videos. Our ultimate Robohub Readers’ Favorites, along with lots of other prizes, will be announced at the end of November. Every week we’ll showcase different aspects of robotics startups and their business models: from agricultural to humanoid, from consumer to industrial, and from hardware to robotics software. Make sure you vote for your favorite – below – by 18:00 UTC, Wednesday 9 November, spread the word through social media using #robotlaunch2016, and come back next week for the next 10!

October 19, 2016
Charging Bull statue. Credit: Sam Valadi/Flickr

You may be surprised, but I’m not. These are the people I see regularly both in Silicon Valley and overseas interacting with the robotics community. That makes them the smart money (most of the time). According to CB Insights, the 7 most active robotics investors over the last 5 years are: Eclipse Ventures, High-Tech Gründerfonds, Lux, Intel Capital, Sequoia China, CRV, and Visionaire Ventures.

As CB Insights demonstrates, old school ‘smart money’ is still making investments in robotics — just at a slower pace. Overall, the last 5 years have seen global robotics equity funding grow to $2.6 billion across 405 deals.

October 7, 2016

This week, the world’s first Cybathlon will take place in Zurich, Switzerland, and today we present the second of the NCCR Robotics teams taking part in the competition: LeMano.

October 5, 2016


This week the world’s first Cybathlon will take place in Zurich, Switzerland. Cybathlon is the brainchild of NCCR Robotics co-director and ETH Zurich professor Robert Riener, and is designed to facilitate discussion between academics, industry and end users of assistive aids, to promote the position of people with disabilities within society and to push development of assistive technology towards solutions that are suitable for use all-day, every day.






