

by   -   December 4, 2018


By Chelsea Finn∗, Frederik Ebert∗, Sudeep Dasari, Annie Xie, Alex Lee, and Sergey Levine

With very little explicit supervision and feedback, humans are able to learn a wide range of motor skills simply by interacting with and observing the world through their senses. While there has been significant progress towards building machines that can learn complex skills from raw sensory information such as image pixels, acquiring large and diverse repertoires of general skills remains an open challenge. Our goal is to build a generalist: a robot that can perform many different tasks, like arranging objects, picking up toys, and folding towels, and can do so with many different objects in the real world without re-learning for each object or task.

by   -   November 4, 2018


Researchers from EPFL and Stanford have developed small drones that can land and then move objects that are 40 times their weight, with the help of powerful winches, gecko adhesives and microspines.

by   -   October 9, 2018

From driving rovers on Mars to improving farm automation for Indian women, once again we’re bringing you a list of 25 amazing women in robotics! These women cover all aspects of the robotics industry, spanning research, product, and policy. They are founders and leaders, they are investigators and activists. They range from early career to emeritus. There is a role model here for everyone! And there is no excuse – ever – not to have a woman speaking on a panel on robotics and AI.

by   -   October 6, 2018

The deployment of connected, automated, and autonomous vehicles presents us with transformational opportunities for road transport. These opportunities reach beyond single-vehicle automation: by enabling groups of vehicles to jointly agree on maneuvers and navigation strategies, real-time coordination promises to improve overall traffic throughput, road capacity, and passenger safety. However, coordinated driving for intelligent vehicles remains a challenging research problem, and testing new approaches is cumbersome. Developing true-scale facilities for safe, controlled vehicle testing is massively expensive and requires a vast amount of space. One approach to facilitating experimental research and education is to build low-cost testbeds that incorporate fleets of down-sized, car-like mobile platforms.

by   -   September 26, 2018
DelFly Nimble in forward flight. Credits: TU Delft

Bio-inspired flapping-wing robots hold great potential. The promise is that they can fly very efficiently even at smaller scales, while being able to fly fast, hover, and make quick maneuvers. We now present a new flapping-wing robot, the DelFly Nimble, that is so agile it can accurately mimic the high-speed escape maneuvers of fruit flies. In the scientific article, published in Science, we show that the robot’s motion resembles that of the fruit fly so closely that it allowed us to better understand the dynamics of fruit flies during escape maneuvers. Here at Robohub, we wish to give a bit more background on the motivation and process behind the final design of this robot, and on what we think the future may bring.

by   -   August 29, 2018

If you follow the robotics community on the twittersphere, you’ll have noticed that Rodney Brooks is publishing a series of essays on the future of robotics and AI which has been gathering wide attention.

by   -   August 29, 2018

Will a robot take my job?
Media headlines often speculate about robots taking our jobs. We’re told robots will replace swaths of workers, from taxi drivers to caregivers. While some believe this will lead to a utopian future where humans live a life of leisure provided for by robots, the dystopian view sees automation as a risk to the very fabric of society. Such hopes and fears have preceded the introduction of new technologies for centuries – the Luddites, for example, destroyed weaving machines in the 19th century to protest the automation of their sector. What we see, time and time again, is that technology drives productivity and wealth, which translates to more and better jobs down the line. But can we expect the same to happen with robots, or is this time different?

by   -   August 9, 2018

A new fabrication process enables the creation of soft robots at the millimeter scale with features on the micrometer scale, as shown here with the example of a small soft robotic peacock spider with moving body parts and colored eyes and abdomens. Credit: Wyss Institute at Harvard University

By Benjamin Boettner

Roboticists are envisioning a future in which soft, animal-inspired robots can be safely deployed in difficult-to-access environments, such as inside the human body or in spaces too dangerous for humans to work, where rigid robots cannot currently be used. Centimeter-sized soft robots have been created, but thus far it has not been possible to fabricate multifunctional flexible robots that can move and operate at smaller size scales.

by   -   August 9, 2018

By John Miller

An earlier version of this post was published on Off the Convex Path. It is reposted here with the author’s permission.

In the last few years, deep learning practitioners have proposed a litany of different sequence models. Although recurrent neural networks were once the tool of choice, now models like the autoregressive Wavenet or the Transformer are replacing RNNs on a diverse set of tasks. In this post, we explore the trade-offs between recurrent and feed-forward models.
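
To make the contrast concrete, here is a minimal sketch of the two model families in PyTorch; the class names, sizes, and the causal-convolution stand-in for an autoregressive feed-forward model are illustrative assumptions, not code from the post. The recurrent model carries a hidden state that in principle summarizes the entire past, while the feed-forward model only ever sees a fixed window of recent inputs.

```python
import torch
import torch.nn as nn

class RNNModel(nn.Module):
    """Recurrent sequence model: hidden state summarizes the full past."""
    def __init__(self, vocab=100, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, x):                      # x: (batch, time) token ids
        h, _ = self.rnn(self.embed(x))         # state threads through all steps
        return self.out(h)                     # per-step next-token logits

class FeedForwardModel(nn.Module):
    """Feed-forward (causal-convolutional) model: fixed context window."""
    def __init__(self, vocab=100, dim=64, context=8):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        # Pad then trim so each output sees only the `context` most recent
        # tokens, never the future.
        self.conv = nn.Conv1d(dim, dim, kernel_size=context, padding=context - 1)
        self.out = nn.Linear(dim, vocab)

    def forward(self, x):
        e = self.embed(x).transpose(1, 2)      # (batch, dim, time)
        h = self.conv(e)[:, :, : x.size(1)]    # keep only the causal outputs
        return self.out(h.transpose(1, 2))

x = torch.randint(0, 100, (4, 32))             # a batch of token sequences
print(RNNModel()(x).shape, FeedForwardModel()(x).shape)
```

The trade-off the post examines follows from this structure: the recurrent model can in principle use unbounded history, while the feed-forward model trades that for a bounded, explicit context and parallelizable training.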

by   -   July 25, 2018

Ever since the premiere of “Steamboat Willie” in 1928, The Walt Disney Company has pushed the envelope of imagination. Mickey Mouse is still more popular worldwide than any single human actor. In fact, from that one cel an entire world of animated characters was born. Last week the entertainment powerhouse demonstrated a new generation of theatrics with a flying robot-like stuntman (hero pose and all) that is destined to become a leading player in the age of autonomy.

by   -   July 16, 2018

Readers of this blog will know that I’ve become very excited by the potential of robots with simulation-based internal models in recent years. So far we’ve demonstrated their potential in simple ethical robots and as the basis for rational imitation. Our most recent publication instead examines the potential of robots with simulation-based internal models for safety. Of course it’s not hard to see why the ability to model and predict the consequences of both your own and others’ actions can help you to navigate the world more safely than without that ability.

Our paper Simulation-Based Internal Models for Safer Robots demonstrates the value of anticipation in what we call the corridor experiment. Here a smart robot (equipped with a simulation-based internal model, which we call a consequence engine) must navigate to the end of a corridor while maintaining a safe space around it at all times, despite five other robots moving randomly in the corridor – in much the same way you and I might have to navigate down a busy office corridor while others are coming in the opposite direction.

Here is the abstract from our paper:

In this paper, we explore the potential of mobile robots with simulation-based internal models for safety in highly dynamic environments. We propose a robot with a simulation of itself, other dynamic actors and its environment, inside itself. Operating in real time, this simulation-based internal model is able to look ahead and predict the consequences of both the robot’s own actions and those of the other dynamic actors in its vicinity. Hence, the robot continuously modifies its own actions in order to actively maintain its own safety while also achieving its goal. Inspired by the problem of how mobile robots could move quickly and safely through crowds of moving humans, we present experimental results which compare the performance of our internal simulation-based controller with a purely reactive approach as a proof-of-concept study for the practical use of simulation-based internal models.

So, does it work? Thanks to some brilliant experimental work by Christian Blum the answer is a resounding yes. The best way to understand what’s going on is with this wonderful gif animation of one experimental run below. The smart robot (blue) starts at the left and has the goal of safely reaching the right-hand end of the corridor – its actual path is also shown in blue. Meanwhile 5 (red) robots are moving randomly (including bouncing off walls) and their actual paths are also shown in red; these robots are equipped only with simple obstacle avoidance behaviours. The larger blue circle shows blue’s ‘attention radius’ – to reduce computational effort blue will only model red robots within this radius. The yellow paths in front of the red robots in blue’s attention radius show blue’s predictions of how those robots will move (taking into account collisions with the corridor walls, with blue and with each other). The light blue projection in front of blue shows which of the 34 internally modelled candidate actions is actually chosen as blue’s next action (which, as you will see, sometimes includes standing still).
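
To make the control loop concrete, here is a minimal self-contained sketch of one consequence-engine step, assuming constant-velocity predictions for the other robots and a simple distance-to-goal score; the function names, parameter values, and scoring are illustrative assumptions, not the code used in the paper.

```python
import numpy as np

SAFETY_RADIUS = 0.3    # safety space around the robot (illustrative value)
DT, HORIZON = 0.1, 10  # timestep and look-ahead length of the internal model

def rollout_min_separation(pos, vel, others):
    """Smallest predicted distance to any other robot over the horizon,
    assuming the others keep their current velocities."""
    min_sep, p = np.inf, pos.copy()
    predicted = [(o_pos.copy(), o_vel) for o_pos, o_vel in others]
    for _ in range(HORIZON):
        p = p + vel * DT
        for o_pos, o_vel in predicted:
            o_pos += o_vel * DT
            min_sep = min(min_sep, np.linalg.norm(p - o_pos))
    return min_sep

def choose_action(pos, goal, others, n_headings=33, speed=0.5):
    """Evaluate 33 headings plus standing still (34 candidates, as in the
    experiment) and keep the safe action that ends closest to the goal;
    standing still is the fallback if nothing is predicted to be safe."""
    candidates = [np.zeros(2)]
    for theta in np.linspace(0, 2 * np.pi, n_headings, endpoint=False):
        candidates.append(speed * np.array([np.cos(theta), np.sin(theta)]))
    best, best_cost = np.zeros(2), np.inf
    for vel in candidates:
        if rollout_min_separation(pos, vel, others) < SAFETY_RADIUS:
            continue  # this action is predicted to breach the safety space
        cost = np.linalg.norm((pos + vel * DT * HORIZON) - goal)
        if cost < best_cost:
            best, best_cost = vel, cost
    return best

# One red robot crossing ahead of blue; blue's goal is the corridor end.
others = [(np.array([1.0, 0.2]), np.array([-0.3, 0.0]))]
print(choose_action(np.array([0.0, 0.0]), np.array([3.0, 0.0]), others))
```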

What do the results show us? Christian ran lots of trials – 88 simulations and 54 real robot experiments – over four experiments: (1) the baseline in simulation, in which the blue robot has only a simple reactive collision avoidance behaviour, (2) the baseline with real robots, (3) using the consequence engine (CE) in the blue robot in simulation, and (4) using the consequence engine in the blue robot with real robots. In the results below, (a) shows the time taken for the blue robot to reach the end of the corridor, (b) shows the distance that the blue robot covers while reaching the end of the corridor, (c) shows the “danger ratio” experienced by the blue robot, and (d) shows the number of consequence engine runs per timestep in the blue robot. The danger ratio is the percentage of the run time that another robot is within the blue robot’s safety radius.

For a relatively small cost in additional run time and distance covered, panels (a) and (b), the danger ratio is very significantly reduced from a mean value of ~20% to a mean value of zero, panel (c). Of course there is a computational cost, and this is reflected in panel (d); the baseline experiment has no consequence engine and hence runs no simulations, whereas the smart robot runs an average of between 8 and 10 simulations per time-step. This is exactly what we would expect: predicting the future clearly incurs a computational overhead.
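
For concreteness, the danger ratio as defined above could be computed from logged trajectories along these lines (a sketch; the array layout and names are assumptions, not the paper's analysis code):

```python
import numpy as np

def danger_ratio(blue_traj, red_trajs, safety_radius=0.3):
    """Percentage of timesteps in which any other robot is inside the
    blue robot's safety radius.
    blue_traj: (T, 2) positions; red_trajs: (T, N, 2) positions."""
    dists = np.linalg.norm(red_trajs - blue_traj[:, None, :], axis=-1)
    violated = (dists < safety_radius).any(axis=1)  # per-timestep flag
    return 100.0 * violated.mean()

# Example with random walks for five red robots over 200 timesteps.
T, N = 200, 5
blue = np.cumsum(np.random.randn(T, 2) * 0.01, axis=0)
reds = np.cumsum(np.random.randn(T, N, 2) * 0.01, axis=0)
print(f"danger ratio: {danger_ratio(blue, reds):.1f}%")
```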


Full paper reference:
Blum C, Winfield AFT and Hafner VV (2018) Simulation-Based Internal Models for Safer Robots. Front. Robot. AI 4:74. doi: 10.3389/frobt.2017.00074


Acknowledgements:
I am indebted to Christian Blum, who programmed the robots, set up the experiment and obtained the results outlined here. Christian lead-authored the paper, which was also co-authored by my friend and research collaborator Verena Hafner, who was Christian’s PhD advisor.

by   -   July 1, 2018

Twenty-seven startups raised money in June to the tune of $2.1 billion, another great month for robotics! Also during June there were ten acquisitions and two IPOs. See below for details.

by   -   June 21, 2018

Here are the slides from my York Festival of Ideas keynote yesterday, which introduced the festival focus day Artificial Intelligence: Promises and Perils.

by   -   May 25, 2018

The International Conference on Robotics and Automation (ICRA) is the IEEE Robotics and Automation Society’s flagship conference and is a premier international forum for robotics researchers to present their work. ICRA 2018 is just wrapping up over in Brisbane, Australia.

by   -   April 24, 2018

By Siddharth Reddy

Imagine a drone pilot remotely flying a quadrotor, using an onboard camera to navigate and land. Unfamiliar flight dynamics, terrain, and network latency can make this system challenging for a human to control. One approach to this problem is to train an autonomous agent to perform tasks like patrolling and mapping without human intervention. This strategy works well when the task is clearly specified and the agent can observe all the information it needs to succeed. Unfortunately, many real-world applications that involve human users do not satisfy these conditions: the user’s intent is often private information that the agent cannot directly access, and the task may be too complicated for the user to precisely define. For example, the pilot may want to track a set of moving objects (e.g., a herd of animals) and change object priorities on the fly (e.g., focus on individuals who unexpectedly appear injured). Shared autonomy addresses this problem by combining user input with automated assistance; in other words, augmenting human control instead of replacing it.
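
As a concrete illustration of augmenting rather than replacing control, here is a minimal sketch of shared autonomy as action blending; this is a simplification for exposition (the post’s actual method learns the assistant with deep reinforcement learning rather than using a hand-coded goal estimate or a fixed linear blend), and all names and values below are assumptions.

```python
import numpy as np

def infer_goal(pos, user_cmd, goals):
    """Guess the user's private intent: pick the candidate goal whose
    direction best aligns with the user's commanded motion."""
    scores = [np.dot(user_cmd, (g - pos) / (np.linalg.norm(g - pos) + 1e-8))
              for g in goals]
    return goals[int(np.argmax(scores))]

def shared_control(pos, user_cmd, goals, alpha=0.5):
    """Blend the pilot's command with an autonomous correction toward the
    inferred goal; alpha=1 is pure teleoperation, alpha=0 full autonomy."""
    goal = infer_goal(pos, user_cmd, goals)
    auto_cmd = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-8)
    return alpha * user_cmd + (1 - alpha) * auto_cmd

# A noisy user command near two candidate landing pads: the blended command
# is nudged toward the pad the user appears to be heading for.
goals = [np.array([5.0, 0.0]), np.array([0.0, 5.0])]
print(shared_control(np.array([0.0, 0.0]), np.array([0.9, 0.1]), goals))
```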





