
Articles

by   -   June 22, 2019

Racing team 2018-2019: Christophe De Wagter, Guido de Croon, Shuo Li, Phillipp Dürnay, Jiahao Lin, Simon Spronk

Autonomous drone racing
Drone racing is becoming a major e-sport. Enthusiasts – and now also professionals – transform drones into seriously fast racing platforms. Expert drone racers can reach speeds of up to 190 km/h. They fly by looking at a first-person view (FPV) of their drone, which has a front-mounted camera transmitting images.

by   -   June 22, 2019
Effect of Population Based Augmentation applied to images; the learned augmentations differ at different points during training.

In this blog post we introduce Population Based Augmentation (PBA), an algorithm that quickly and efficiently learns a state-of-the-art approach to augmenting data for neural network training. PBA matches the previous best result on CIFAR and SVHN but uses one thousand times less compute, enabling researchers and practitioners to effectively learn new augmentation policies using a single workstation GPU. You can use PBA broadly to improve deep learning performance on image recognition tasks.

We discuss the PBA results from our recent paper and then show how to easily run PBA for yourself on a new data set in the Tune framework.
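For readers who want to try this on a new data set, the sketch below shows how a population-based search over augmentation hyperparameters can be wired up with Ray Tune’s PopulationBasedTraining scheduler. It is a minimal illustration rather than the authors’ released code: the trainable, the reported metric, and the augmentation knobs (cutout_prob, rotate_magnitude) are placeholders, and the exact reporting call may differ across Ray versions.

```python
# Minimal sketch of population-based search over augmentation hyperparameters
# with Ray Tune. The trainable and the hyperparameter names are placeholders.
import random

from ray import tune
from ray.tune.schedulers import PopulationBasedTraining

def train_with_augmentation(config):
    # Hypothetical trainable: in a real run you would train one epoch with the
    # augmentation settings in `config` and report validation accuracy.
    for _ in range(50):
        val_acc = random.random()  # stand-in for real training + evaluation
        tune.report(val_acc=val_acc)

pbt = PopulationBasedTraining(
    time_attr="training_iteration",
    metric="val_acc",
    mode="max",
    perturbation_interval=3,
    hyperparam_mutations={
        # Augmentation knobs the population mutates as training progresses.
        "cutout_prob": [0.0, 0.25, 0.5, 0.75, 1.0],
        "rotate_magnitude": list(range(10)),
    },
)

tune.run(
    train_with_augmentation,
    num_samples=16,  # population size
    scheduler=pbt,
    config={"cutout_prob": 0.5, "rotate_magnitude": 2},
)
```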

by   -   June 22, 2019

By Eugene Vinitsky

We are in the midst of an unprecedented convergence of two rapidly growing trends on our roadways: sharply increasing congestion and the deployment of autonomous vehicles. Year after year, highways get slower and slower: famously, China’s roadways were paralyzed by a two-week-long traffic jam in 2010. At the same time as congestion worsens, hundreds of thousands of semi-autonomous vehicles (AVs), which are vehicles with automated distance- and lane-keeping capabilities, are being deployed on highways worldwide. The second trend offers a perfect opportunity to alleviate the first. The current generation of AVs, while very far from full autonomy, already holds a multitude of advantages over human drivers that make them perfectly poised to tackle this congestion. Humans are imperfect drivers: we accelerate when we shouldn’t, brake aggressively, and make short-sighted decisions, all of which create and amplify patterns of congestion.

by   -   June 3, 2019


By Avi Singh

Communicating the goal of a task to another person is easy: we can use language, show them an image of the desired outcome, point them to a how-to video, or use some combination of all of these. On the other hand, specifying a task to a robot for reinforcement learning requires substantial effort. Most prior work that has applied deep reinforcement learning to real robots makes use of specialized sensors to obtain rewards, or studies tasks where the robot’s internal sensors can be used to measure reward – for example, thermal cameras for tracking fluids, or purpose-built computer vision systems for tracking objects. Since such instrumentation needs to be done for any new task that we may wish to learn, it poses a significant bottleneck to the widespread adoption of reinforcement learning for robotics, and precludes the use of these methods directly in open-world environments that lack this instrumentation.

by   -   May 27, 2019

By Marvin Zhang and Sharad Vikram

Imagine a robot trying to learn how to stack blocks and push objects using visual inputs from a camera feed. In order to minimize cost and safety concerns, we want our robot to learn these skills with minimal interaction time, but efficient learning from complex sensory inputs such as images is difficult. This work introduces SOLAR, a new model-based reinforcement learning (RL) method that can learn skills – including manipulation tasks on a real Sawyer robot arm – directly from visual inputs with under an hour of interaction. To our knowledge, SOLAR is the most efficient RL method for solving real-world image-based robotics tasks.
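SOLAR’s own machinery is described in the paper; as a rough orientation only, the toy sketch below shows the generic model-based RL loop the excerpt refers to – collect experience, fit a dynamics model, plan with it, repeat – on a made-up one-dimensional system. Every name and number in it is illustrative, not part of SOLAR.

```python
# Toy illustration (not SOLAR itself) of the generic model-based RL loop:
# collect experience, fit a dynamics model, then choose actions by planning
# with that model. Everything below is a simplified stand-in.
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(x, u):
    """Unknown 1-D system the agent interacts with (goal: drive x to 0)."""
    return 0.9 * x + 0.5 * u + 0.01 * rng.normal()

def fit_linear_model(xs, us, xs_next):
    """Least-squares fit of x' ~ a*x + b*u from collected transitions."""
    A = np.stack([xs, us], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, xs_next, rcond=None)
    return coeffs  # (a, b)

def plan(x, coeffs, horizon=10, n_candidates=256):
    """Random-shooting planner: return the first action of the best sampled sequence."""
    a, b = coeffs
    seqs = rng.uniform(-1, 1, size=(n_candidates, horizon))
    costs = np.zeros(n_candidates)
    xs = np.full(n_candidates, x)
    for t in range(horizon):
        xs = a * xs + b * seqs[:, t]   # roll out the learned model
        costs += xs**2                 # cost: squared distance from the goal
    return seqs[np.argmin(costs), 0]

# Alternate between collecting data with the current planner and refitting the model.
xs, us, xs_next = [], [], []
coeffs = np.array([0.0, 1.0])  # crude initial model
for episode in range(5):
    x = 5.0
    for _ in range(20):
        u = plan(x, coeffs)
        x_new = true_dynamics(x, u)
        xs.append(x); us.append(u); xs_next.append(x_new)
        x = x_new
    coeffs = fit_linear_model(np.array(xs), np.array(us), np.array(xs_next))
    print(f"episode {episode}: final |x| = {abs(x):.3f}, model = {coeffs.round(2)}")
```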

by   -   May 12, 2019
Figure 1: Our model-based meta reinforcement learning algorithm enables a legged robot to adapt online in the face of an unexpected system malfunction (note the broken front right leg).

By Anusha Nagabandi and Ignasi Clavera

Humans have the ability to seamlessly adapt to changes in their environments: adults can learn to walk on crutches in just a few seconds, people can adapt almost instantaneously to picking up an object that is unexpectedly heavy, and children who can walk on flat ground can quickly adapt their gait to walk uphill without having to relearn how to walk. This adaptation is critical for functioning in the real world.

by   -   April 21, 2019

By Annie Xie

In many animals, tool-use skills emerge from a combination of observational learning and experimentation. For example, by watching one another, chimpanzees can learn how to use twigs to “fish” for insects. Similarly, capuchin monkeys demonstrate the ability to wield sticks as sweeping tools to pull food closer to themselves. While one might wonder whether these are just illustrations of “monkey see, monkey do,” we believe these tool-use abilities indicate a greater level of intelligence.

by   -   April 7, 2019

Automated, networked truck convoys could save fuel and cut down on driving time. Image credit – MAN Truck & Bus

By Sandrine Ceurstemont

Semi-autonomous cars are expected to hit the roads in Europe next year with truck convoys following a few years later. But before different brands can share the roads, vehicle manufacturers need to agree on standards for automated functions.

by   -   April 5, 2019

By Frederik Ebert and Stephen Tian

Guiding our fingers while typing, enabling us to nimbly strike a matchstick, and inserting a key in a keyhole all rely on our sense of touch. It has been shown that the sense of touch is very important for dexterous manipulation in humans. Similarly, for many robotic manipulation tasks, vision alone may not be sufficient – often, it may be difficult to resolve subtle details such as the exact position of an edge, shear forces or surface textures at points of contact, and robotic arms and fingers can block the line of sight between a camera and its quarry. Augmenting robots with this crucial sense, however, remains a challenging task.

Our goal is to provide a framework for learning how to perform tactile servoing, which means precisely relocating an object based on tactile information. To provide our robot with tactile feedback, we utilize a custom-built tactile sensor, based on principles similar to the GelSight sensor developed at MIT. The sensor is composed of a deformable, elastomer-based gel, backlit by three colored LEDs, and provides high-resolution RGB images of contact at the gel surface. Compared to other sensors, this tactile sensor naturally provides geometric information in the form of rich visual information, from which attributes such as force can be inferred. Previous work using similar sensors has leveraged this kind of tactile sensor for tasks such as learning how to grasp, improving success rates when grasping a variety of objects.
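As a toy illustration of how such image-based tactile readings can be used (this is not the authors’ pipeline; the thresholds and image sizes are made up), contact can be detected and roughly localized by comparing the current gel image against a reference frame captured when nothing is touching the sensor:

```python
# Toy contact detection from a GelSight-style tactile image: compare the
# current frame to a no-contact reference frame and look at where the gel
# has deformed. Thresholds and array shapes here are illustrative only.
import numpy as np

def contact_map(reference, current, threshold=12.0):
    """Per-pixel contact mask from two HxWx3 uint8 tactile images."""
    diff = np.abs(current.astype(np.float32) - reference.astype(np.float32))
    return diff.mean(axis=-1) > threshold  # True where the gel looks deformed

def contact_centroid(mask):
    """Rough (row, col) location of the contact, or None if nothing is touching."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return ys.mean(), xs.mean()

# Example with synthetic images: a circular "press" in the lower right.
H, W = 240, 320
reference = np.full((H, W, 3), 128, dtype=np.uint8)
current = reference.copy()
yy, xx = np.mgrid[:H, :W]
press = (yy - 180) ** 2 + (xx - 250) ** 2 < 30 ** 2
current[press] = 200

mask = contact_map(reference, current)
print("contact centroid (row, col):", contact_centroid(mask))  # ≈ (180, 250)
```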

by   -   February 19, 2019

By Tijana Zrnic

“Scientific research has changed the world. Now it needs to change itself.” – The Economist, 2013

There has been growing concern about the validity of scientific findings. A multitude of journals, papers and reports have recognized the ever-smaller number of scientific studies that can be replicated. In 2016, one of the giants of scientific publishing, Nature, surveyed about 1,500 researchers across many different disciplines, asking for their views on the state of reproducibility in their area of research. One of the many takeaways from the survey’s worrisome results is the following: 90% of the respondents agreed that there is a reproducibility crisis, and the overall top answer to boosting reproducibility was a “better understanding of statistics”. Indeed, many factors contributing to the explosion of irreproducible research stem from neglecting the fact that statistics is no longer as static as it was in the first half of the 20th century, when statistical hypothesis testing came into prominence as a theoretically rigorous proposal for making valid discoveries with high confidence.
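To make the statistics point concrete, here is a toy simulation (not from the post): when every null hypothesis is true and we run many tests at level 0.05 without correcting for multiplicity, a false “discovery” becomes far more likely than 5%.

```python
# Toy illustration of why naive repeated hypothesis testing hurts reproducibility:
# with 20 tests at level 0.05 and every null true, we see at least one false
# positive about 64% of the time; a Bonferroni correction restores roughly 5%.
import numpy as np

rng = np.random.default_rng(0)
alpha, n_tests, n_repeats = 0.05, 20, 10_000

# Under a true null hypothesis, a well-calibrated p-value is uniform on [0, 1].
p_values = rng.uniform(size=(n_repeats, n_tests))

naive = (p_values < alpha).any(axis=1).mean()
bonferroni = (p_values < alpha / n_tests).any(axis=1).mean()

print(f"P(at least one false positive), uncorrected: {naive:.3f}")      # ~0.64
print(f"P(at least one false positive), Bonferroni:  {bonferroni:.3f}") # ~0.05
```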

by   -   February 12, 2019

By Rohin Shah and Dmitrii Krasheninnikov

It would be great if we could all have household robots do our chores for us. Chores are tasks that we want done to make our house cater more to our preferences; they are a way in which we want our house to be different from the way it currently is. However, most “different” states are not very desirable.

Surely our robot wouldn’t be so dumb as to go around breaking stuff when we ask it to clean our house? Unfortunately, AI systems trained with reinforcement learning only optimize features specified in the reward function and are indifferent to anything we might’ve inadvertently left out. Generally, it is easy to get the reward wrong by forgetting to include preferences for things that should stay the same, since we are so used to having these preferences satisfied, and there are so many of them. Consider the room below, and imagine that we want a robot waiter that serves people at the dining table efficiently. We might implement this using a reward function that provides 1 reward whenever the robot serves a dish, and use discounting so that the robot is incentivized to be efficient. What could go wrong with such a reward function? How would we need to modify the reward function to take this into account? Take a minute to think about it.
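As a concrete, entirely hypothetical sketch of the reward described above: the robot is rewarded only for serving dishes, so any side effect of how it gets to the table – a broken vase, say – is worth exactly zero to it and cannot change which policy looks best.

```python
# Hypothetical reward for the robot waiter described above: +1 per dish
# served, discounted over time. Nothing here penalizes side effects, so a
# policy that breaks things while serving quickly can still score highest.
GAMMA = 0.99  # discount factor: serving sooner is worth more than serving later

def reward(state, action, next_state):
    # The only thing the designer remembered to specify.
    return 1.0 if next_state["dish_served"] else 0.0

def discounted_return(trajectory):
    """trajectory: list of (state, action, next_state) tuples."""
    return sum(GAMMA ** t * reward(s, a, s2)
               for t, (s, a, s2) in enumerate(trajectory))

# Two toy trajectories with identical returns: this reward cannot tell them apart.
careful  = [({}, "go around table", {"dish_served": True, "vase_broken": False})]
reckless = [({}, "cut across table", {"dish_served": True, "vase_broken": True})]
print(discounted_return(careful), discounted_return(reckless))  # 1.0 1.0
```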

by   -   January 25, 2019
OroBOT – Credit: Maxime Marendaz

Using the fossil and fossilized footprints of a 300-million-year-old animal, scientists from EPFL and Humboldt-Universität zu Berlin have identified the most likely gaits of extinct animals and designed a robot that can recreate an extinct animal’s walk. This study can help researchers better understand how vertebrate locomotion evolved over time.

by   -   January 25, 2019

Cozmo robots and their corresponding tablets are being distributed to participants to take home so that they can interact with them for a week for an experiment being carried out by social robotics professor Emily Cross. Image credit – Ruud Hortensius and Emily Cross
By Frieda Klotz

People’s interactions with machines, from robots that throw tantrums when they lose a colour-matching game against a human opponent to the bionic limbs that could give us extra abilities, are not just revealing more about how our brains are wired – they are also altering them.

Emily Cross is a professor of social robotics at the University of Glasgow in Scotland who is examining the nature of human-robot relationships and what they can tell us about human cognition.

by   -   January 25, 2019

Developing countries must begin seriously considering how technological changes will impact labour trends. KC Jan/Shutterstock

By Asit K. Biswas, University of Glasgow and Kris Hartley, The Education University of Hong Kong

In the 21st century, governments cannot ignore how changes in technology will affect employment and political stability.

The automation of work – principally through robotics, artificial intelligence (AI) and the Internet of things (IoT), collectively known as the Fourth Industrial Revolution – will provide an unprecedented boost to productivity and profit. It will also threaten the stability of low- and mid-skilled jobs in many developing and middle-income countries.

by   -   January 17, 2019

In trials, the ResiBot robot learned to walk again in less than two minutes after one of its legs was removed. Image credit – Antoine Cully / Sorbonne University

By Gareth Willmer

The research behind ResiBot is part of a field of work that is building machines that can provide real-time help using only limited data as input. Standard machine-learning algorithms often need to process thousands of possibilities before deciding on a solution, which may be impractical in pressurised scenarios where fast adaptation is critical.




