
Articles

by   -   October 3, 2019

One of the biggest urban legends growing up in New York City was the rumor of alligators living in the sewers. This myth even inspired a popular children’s book called “The Great Escape: Or, The Sewer Story,” with illustrations of reptiles crawling out of apartment toilets. To this day, city dwellers anxiously look at manholes wondering what lurks below. This curiosity was shared last month by the US Defense Department with its appeal for access to commercial underground complexes.

by   -   September 16, 2019

Jellyfish are about 95% water, making them some of the most diaphanous, delicate animals on the planet. But the remaining 5% has yielded important scientific discoveries, like green fluorescent protein (GFP), now used extensively by scientists to study gene expression, and life-cycle reversal, which could hold the keys to combating aging. Jellyfish may very well harbor other, potentially life-changing secrets, but the difficulty of collecting them has severely limited the study of such “forgotten fauna.” The sampling tools available to marine biologists on remotely operated vehicles (ROVs) were largely developed for the marine oil and gas industries, and are much better suited to grasping and manipulating rocks and heavy equipment than jellies, often shredding them to pieces in attempts to capture them.

by   -   August 25, 2019

A new generation of swarming robots that can independently learn and evolve new behaviours in the wild is one step closer, thanks to research from the University of Bristol and the University of the West of England (UWE).

by   -   August 25, 2019

By Laure-Anne Pessina and Nicola Nosengo
Scientists at EPFL have developed a tiny pump that could play a big role in the development of autonomous soft robots, lightweight exoskeletons and smart clothing. Flexible, silent and weighing only one gram, it is poised to replace the rigid, noisy and bulky pumps currently used. The scientists’ work has just been published in Nature.

by   -   August 25, 2019

The lightweight, versatile exosuit assists hip extension during uphill walking and at different running speeds in natural terrain. Credit: Wyss Institute at Harvard University

By Benjamin Boettner

Between walking at a leisurely pace and running for your life, human gaits can cover a wide range of speeds. Typically, we choose the gait that allows us to consume the least amount of energy at a given speed. For example, at low speeds the metabolic rate of walking is lower than that of a slow jog; conversely, at high speeds the metabolic cost of running is lower than that of speed walking.

by   -   August 14, 2019

By Nicholas Carlini

It is important whenever designing new technologies to ask “how will this affect people’s privacy?” This topic is especially important with regard to machine learning, where machine learning models are often trained on sensitive user data and then released to the public. For example, in the last few years we have seen models trained on users’ private emails, text messages, and medical records.

This article covers two aspects of our upcoming USENIX Security paper that investigates to what extent neural networks memorize rare and unique aspects of their training data.

Specifically, we quantitatively study to what extent this problem of unintended memorization actually occurs in practice.
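As a rough illustration of how such memorization can be measured, the paper’s “exposure” test inserts a randomly generated “canary” sequence into the training data and then asks how much more likely the trained model finds that canary than other random candidates of the same format. A minimal sketch, assuming a hypothetical `log_likelihood(model, text)` helper and an illustrative canary format:

```python
import math
import random

DIGITS = "0123456789"

def log_likelihood(model, text):
    # Placeholder: a real implementation would sum the model's
    # per-token log-probabilities over `text`.
    raise NotImplementedError

def make_candidate(rng):
    # A random 9-digit "secret" in the same format as the inserted canary.
    return "my secret is " + "".join(rng.choice(DIGITS) for _ in range(9))

def exposure(model, canary, num_candidates=10_000, seed=0):
    """Approximate the exposure metric: log2 of the candidate count
    minus log2 of the canary's likelihood rank among random candidates.
    A fully memorized canary outranks nearly every candidate."""
    rng = random.Random(seed)
    canary_ll = log_likelihood(model, canary)
    higher = sum(
        log_likelihood(model, make_candidate(rng)) > canary_ll
        for _ in range(num_candidates)
    )
    rank = 1 + higher
    return math.log2(num_candidates) - math.log2(rank)
```

High exposure means the model assigns the inserted canary an unusually high likelihood relative to candidates it never saw, which is evidence of memorization rather than generalization.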

by   -   July 11, 2019


A team of EPFL researchers has developed tiny 10-gram robots that are inspired by ants: they can communicate with each other, assign roles among themselves and complete complex tasks together. These reconfigurable robots are simple in structure, yet they can jump and crawl to explore uneven surfaces. The researchers have just published their work in Nature.

by   -   June 30, 2019
Changes to the Robobee — including an additional pair of wings and improvements to the actuators and transmission ratio — made the vehicle more efficient and allowed the addition of solar cells and an electronics panel. This Robobee is the first to fly without a power cord and is the lightest untethered vehicle to achieve sustained flight. Credit: Harvard Microrobotics Lab/Harvard SEAS

By Leah Burrows

In the Harvard Microrobotics Lab, on a late afternoon in August, decades of research culminated in a moment of stress as the tiny, groundbreaking Robobee made its first solo flight.

Graduate student Elizabeth Farrell Helbling, Ph.D.’19, and postdoctoral fellow Noah T. Jafferis, Ph.D. from Harvard’s Wyss Institute for Biologically Inspired Engineering, the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), and the Graduate School of Arts and Sciences caught the moment on camera.

by   -   June 22, 2019

Racing team 2018-2019: Christophe De Wagter, Guido de Croon, Shuo Li, Phillipp Dürnay, Jiahao Lin, Simon Spronk

Autonomous drone racing
Drone racing is becoming a major e-sport. Enthusiasts – and now also professionals – transform drones into seriously fast racing platforms. Expert drone racers can reach speeds up to 190 km/h. They fly by looking at a first-person view (FPV) from a camera mounted on the front of the drone, which transmits images in real time.

by   -   June 22, 2019
Effect of Population Based Augmentation applied to images: the applied augmentations differ at different percentages into training.

In this blog post we introduce Population Based Augmentation (PBA), an algorithm that quickly and efficiently learns a state-of-the-art approach to augmenting data for neural network training. PBA matches the previous best result on CIFAR and SVHN but uses one thousand times less compute, enabling researchers and practitioners to effectively learn new augmentation policies using a single workstation GPU. You can use PBA broadly to improve deep learning performance on image recognition tasks.

We discuss the PBA results from our recent paper and then show how to easily run PBA for yourself on a new data set in the Tune framework.
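To give a feel for the mechanics, PBA builds on population based training: a population of trials trains in parallel, and at fixed intervals underperforming trials copy the weights of top performers and perturb their augmentation hyperparameters, so the augmentation schedule is learned alongside the model. Below is a minimal sketch using Ray Tune’s `PopulationBasedTraining` scheduler with the legacy `tune.run` API; the trainable, the `val_acc` metric, and the `cutout_*` parameters are illustrative placeholders, not PBA’s actual search space:

```python
import random

from ray import tune
from ray.tune.schedulers import PopulationBasedTraining

def train_model(config):
    # Placeholder trainable: a real one would train for one epoch with the
    # current augmentation parameters, report validation accuracy, and
    # implement checkpointing so PBT can clone the weights of top trials.
    for _ in range(100):
        tune.report(val_acc=random.random())

pbt = PopulationBasedTraining(
    time_attr="training_iteration",
    perturbation_interval=3,  # exploit/explore every 3 reports
    hyperparam_mutations={
        # one probability and one magnitude per augmentation operation
        "cutout_prob": lambda: random.uniform(0.0, 1.0),
        "cutout_level": list(range(10)),
    },
)

analysis = tune.run(
    train_model,
    metric="val_acc",
    mode="max",
    num_samples=16,  # population size: 16 parallel trials
    scheduler=pbt,
    config={"cutout_prob": 0.2, "cutout_level": 1},
    stop={"training_iteration": 100},
)
print(analysis.best_config)
```

Because hyperparameters are perturbed on a fixed interval, what the search produces is a schedule of augmentation strengths over training epochs, and that schedule, rather than any single fixed policy, is what transfers to training new models.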

by   -   June 22, 2019

By Eugene Vinitsky

We are in the midst of an unprecedented convergence of two rapidly growing trends on our roadways: sharply increasing congestion and the deployment of autonomous vehicles. Year after year, highways get slower and slower: famously, China’s roadways were paralyzed by a two-week-long traffic jam in 2010. At the same time as congestion worsens, hundreds of thousands of semi-autonomous vehicles (AVs), which are vehicles with automated distance and lane-keeping capabilities, are being deployed on highways worldwide. The second trend offers a perfect opportunity to alleviate the first. The current generation of AVs, while very far from full autonomy, already holds a multitude of advantages over human drivers that make them perfectly poised to tackle this congestion. Humans are imperfect drivers: we accelerate when we shouldn’t, brake aggressively, and make short-sighted decisions, all of which create and amplify patterns of congestion.

by   -   June 3, 2019


By Avi Singh

Communicating the goal of a task to another person is easy: we can use language, show them an image of the desired outcome, point them to a how-to video, or use some combination of all of these. On the other hand, specifying a task to a robot for reinforcement learning requires substantial effort. Most prior work that has applied deep reinforcement learning to real robots makes use of specialized sensors to obtain rewards, or studies tasks where the robot’s internal sensors can be used to measure reward; examples include thermal cameras for tracking fluids and purpose-built computer vision systems for tracking objects. Since such instrumentation needs to be done for any new task that we may wish to learn, it poses a significant bottleneck to the widespread adoption of reinforcement learning for robotics, and precludes the use of these methods directly in open-world environments that lack this instrumentation.

by   -   May 27, 2019

By Marvin Zhang and Sharad Vikram

Imagine a robot trying to learn how to stack blocks and push objects using visual inputs from a camera feed. In order to minimize cost and safety concerns, we want our robot to learn these skills with minimal interaction time, but efficient learning from complex sensory inputs such as images is difficult. This work introduces SOLAR, a new model-based reinforcement learning (RL) method that can learn skills – including manipulation tasks on a real Sawyer robot arm – directly from visual inputs with under an hour of interaction. To our knowledge, SOLAR is the most efficient RL method for solving real world image-based robotics tasks.

by   -   May 12, 2019
Figure 1: Our model-based meta reinforcement learning algorithm enables a legged robot to adapt online in the face of an unexpected system malfunction (note the broken front right leg).

By Anusha Nagabandi and Ignasi Clavera

Humans have the ability to seamlessly adapt to changes in their environments: adults can learn to walk on crutches in just a few seconds, people can adapt almost instantaneously to picking up an object that is unexpectedly heavy, and children who can walk on flat ground can quickly adapt their gait to walk uphill without having to relearn how to walk. This adaptation is critical for functioning in the real world.

by   -   April 21, 2019

By Annie Xie

In many animals, tool-use skills emerge from a combination of observational learning and experimentation. For example, by watching one another, chimpanzees can learn how to use twigs to “fish” for insects. Similarly, capuchin monkeys demonstrate the ability to wield sticks as sweeping tools to pull food closer to themselves. While one might wonder whether these are just illustrations of “monkey see, monkey do,” we believe these tool-use abilities indicate a greater level of intelligence.
