
Articles

by   -   November 6, 2019

The Wyss Institute and SEAS robotics team built different models of the soft-actuator-powered RoboBee. Shown here are a four-wing, two-actuator model and an eight-wing, four-actuator model, the latter of which is the first soft-actuator-powered flying microrobot capable of controlled hovering flight. Credit: Harvard Microrobotics Lab/Harvard SEAS
By Leah Burrows

The sight of a RoboBee careening towards a wall or crashing into a glass box may have once triggered panic in the researchers in the Harvard Microrobotics Laboratory at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), but no more.

by   -   November 6, 2019

By David Gaddy

When learning to follow natural language instructions, neural networks tend to be very data hungry – they require a huge number of examples pairing language with actions in order to learn effectively. This post is about reducing those heavy data requirements by first watching actions in the environment before moving on to learning from language data. Inspired by the idea that it is easier to map language to meanings that have already been formed, we introduce a semi-supervised approach that aims to separate the formation of abstractions from the learning of language.
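
As a rough illustration of that two-phase idea, a pipeline might first learn an action representation from unlabeled environment traces and only then fit a small language encoder onto that latent space. The PyTorch sketch below is purely illustrative; the class names and training recipe are hypothetical, not the authors' code.

```python
import torch.nn as nn

class ActionAutoencoder(nn.Module):
    """Phase 1: form action abstractions from unlabeled environment traces."""
    def __init__(self, action_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(action_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, action_dim))

    def forward(self, actions):
        z = self.encoder(actions)
        return self.decoder(z), z

class InstructionEncoder(nn.Module):
    """Phase 2: map language onto the latent action space learned in phase 1."""
    def __init__(self, vocab_size, latent_dim=32, embed_dim=64):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)
        self.to_latent = nn.Linear(embed_dim, latent_dim)

    def forward(self, token_ids, offsets):
        return self.to_latent(self.embed(token_ids, offsets))

# Phase 1 (no language): minimize reconstruction loss on plentiful action data,
# so abstractions form before any language is seen.
# Phase 2 (small paired dataset): freeze the autoencoder and train the
# instruction encoder to predict the latent code of the demonstrated actions,
# e.g. with an MSE loss against ActionAutoencoder.encoder(actions).
```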

by   -   November 4, 2019

By K.N. McGuire, C. De Wagter, K. Tuyls, H.J. Kappen, G.C.H.E. de Croon

Greenhouses, search-and-rescue teams and warehouses are all looking for new methods to enable surveillance in a manner that is quick and safe for the objects and people around them. Many have already turned to robotics, but wheeled, ground-bound systems have limited maneuverability. Ideally, flying robots, a.k.a. micro aerial vehicles (MAVs), would take advantage of the third dimension to perform surveillance.

by   -   October 21, 2019

By Eric Liang and Richard Liaw and Clement Gehring

In this blog post, we explore a functional paradigm for implementing reinforcement learning (RL) algorithms. In this paradigm, developers write the numerics of their algorithm as independent, pure functions, and then use a library to compile them into policies that can be trained at scale. We share how these ideas were implemented in RLlib’s policy builder API, eliminating thousands of lines of “glue” code and bringing support for Keras and TensorFlow 2.0.
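
To give a flavor of this style, the sketch below writes an algorithm's numerics (a loss and a postprocessing step) as pure functions over batch data and hands them to a generic builder. This is a simplified stand-in for the idea only; it deliberately does not reproduce RLlib's actual policy builder API.

```python
import numpy as np

def pg_loss(log_probs, advantages):
    """Pure function: vanilla policy-gradient loss computed from batch arrays."""
    return -np.mean(log_probs * advantages)

def compute_advantages(rewards, gamma=0.99):
    """Pure function: discounted returns used as a crude advantage estimate."""
    returns, running = np.zeros_like(rewards, dtype=float), 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def build_policy(loss_fn, postprocess_fn):
    """Generic builder: wires the pure functions into a policy-like object."""
    class Policy:
        def postprocess(self, batch):
            batch["advantages"] = postprocess_fn(batch["rewards"])
            return batch
        def loss(self, batch):
            return loss_fn(batch["log_probs"], batch["advantages"])
    return Policy

# The numerics stay testable in isolation; only the builder knows about training.
MyPGPolicy = build_policy(pg_loss, compute_advantages)
policy = MyPGPolicy()
batch = {"rewards": np.array([1.0, 0.0, 1.0]),
         "log_probs": np.array([-0.1, -0.5, -0.2])}
print(policy.loss(policy.postprocess(batch)))
```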

by   -   October 3, 2019

Our work published recently in Science Robotics describes a new form of computer, ideally suited to controlling soft robots. Our Soft Matter Computer (SMC) is inspired by the way information is encoded and transmitted in the vascular system.

by   -   October 3, 2019

By Anusha Nagabandi

Dexterous manipulation with multi-fingered hands is a grand challenge in robotics: the versatility of the human hand is as yet unrivaled by the capabilities of robotic systems, and bridging this gap will enable more general and capable robots. Although some real-world tasks (like picking up a television remote or a screwdriver) can be accomplished with simple parallel jaw grippers, there are countless tasks (like functionally using the remote to change the channel or using the screwdriver to drive in a screw) in which the dexterity enabled by redundant degrees of freedom is critical.

In fact, dexterous manipulation is defined as being object-centric, with the goal of controlling object movement through precise control of forces and motions, something that is not possible without the ability to simultaneously impact the object from multiple directions. For example, using only two fingers to attempt common tasks such as opening the lid of a jar or hitting a nail with a hammer would quickly run into the challenges of slippage, complex contact forces, and underactuation.

Although dexterous multi-fingered hands can indeed enable flexibility and success across a wide range of manipulation skills, many of these more complex behaviors are also notoriously difficult to control: they require finely balancing contact forces, breaking and reestablishing contacts repeatedly, and maintaining control of unactuated objects. Success in such settings requires a sufficiently dexterous hand, as well as an intelligent policy that can endow such a hand with the appropriate control strategy. We study precisely this in our work on Deep Dynamics Models for Learning Dexterous Manipulation.
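
For readers unfamiliar with model-based control, the general recipe behind planning with a learned dynamics model can be sketched as random-shooting model-predictive control: sample candidate action sequences, roll them through the learned model, and execute the first action of the best-scoring sequence. The snippet below is only a generic illustration with hypothetical names; the paper's planner is more sophisticated.

```python
import numpy as np

def plan_action(dynamics_model, reward_fn, state, action_dim,
                horizon=10, n_candidates=500):
    """Random-shooting MPC: score sampled action sequences under the learned
    model and return the first action of the best sequence."""
    candidates = np.random.uniform(-1, 1, size=(n_candidates, horizon, action_dim))
    total_rewards = np.zeros(n_candidates)
    for i, seq in enumerate(candidates):
        s = state
        for a in seq:
            s = dynamics_model(s, a)           # learned model predicts the next state
            total_rewards[i] += reward_fn(s, a)
    return candidates[np.argmax(total_rewards)][0]

# Toy usage with stand-in model and reward (in practice the model is learned):
toy_dynamics = lambda s, a: s + 0.1 * a
toy_reward = lambda s, a: -np.sum(s ** 2)
print(plan_action(toy_dynamics, toy_reward, state=np.ones(3), action_dim=3))
```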

by   -   October 3, 2019

By Leah Burrows

What would it take to transform a flat sheet into a human face? How would the sheet need to grow and shrink to form eyes that are concave into the face and a convex nose and chin that protrude?

by   -   October 3, 2019

By Kourosh Hakhamaneshi

In this post, we share some recent promising results regarding the applications of Deep Learning in analog IC design. While this work targets a specific application, the proposed methods can be used in other black box optimization problems where the environment lacks a cheap/fast evaluation procedure.
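
As one generic pattern for such problems, a cheap learned surrogate can screen many candidate designs so that the expensive black-box evaluation (for example, a full circuit simulation) is spent only on the most promising ones. The sketch below is an assumption-laden illustration, not the method from the post; surrogate can be any regressor with fit/predict, and sample_candidates is a hypothetical design sampler.

```python
import numpy as np

def optimize(expensive_eval, surrogate, sample_candidates,
             n_rounds=20, n_samples=1000, top_k=5, n_init=20):
    """Surrogate-assisted black-box optimization (illustrative sketch)."""
    # Seed with a few expensive evaluations of random designs.
    designs = list(sample_candidates(n_init))
    scores = [expensive_eval(d) for d in designs]
    for _ in range(n_rounds):
        surrogate.fit(np.array(designs), np.array(scores))   # cheap model of the score
        pool = sample_candidates(n_samples)
        predicted = surrogate.predict(pool)                   # fast screening
        for d in pool[np.argsort(predicted)[-top_k:]]:        # only the most promising
            designs.append(d)
            scores.append(expensive_eval(d))                  # costly true evaluation
    best = int(np.argmax(scores))
    return designs[best], scores[best]
```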

by   -   October 3, 2019

One of the biggest urban legends growing up in New York City was the rumor about alligators living in the sewers. This myth even inspired a popular children’s book called “The Great Escape: Or, The Sewer Story,” with illustrations of reptiles crawling out of apartment toilets. To this day, city dwellers anxiously look at manholes wondering what lurks below. This curiosity was shared last month by the US Defense Department with its appeal for access to commercial underground complexes.

by   -   September 16, 2019

Jellyfish are about 95% water, making them some of the most diaphanous, delicate animals on the planet. But the remaining 5% of them have yielded important scientific discoveries, like green fluorescent protein (GFP) that is now used extensively by scientists to study gene expression, and life-cycle reversal that could hold the keys to combating aging. Jellyfish may very well harbor other, potentially life-changing secrets, but the difficulty of collecting them has severely limited the study of such “forgotten fauna.” The sampling tools available to marine biologists on remotely operated vehicles (ROVs) were largely developed for the marine oil and gas industries, and are much better-suited to grasping and manipulating rocks and heavy equipment than jellies, often shredding them to pieces in attempts to capture them.

by   -   August 25, 2019

A new generation of swarming robots which can independently learn and evolve new behaviours in the wild is one step closer, thanks to research from the University of Bristol and the University of the West of England (UWE).

by   -   August 25, 2019

By Laure-Anne Pessina and Nicola Nosengo

Scientists at EPFL have developed a tiny pump that could play a big role in the development of autonomous soft robots, lightweight exoskeletons and smart clothing. Flexible, silent and weighing only one gram, it is poised to replace the rigid, noisy and bulky pumps currently used. The scientists’ work has just been published in Nature.

by   -   August 25, 2019

The lightweight, versatile exosuit assists hip extension during uphill walking and at different running speeds in natural terrain. Credit: Wyss Institute at Harvard University

By Benjamin Boettner

Between walking at a leisurely pace and running for your life, human gaits can cover a wide range of speeds. Typically, we choose the gait that allows us to consume the least amount of energy at a given speed. For example, at low speeds the metabolic rate of walking is lower than that of running at a slow jog; vice versa, at high speeds the metabolic cost of running is lower than that of speed walking.

by   -   August 14, 2019

By Nicholas Carlini

Whenever designing new technologies, it is important to ask “how will this affect people’s privacy?” This question is especially important with regard to machine learning, where models are often trained on sensitive user data and then released to the public. For example, in the last few years we have seen models trained on users’ private emails, text messages, and medical records.

This article covers two aspects of our upcoming USENIX Security paper that investigates to what extent neural networks memorize rare and unique aspects of their training data.

Specifically, we quantitatively study to what extent the following problem actually occurs in practice:

by   -   July 11, 2019


A team of EPFL researchers has developed tiny 10-gram robots that are inspired by ants: they can communicate with each other, assign roles among themselves and complete complex tasks together. These reconfigurable robots are simple in structure, yet they can jump and crawl to explore uneven surfaces. The researchers have just published their work in Nature.




