In the future, smart textile-based soft robotic exosuits could be worn by soldiers, firefighters and rescue workers to help them traverse difficult terrain and arrive fresh at their destinations, so that they can perform their respective tasks more effectively. They could also become a powerful means of enhancing mobility and quality of life for people suffering from neurodegenerative disorders and for the elderly.
MIT computer scientists have developed a system that learns to identify objects within an image based on a spoken description of the image. Given an image and an audio caption, the model highlights, in real time, the relevant regions of the image being described.
Manipulating delicate tissues such as blood vessels during difficult surgeries, or gripping fragile organisms in the deep sea, presents a challenge to surgeons and researchers alike. Roboticists have made inroads into this problem by developing microscale soft actuators that are made of elastic materials and, through the expansion or contraction of embedded active components, can change their shapes to gently handle objects without damaging them. However, the specific designs and materials used to fabricate them so far still limit their range of motion and the strength they can exert at the scales at which surgeons and researchers would like to use them.
In this post, we demonstrate how deep reinforcement learning (deep RL) can be used to learn how to control dexterous hands for a variety of manipulation tasks. We discuss how such methods can learn to make use of low-cost hardware, can be implemented efficiently, and how they can be complemented with techniques such as demonstrations and simulation to accelerate learning.
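The post's methods are far more involved, but the core policy-gradient idea behind deep RL can be shown in miniature. The sketch below (purely illustrative; the bandit task, names, and hyperparameters are assumptions, not from the post) uses REINFORCE to learn which of two toy "actions" pays more reward:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "environment": arm 1 pays more on average than arm 0.
ARM_MEANS = np.array([0.2, 0.8])

def pull(arm):
    """Sample a noisy reward for the chosen arm."""
    return rng.normal(ARM_MEANS[arm], 0.1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Policy parameters: one preference score per arm.
theta = np.zeros(2)
lr = 0.1

for step in range(2000):
    probs = softmax(theta)
    arm = rng.choice(2, p=probs)
    reward = pull(arm)
    # REINFORCE gradient of log pi(arm): indicator(arm) - probs.
    grad = -probs.copy()
    grad[arm] += 1.0
    theta += lr * reward * grad

final_probs = softmax(theta)
print(final_probs)  # the policy should now strongly favor arm 1
```

Real dexterous-manipulation pipelines replace the bandit with a physics environment and the two-entry `theta` with a neural network, but the update rule is the same in spirit: actions that led to higher reward are made more probable.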
If you follow the robotics community in the twittersphere, you'll have noticed that Rodney Brooks is publishing a series of essays on the future of robotics and AI that has been attracting wide attention.
We are excited to announce the AI Driving Olympics (AI-DO), a new competition focused on AI for self-driving cars. The first edition will be held at NIPS 2018; the second edition will be at ICRA 2019.
Investigating inside the human body often requires cutting open a patient or swallowing long tubes with built-in cameras. But what if physicians could get a better glimpse in a way that is less expensive, less invasive, and less time-consuming?
Roboticists are envisioning a future in which soft, animal-inspired robots can be safely deployed in difficult-to-access environments where rigid robots cannot currently be used, such as inside the human body or in spaces too dangerous for humans to work. Centimeter-sized soft robots have been created, but thus far it has not been possible to fabricate multifunctional flexible robots that can move and operate at smaller size scales.
The deep ocean – dark, cold, under high pressure, and airless – is notoriously inhospitable to humans, yet it teems with organisms that manage to thrive in its harsh environment. Studying those creatures requires specialized equipment mounted on remotely operated vehicles (ROVs) that can withstand those conditions in order to collect samples. This equipment, designed primarily for the underwater oil and mining industries, is clunky, expensive, and difficult to maneuver with the kind of control needed for interacting with delicate sea life. Picking a delicate sea slug off the ocean floor with these tools is akin to trying to pluck a grape using pruning shears.
An earlier version of this post was published on Off the Convex Path. It is reposted here with the author’s permission.
In the last few years, deep learning practitioners have proposed a litany of different sequence models. Although recurrent neural networks were once the tool of choice, now models like the autoregressive Wavenet or the Transformer are replacing RNNs on a diverse set of tasks. In this post, we explore the trade-offs between recurrent and feed-forward models.
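One concrete trade-off the post explores is parallelism: a recurrent model must process a sequence left to right because each state depends on the previous one, while a feed-forward model such as a causal convolution computes every position from a fixed window of past inputs, so all positions can be computed independently. A minimal NumPy sketch (illustrative only; these are not the models discussed in the post, and the shapes and names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def run_rnn(xs, W_h, W_x):
    """Recurrent model: the hidden state threads through time,
    so the loop over positions is inherently sequential."""
    h = np.zeros(W_h.shape[0])
    outs = []
    for x in xs:  # position t cannot start before t-1 finishes
        h = np.tanh(W_h @ h + W_x @ x)
        outs.append(h)
    return np.stack(outs)

def run_causal_conv(xs, kernels):
    """Feed-forward model: each output depends only on a fixed window
    of past inputs, so every position can be computed independently."""
    T, k = len(xs), len(kernels)
    padded = np.concatenate([np.zeros((k - 1, xs.shape[1])), xs])
    return np.stack([
        np.tanh(sum(kernels[j] @ padded[t + j] for j in range(k)))
        for t in range(T)  # no dependence between positions
    ])

T, d = 6, 4
xs = rng.normal(size=(T, d))
W_h = rng.normal(size=(d, d)) * 0.1
W_x = rng.normal(size=(d, d)) * 0.1
kernels = rng.normal(size=(3, d, d)) * 0.1

rnn_out = run_rnn(xs, W_h, W_x)
conv_out = run_causal_conv(xs, kernels)
print(rnn_out.shape, conv_out.shape)  # both (6, 4)
```

The price of the feed-forward formulation is a bounded context (here, three past steps), whereas the recurrent state can in principle summarize the entire history; that tension is at the heart of the trade-offs discussed in the post.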