The European Robotics Forum 2018 (ERF2018), the most influential meeting of the robotics community in Europe, takes place in Tampere on 13-15 March 2018. ERF brings together over 900 leading scientists, companies, and policymakers for the largest robotics networking event in Europe.
If you’re a rock climber, hiker, runner, dancer, or anyone who likes recording themselves while in motion, a personal drone companion can now do all the filming for you — completely autonomously.
Skydio, a San Francisco-based startup founded by three MIT alumni, is commercializing an autonomous video-capturing drone — dubbed by some as the “selfie drone” — that tracks and films a subject, while freely navigating any environment.
When it comes to soft, assistive devices — like the exosuit being designed by the Harvard Biodesign Lab — the wearer and the robot need to be in sync. But every human moves a bit differently and tailoring the robot’s parameters for an individual user is a time-consuming and inefficient process.
Now, researchers from the Wyss Institute for Biologically Inspired Engineering and the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed an efficient machine learning algorithm that can quickly tailor personalized control strategies for soft, wearable exosuits.
Today, when an enterprise wants to use machine learning to solve a problem, it has to call in the cavalry. Even a simple problem requires multiple data scientists, machine learning experts, and domain experts to come together to agree on priorities and exchange data and information.
Researchers at Harvard University have built soft robots inspired by nature that can crawl, swim, grasp delicate objects and even assist a beating heart, but none of these devices has been able to sense and respond to the world around them.
In this episode of Robots in Depth, Per Sjöborg speaks with Jana Tumova, Assistant Professor at the KTH Royal Institute of Technology, about formal verification of computer systems and synthesizing controllers from models.
Who needs legs? With their sleek bodies, snakes can slither at up to 14 miles per hour, squeeze into tight spaces, scale trees, and swim. How do they do it? It’s all in the scales. As a snake moves, its scales grip the ground and propel the body forward — similar to how crampons help hikers establish footholds in slippery ice. This so-called “friction-assisted locomotion” is possible because of the shape and positioning of a snake’s scales.
In this episode of Robots in Depth, Per Sjöborg speaks with Henrik Christensen, the Qualcomm Chancellor’s Chair of Robot Systems and a Professor of Computer Science in the Department of Computer Science and Engineering at UC San Diego. He is also the director of the Institute for Contextual Robotics. Prior to UC San Diego, he was the founding director of the Institute for Robotics and Intelligent Machines (IRIM) at the Georgia Institute of Technology (2006-2016).
It’s called the “grain,” a small IoT device implanted into the back of people’s skulls to record their memories. Human experiences are simply played back in “redo mode” using a smart button remote. The technology promises to reduce crime and terrorism and to simplify human relationships through greater transparency. While this is a description of Netflix’s Black Mirror episode “The Entire History of You,” in reality the concept is not as far-fetched as it may seem. This week life came closer to imitating art with a $19 million grant from the US Department of Defense to a group of six universities to begin work on “neurograins.”
For us humans, a healthy brain handles all the minute details of bodily motion without demanding conscious attention. Not so for brainless robots – in fact, calculating robotic movement is its own scientific subfield.
My colleagues here at the University of Washington’s Institute for Protein Design have figured out how to apply an algorithm originally designed to help robots move to an entirely different problem: drug discovery. The algorithm has helped unlock a class of molecules known as peptide macrocycles, which have appealing pharmaceutical properties.
Unpacking groceries is a straightforward albeit tedious task: You reach into a bag, feel around for an item, and pull it out. A quick glance will tell you what the item is and where it should be stored.
In this episode of Robots in Depth, Per Sjöborg speaks with Peter Corke, distinguished professor of robotic vision from Queensland University of Technology, and Director of the ARC Centre of Excellence for Robotic Vision. Peter is well known for his work in computer vision and has written one of the books that defines the area. He talks about how serendipity made him build a checkers playing robot and then move on to robotics and machine vision. We get to hear about how early experiments with “Blob Vision” got him interested in analyzing images and especially moving images, and his long and interesting journey giving robots eyes to see the world.
Companies like Amazon have big ideas for drones that can deliver packages right to your door. But even putting aside the policy issues, programming drones to fly through cluttered spaces like cities is difficult. Being able to avoid obstacles while traveling at high speeds is computationally complex, especially for small drones that are limited in how much they can carry onboard for real-time processing.