

January 25, 2019

Cozmo robots and their accompanying tablets are being distributed to participants to take home, so that they can interact with them for a week as part of an experiment run by social robotics professor Emily Cross. Image credit – Ruud Hortensius and Emily Cross
By Frieda Klotz

People’s interactions with machines, from robots that throw tantrums when they lose a colour-matching game against a human opponent to the bionic limbs that could give us extra abilities, are not just revealing more about how our brains are wired – they are also altering them.

Emily Cross is a professor of social robotics at the University of Glasgow in Scotland who is examining the nature of human-robot relationships and what they can tell us about human cognition.

January 25, 2019

Developing countries must begin seriously considering how technological changes will impact labour trends. KC Jan/Shutterstock

By Asit K. Biswas, University of Glasgow and Kris Hartley, The Education University of Hong Kong

In the 21st century, governments cannot ignore how changes in technology will affect employment and political stability.

The automation of work – principally through robotics, artificial intelligence (AI) and the Internet of things (IoT), collectively known as the Fourth Industrial Revolution – will provide an unprecedented boost to productivity and profit. It will also threaten the stability of low- and mid-skilled jobs in many developing and middle-income countries.

January 17, 2019

In trials, the ResiBot robot learned to walk again in less than two minutes after one of its legs was removed. Image credit – Antoine Cully / Sorbonne University

By Gareth Willmer

It’s part of a field of work that is building machines that can provide real-time help using only limited data as input. Standard machine-learning algorithms often need to process thousands of possibilities before deciding on a solution, which may be impractical in pressurised scenarios where fast adaptation is critical.
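The contrast between exhaustive search and fast adaptation can be made concrete with a toy sketch. The function below is a hypothetical illustration, not the ResiBot team's algorithm: it assumes the robot carries a repertoire of behaviours with performance scores estimated beforehand (e.g. in simulation), then tries only the most promising few on the damaged robot instead of evaluating thousands of possibilities.

```python
def adapt(repertoire, evaluate, threshold, max_trials=5):
    """Try behaviours in order of prior predicted score, stopping at the
    first one whose measured real-world performance exceeds `threshold`.

    `repertoire` maps behaviour name -> score estimated before damage;
    `evaluate` measures actual performance on the damaged robot.
    """
    tried = []
    for name, _prior in sorted(repertoire.items(), key=lambda kv: -kv[1]):
        score = evaluate(name)
        tried.append((name, score))
        if score >= threshold:          # good enough: stop searching
            return name, tried
        if len(tried) >= max_trials:    # budget exhausted
            break
    # Fall back to the best behaviour measured so far
    return max(tried, key=lambda t: t[1])[0], tried

# Illustrative data: the damage has invalidated the prior favourite.
repertoire = {"gait_a": 0.9, "gait_b": 0.7, "gait_c": 0.4}
real_scores = {"gait_a": 0.2, "gait_b": 0.8, "gait_c": 0.5}
best, log = adapt(repertoire, real_scores.__getitem__, threshold=0.75)
```

Only two real-world trials are needed here: the old favourite fails, and the second candidate is accepted, which is the point of limiting evaluations in pressurised scenarios.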

January 7, 2019

This biocompatible sensor is made from a non-toxic, highly conductive liquid solution that could be used in diagnostics, therapeutics, human-computer interfaces, and virtual reality. Credit: Harvard SEAS

By Leah Burrows

Children born prematurely often develop neuromotor and cognitive developmental disabilities. The best way to reduce the impacts of those disabilities is to catch them early through a series of cognitive and motor tests. But accurately measuring and recording the motor functions of small children is tricky. As any parent will tell you, toddlers tend to dislike wearing bulky devices on their hands and have a predilection for ingesting things they shouldn’t.

January 7, 2019
Remote presence technology enables a medic to perform an ultrasound at the scene of an accident.
(University of Saskatchewan), Author provided

Ivar Mendez, University of Saskatchewan

It is the middle of winter, and a six-month-old child with acute respiratory distress is brought to a nursing station in a remote community in the Canadian North.

December 21, 2018

By Lindsay Brownell

Jet engines can have up to 25,000 individual parts, making regular maintenance a tedious task that can take over a month per engine. Many components are located deep inside the engine and cannot be inspected without taking the machine apart, adding time and costs to maintenance. This problem is not only confined to jet engines, either; many complicated, expensive machines like construction equipment, generators, and scientific instruments require large investments of time and money to inspect and maintain.

December 19, 2018

Artistic photo taken by Jerry H. Wright showing a hand-made shape generated following an emergent Turing pattern (displayed using the LEDs). The trajectory of one of the moving robots can be seen through long exposure. Jerry also used a filter to see the infrared communication between the robots (white light below the robots reflected on the table). Reprinted with permission from AAAS.

Work by I. Slavkov, D. Carrillo-Zapata, N. Carranza, X. Diego, F. Jansson, J. Kaandorp, S. Hauert, J. Sharpe

Our work published today in Science Robotics describes how we grow fully self-organised shapes using a swarm of 300 coin-sized robots. The work was led by James Sharpe at EMBL and the Centre for Genomic Regulation (CRG) in Barcelona – together with my team at the Bristol Robotics Laboratory and University of Bristol.
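The "emergent Turing pattern" mentioned in the caption refers to reaction-diffusion dynamics, where a local activator and a faster-diffusing inhibitor spontaneously break a near-uniform state into spots or stripes. The sketch below is an illustrative one-dimensional Gray-Scott model on a ring of 300 cells (one per robot), not the equations or parameters of the Science Robotics paper; the parameter values are common textbook choices.

```python
import numpy as np

def gray_scott_step(u, v, du=0.16, dv=0.08, f=0.060, k=0.062, dt=1.0):
    """One explicit Euler step of the 1-D Gray-Scott reaction-diffusion
    system with periodic boundaries (a ring of cells)."""
    lap = lambda x: np.roll(x, 1) + np.roll(x, -1) - 2 * x  # discrete Laplacian
    uvv = u * v * v
    u_new = u + dt * (du * lap(u) - uvv + f * (1 - u))      # substrate consumed
    v_new = v + dt * (dv * lap(v) + uvv - (f + k) * v)      # activator grows, decays
    return u_new, v_new

u = np.ones(300)        # substrate, initially uniform
v = np.zeros(300)
v[145:155] = 0.25       # a small local seed from which structure emerges
u[145:155] = 0.50
for _ in range(2000):
    u, v = gray_scott_step(u, v)
# Cells where v is high would switch on their LEDs, marking the grown shape.
```

In the swarm setting, each robot plays the role of one cell, exchanging concentration values with its neighbours over infrared instead of through a shared array.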

December 16, 2018

A research team from the University of Zurich and EPFL has developed a new drone that can retract its propeller arms in flight and make itself small to fit through narrow gaps and holes. This is particularly useful when searching for victims of natural disasters.

December 16, 2018

Better tracking of forest data will make the climate change reporting process easier for countries who want compensation for protecting their carbon stock. Image credit – lubasi, licensed under CC BY-SA 2.0

By Steve Gillman

Every year, 7 million hectares of forest are cut down, chipping away at the 485 gigatonnes of carbon dioxide (CO2) stored in trees around the world. Low-cost drones and new satellite imaging could soon protect these carbon stocks and help developing countries get paid for protecting their trees.

December 16, 2018

By Tuomas Haarnoja, Vitchyr Pong, Kristian Hartikainen, Aurick Zhou, Murtaza Dalal, and Sergey Levine

We are announcing the release of our state-of-the-art off-policy model-free reinforcement learning algorithm, soft actor-critic (SAC). The algorithm was developed jointly at UC Berkeley and Google Brain, and we have been using it internally for our robotics experiments. Soft actor-critic is, to our knowledge, one of the most efficient model-free algorithms available today, making it especially well-suited for real-world robotic learning. In this post, we benchmark SAC against state-of-the-art model-free RL algorithms and showcase a spectrum of real-world robot examples, ranging from manipulation to locomotion. We also release our implementation of SAC, designed specifically for real-world robotic systems.
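The core idea of SAC is maximum-entropy reinforcement learning: the critic's Bellman target adds an entropy bonus so the policy is rewarded for staying stochastic. As a minimal sketch (not the released implementation), the target for a batch of transitions can be written as follows, using the clipped double-Q trick that SAC shares with other modern off-policy methods:

```python
import numpy as np

def soft_td_target(reward, done, q1_next, q2_next, logp_next,
                   gamma=0.99, alpha=0.2):
    """Soft Bellman backup target y = r + gamma * (min(Q1', Q2') - alpha * log pi).

    The entropy bonus -alpha * log pi(a'|s') is what distinguishes SAC's
    target from a standard clipped double-Q TD target. `alpha` trades off
    reward against entropy.
    """
    soft_value = np.minimum(q1_next, q2_next) - alpha * logp_next
    return reward + gamma * (1.0 - done) * soft_value

# Single-transition example: r=1, not terminal, Q estimates 2 and 3,
# next-action log-probability -1.
y = soft_td_target(1.0, 0.0, 2.0, 3.0, -1.0)
```

With these numbers the soft value is min(2, 3) − 0.2·(−1) = 2.2, so the target is 1 + 0.99·2.2 = 3.178; the critics are then regressed toward y.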

December 4, 2018

By Chelsea Finn∗, Frederik Ebert∗, Sudeep Dasari, Annie Xie, Alex Lee, and Sergey Levine

With very little explicit supervision and feedback, humans are able to learn a wide range of motor skills by simply interacting with and observing the world through their senses. While there has been significant progress towards building machines that can learn complex skills and learn based on raw sensory information such as image pixels, acquiring large and diverse repertoires of general skills remains an open challenge. Our goal is to build a generalist: a robot that can perform many different tasks, like arranging objects, picking up toys, and folding towels, and can do so with many different objects in the real world without re-learning for each object or task.

November 14, 2018

By Esther Rolf∗, David Fridovich-Keil∗, and Max Simchowitz

In many tasks in machine learning, it is common to want to answer questions given fixed, pre-collected datasets. In some applications, however, we are not given data a priori; instead, we must collect the data we require to answer the questions of interest.
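When data must be collected rather than given, the collection rule itself becomes part of the algorithm. A hypothetical, deliberately simple illustration (not the authors' method): to learn a function on an interval under a query budget, always sample next where the data so far is sparsest, a space-filling heuristic that stands in for more principled uncertainty-driven acquisition.

```python
def next_query(queried, lo=0.0, hi=1.0):
    """Pick the next point to sample: the midpoint of the largest gap
    between the points queried so far (plus the interval endpoints).
    A simple stand-in for uncertainty-based active data collection."""
    xs = sorted([lo, hi] + list(queried))
    # (gap width, midpoint) for each pair of neighbours; take the widest gap
    gaps = [(b - a, (a + b) / 2) for a, b in zip(xs, xs[1:])]
    return max(gaps)[1]

# Starting from one sample at 0.5, queries progressively fill the interval.
first = next_query([0.5])          # widest remaining gap is [0.5, 1.0]
second = next_query([0.5, first])  # now the gap [0.0, 0.5] is widest
```

Replacing the gap width with a model's predictive uncertainty turns the same loop into a basic active-learning scheme.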

November 4, 2018

A crucial task for energy providers is the reliable and safe operation of their plants, especially when producing energy offshore. Autonomous mobile robots can offer comprehensive support through regular, automated inspection of machinery and infrastructure. In a world-first pilot installation, transmission system operator TenneT tested the autonomous legged robot ANYmal on one of the world’s largest offshore converter platforms in the North Sea.

November 4, 2018

Researchers from EPFL and Stanford have developed small drones that can land and then move objects that are 40 times their weight, with the help of powerful winches, gecko adhesives and microspines.

October 24, 2018

By Daniel Seita, Jeff Mahler, Mike Danielczuk, Matthew Matl, and Ken Goldberg

This post explores two independent innovations and the potential for combining them in robotics. Two years before the AlexNet results on ImageNet were released in 2012, Microsoft rolled out the Kinect for the Xbox. This class of low-cost depth sensors emerged just as deep learning boosted artificial intelligence by accelerating the performance of hyper-parametric function approximators, leading to surprising advances in image classification, speech recognition, and language translation.

