
Autonomous Robots Blog


Homepage

The Autonomous Robots Blog aims to bring you the latest research in robotics published in the journal Autonomous Robots (Springer) in a fresh and interactive way through news posts, comments and videos. The blog is maintained by Sabine Hauert, Media Editor for Autonomous Robots.



Robots that interact with everyday users may need a combination of speech, gaze, and gesture behaviors to convey their message effectively. This is similar to human-human interactions except that every behavior the robot displays must be designed and programmed ahead of time. In other words, designers of robot applications must understand how each of these behaviors contributes to the robot’s effectiveness so that they can determine which behaviors must be included in the application’s design.

by   -   September 16, 2014


This post is part of our ongoing efforts to make the latest papers in robotics accessible to a general audience.

Individual robots have accomplished many impressive feats. However, certain tasks are much better suited for a team of robots. The use of multiple robots enables the team to divide the task space into regions and commence the task simultaneously in each region of the space (e.g., consider the case of multiple robotic lawn mowers simultaneously cutting grass over a large patch of land).
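As a toy illustration of the divide-and-conquer idea (not the method of any particular paper), a rectangular field can be split into equal vertical strips, one per mower, so every robot works its own region at the same time. The function name and the strip-based scheme here are just illustrative assumptions:

```python
def partition_field(width, height, n_robots):
    """Split a rectangular field into equal-width vertical strips,
    one per robot, so all robots can mow simultaneously.
    Each region is returned as (x_min, y_min, x_max, y_max)."""
    strip = width / n_robots
    return [(i * strip, 0.0, (i + 1) * strip, height)
            for i in range(n_robots)]

# Four mowers share a 100 m x 50 m field:
regions = partition_field(100.0, 50.0, 4)
```

Real systems use far more sophisticated decompositions (accounting for obstacles, robot capabilities, and travel costs), but the payoff is the same: the task runs in parallel across regions.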

Robots are increasingly being developed to work in close collaboration with humans to perform physical tasks. In these contexts, it’s important that we can infer the robot’s intent based on the motion it is making. However, the most logical movement for the robot is not necessarily the most intuitive for us to interpret.

Small Unmanned Aerial Vehicles (UAVs) can be both safe and maneuverable, but their small size means they can’t carry much payload and their battery life only allows for short flights. To increase the range of a small UAV, one idea is to pair it with an unmanned ground vehicle (UGV) that can carry it to a site of operation and transport heavier cargo.

Imagine a robot reaching for a mug on the table, only to realize that it is too far, or that it would need to bend its arm joint backwards to get there. Understanding which objects are within reach and how to grasp them is an essential requirement if robots are to operate in our everyday environments.
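The simplest version of this reachability question can be made concrete with a planar two-link arm: a target is within reach exactly when its distance from the shoulder lies between |l1 - l2| and l1 + l2. This is a minimal sketch of that geometric check, not the analysis used in the paper:

```python
import math

def within_reach(x, y, l1, l2):
    """A planar 2-link arm with link lengths l1 and l2 can reach
    the point (x, y) iff the distance from the base to the target
    lies in the annulus between |l1 - l2| and l1 + l2."""
    d = math.hypot(x, y)
    return abs(l1 - l2) <= d <= l1 + l2
```

A mug too far away falls outside the outer circle; one too close to the shoulder falls inside the inner one, which is why the robot would have to "bend its joint backwards" to get there.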


Robots are expected to manipulate a large variety of objects from our everyday lives. The first step is to establish a physical connection between the robot's end-effector and the object to be manipulated. In this context, that physical connection is a robotic grasp. Which grasp the robot adopts will depend on how it needs to manipulate the object. This problem is studied in the latest Autonomous Robots paper by Hao Dang and Peter Allen at Columbia University.


To manipulate objects, robots are often required to estimate their position and orientation in space. The robot will behave differently if it’s grasping a glass that is standing up, or one that has been tipped over.


To get around unknown environments, most robots will need to build maps. To help them do so, robots can use the fact that human environments are often made of geometric shapes like circles, rectangles and lines. The latest paper in Autonomous Robots presents a flexible framework for geometrical robotic mapping in structured environments.
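One basic building block for this kind of mapping is fitting a geometric primitive, such as a line representing a wall, to sensed points. Here is a minimal ordinary-least-squares line fit, offered only as an illustration of the idea rather than the framework the paper describes:

```python
def fit_line(points):
    """Fit a line y = m*x + b to 2D points by ordinary least squares.
    Mapping frameworks use fits like this to represent walls and
    corridors as compact line segments instead of raw point clouds."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

# Three collinear laser returns along a wall:
slope, intercept = fit_line([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
```

Representing the environment with a handful of such primitives keeps maps compact and makes structure like corners and doorways explicit.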

by   -   February 17, 2014


Teaching robots to do tasks is useful, and teaching them quickly and easily is even more useful. The TRIC algorithm, presented in the latest paper in Autonomous Robots, allows robots to observe a few motions from a human teacher, understand the essence of the demonstrated task, and then repeat it and adapt it to new situations.