
Vision, force and touch for manipulation

July 22, 2010

Manipulating objects is still a major challenge for robots in human-centered environments. To overcome this hurdle, Prats et al. propose to combine vision, force and tactile sensing to achieve robust and reliable manipulation with a robot arm fitted with a 3-finger hand (see video below).

Using three sensing modalities increases the robustness of the system, since each sensor taken alone has shortcomings. Vision, for example, can track the manipulated object and can therefore be used to control manipulation, but cameras may be badly calibrated or the object may be occluded. Force sensing measures the forces applied to the robot arm to make sure efforts are focused in the right direction, but it cannot tell whether the robot has a good grip on the object, and a poor grip may cause the object to slip. Tactile sensing fills this gap: it lets the robot feel the manipulated object and readjust the position of the manipulator when errors occur.
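The idea of blending complementary modalities, each of which may drop out, can be sketched in a few lines. The function below is a hypothetical illustration, not the controller of Prats et al.: it assumes each modality yields a 3-vector Cartesian correction, blends them with fixed weights, and simply skips a modality whose reading is unavailable (e.g. occluded vision), renormalising the remaining weights.

```python
# Hypothetical sketch of multimodal fusion for a manipulation controller.
# Weights, vector dimensionality and the fusion rule are illustrative
# assumptions, not taken from the paper.

def fuse_corrections(vision=None, force=None, tactile=None,
                     weights=(0.5, 0.3, 0.2)):
    """Blend per-modality corrections (3-vectors) into one command.

    Any modality may be None (sensor occluded or not fitted); the
    weights of the remaining modalities are renormalised so the
    command keeps a consistent scale.
    """
    readings = [vision, force, tactile]
    active = [(w, r) for w, r in zip(weights, readings) if r is not None]
    if not active:
        return [0.0, 0.0, 0.0]  # no information at all: hold still
    total_w = sum(w for w, _ in active)
    return [sum(w * r[i] for w, r in active) / total_w for i in range(3)]
```

For instance, with only vision available the vision correction passes through at full scale, while with vision and force available the command is their weighted average.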

To prove their point, Prats et al. test different combinations of the three sensor modalities on a task that is tricky for robots: opening a sliding door. In the end, the combination of vision, force and tactile sensing saves the day.

Sabine Hauert is a lecturer at the Bristol Robotics Laboratory and co-founder of Robohub, the Robots Podcast and the Autonomous Robots blog.



