Amazon challenges robotics’ hot topic: Perception


by Frank Tobe
02 June 2015



RBO Team from TU Berlin wins the Amazon Picking Challenge at ICRA 2015. Photo Credit: RBO.


Capturing and processing camera and sensor data, recognizing various shapes, and determining a set of robotic actions from them is conceptually easy; doing it reliably is not. Amazon challenged the industry to perform a selecting and picking task robotically, and 28 teams from around the world rose to it.

Perception isn’t just about cameras and sensors; software has to convert the data and infer what it “sees”. In the case of the Amazon Picking Challenge, held last week at the IEEE International Conference on Robotics and Automation (ICRA), each team’s robot had to pick from a shopping list of consumer items of varying shapes and sizes – from pencils and toys to tennis balls, cookies and cereal boxes – which were haphazardly stored on shelves, and then place the selected items in a bin. Teams could use any robot, mobile or not, and any arm and end-of-arm grasping tool or tools to accomplish the task.

It’s tricky for robots to identify and locate objects when their sensors can be confused by plastic packaging or by the shelf and storage area itself. Rodney Brooks, of iRobot, MIT and Rethink Robotics fame, often speaks of an industry-wide aspirational goal for perception in robotics: “If we were only able to provide the visual capabilities of a 2-year-old child, robots would quickly get a lot better.” That is what this contest is all about.

Software has to first identify the item to pick, and then figure out the best way to grab it and move it out of the storage area. Amazon, with its acquisition of Kiva Systems, has mastered bringing goods to the picker/packer; now it wants to automate the remaining process of picking the correct goods from the shelves and placing them in the packing box – hence the Amazon Picking Challenge.
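To make those two stages concrete, here is a minimal sketch in Python of how such a pick pipeline could be organized. Everything in it – the Detection structure, the item names, and the grasp-selection rule – is an illustrative assumption, not the software any team actually used.

from dataclasses import dataclass

@dataclass
class Detection:
    """A hypothetical perception result for one item on the shelf."""
    item_id: str
    confidence: float  # the vision system's certainty, 0..1
    occluded: bool     # partially hidden by packaging or neighboring items?

def identify(detections, target):
    """Stage 1: choose the most confident, least occluded view of the target."""
    candidates = [d for d in detections if d.item_id == target]
    if not candidates:
        return None  # the item was not recognized anywhere on the shelf
    return max(candidates, key=lambda d: (not d.occluded, d.confidence))

def plan_grasp(detection):
    """Stage 2: choose a grasp strategy. A clear view allows a direct
    suction grasp; an occluded item may need to be nudged free first."""
    return "suction" if not detection.occluded else "reposition, then suction"

# A cluttered shelf bin as a vision system might report it (invented data).
shelf = [
    Detection("tennis_balls", confidence=0.91, occluded=False),
    Detection("cereal_box", confidence=0.67, occluded=True),
]

target = identify(shelf, "tennis_balls")
if target is not None:
    print(f"Pick {target.item_id} via: {plan_grasp(target)}")

In the real contest, of course, the hard part is stage 1: turning noisy images and depth data into detections reliable enough that stage 2 is worth running at all.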

Teams were scored on how many items were correctly selected, picked and placed. The top three finishers were the Technical University of Berlin (TU Berlin) with 148 points, MIT in second place with 88 points, and a joint Oakland University and Dataspeed team in third with 35 points.

Many commercial companies with proprietary software for just this type of application – such as Tennessee-based Universal Robotics, with its Neocortex Vision System, and Silicon Valley startup Fetch Robotics, which was premiering its new Fetch and Freight system at the same ICRA conference – chose not to enter because the terms of the challenge required that the software be open-sourced.

Team RBO from TU Berlin wrote their own vision system software and will soon be working on a paper on the subject. They used a Barrett WAM arm because it was the most flexible device for this task among the arms they had access to in their lab. For grasping, they used a vacuum-cleaner nozzle augmented with a suction cup, with a vacuum cleaner providing the suction. For a base, they decided they needed to go mobile and used an old Nomadic Technologies platform, which they upgraded to fit the needs of the contest (Nomadic no longer exists; it was acquired by 3Com in 2000). TU’s photo at right shows the robot’s camera view and the random placement of items in the cubby holes. Watch a video of their winning run here:

Noriko Takiguchi, a Japanese reporter for RoboNews.net who was at the contest, observed the TU team and said that their approach combined torque-based and position control of the arm and the mobile base. The resulting torque control gave them flexibility in choosing where to place the suction cup and how much suction to apply.
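As a rough illustration of why torque control helps here – this is a generic impedance-style control law, not RBO’s actual controller, and all gains and joint values below are invented – a torque-based position controller commands joint torques computed from position error, so the force pressed against an item stays bounded instead of being fought:

import numpy as np

def torque_command(q, q_dot, q_des, K, D, g):
    """Generic impedance-style law: stiffness on position error, damping on
    velocity, plus gravity compensation. Commanding torque (not position)
    keeps contact forces at the suction cup proportional to the error."""
    return K @ (q_des - q) - D @ q_dot + g

# Toy two-joint example with invented gains and state.
q     = np.array([0.10, -0.30])  # current joint angles (rad)
q_dot = np.array([0.00,  0.05])  # current joint velocities (rad/s)
q_des = np.array([0.15, -0.25])  # desired joint angles (rad)
K = np.diag([50.0, 40.0])        # stiffness gains (N·m/rad)
D = np.diag([5.0, 4.0])          # damping gains (N·m·s/rad)
g = np.array([0.0, 1.2])         # gravity-compensation torques (N·m)

print(torque_command(q, q_dot, q_des, K, D, g))  # torques sent to the motors

Softening the stiffness gains K makes the arm compliant on contact – which is exactly the behavior that lets a suction cup settle against an oddly placed item rather than crash into it.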

Team RBO received the first-place prize of $20,000, plus travel costs for the equipment and team members.





Frank Tobe is the owner and publisher of The Robot Report, and is also a panel member for Robohub's Robotics by Invitation series.




