Amazon Picking Challenge

05 November 2014


In September 2014, Amazon announced the Amazon Picking Challenge, a new robot manipulation contest held as part of the ICRA robot challenges. The idea was to take a difficult problem from the warehouse order-fulfillment industry and see how far contestants could get by applying innovative robotics research. The challenge centers on a problem the robotics community has been chipping away at for decades: picking a variety of distinct items off a shelf under uncertain conditions. The location of each item is known only to within a few dozen centimeters, and only vague high-level information is known about the item itself. The challenge description has obvious roots in warehouse fulfillment, but this is clearly a generic task robots need to get better at to be useful in a variety of everyday contexts.

When inventory comes into an Amazon warehouse it is singulated (pulled apart into individual items a customer might purchase) and stocked into shelves, and the location of the item is tagged in a warehouse management system. Later, when the item is ordered, a person is directed to the shelf that contains this item and picks it off. Shelves are often stocked as densely as possible to maximize the value of the warehouse. In a system that handles millions of different products, it is economical to store a variety of items in the same space. The resulting mixed and cluttered bin is relatively easy for a human to work with, but poses a challenge for traditional automation systems.
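The stow-and-pick bookkeeping described above can be sketched as a toy inventory map. This is a minimal illustration only; the `Warehouse` class and all names in it are hypothetical, and a real warehouse management system is of course far more involved.

```python
# Toy sketch of the stow/pick bookkeeping described above.
# All names here are illustrative, not any real system's API.
from collections import defaultdict

class Warehouse:
    def __init__(self):
        # bin_id -> list of item SKUs; mixed bins are allowed and common
        self.bins = defaultdict(list)
        # sku -> set of bin_ids currently holding at least one unit
        self.locations = defaultdict(set)

    def stow(self, sku, bin_id):
        """Record a singulated item placed into a (possibly mixed) bin."""
        self.bins[bin_id].append(sku)
        self.locations[sku].add(bin_id)

    def pick(self, sku):
        """Direct a picker to any bin holding the item and remove one unit."""
        bin_id = next(iter(self.locations[sku]))
        self.bins[bin_id].remove(sku)
        if sku not in self.bins[bin_id]:
            self.locations[sku].discard(bin_id)
        return bin_id

w = Warehouse()
w.stow("rice-bag", "B7")
w.stow("usb-stick", "B7")   # same bin, different product: a mixed bin
print(w.pick("usb-stick"))  # -> B7
```

Note that nothing in this map records *where inside the bin* the item sits or what state it is in; that gap is exactly what makes the physical pick easy for a person and hard for a robot.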

We might have an idea of what the item ordinarily looks like beforehand, but perhaps this time the packaging has changed color, the manufacturer has added an extra plastic bag to protect it, or it is folded in half inside the bin. Products in an Amazon warehouse have a wide range of visual and physical properties. A real product is rarely just a simple brown cardboard box. It could be a large stuffed animal, a floppy bag of rice, a package of cookies (cookies are more appetizing in their un-crushed state), a necklace, or a small USB stick. The goal in creating this challenge was to clearly define a simplified but applicable version of the real world problem.

The ICRA challenge lists 27 different items that will be randomly stocked inside a pre-defined shelf structure. The challenge carefully avoids putting constraints on the solution space. The goal of this first challenge year is not to drive toward any one specific software or hardware architecture, but to provide the robotics community with a difficult, real-world problem. The human arm has 7 degrees of freedom, but is this really the best configuration for such a task? Robot end-effector designs are undergoing a renaissance (as evidenced by the variety of new gripper companies appearing each year), but which designs fit well in tight spaces? Perhaps calling this the ‘Picking’ challenge was misleading. Pick-and-place is a convenient categorization in robotics, but observation of human strategies in these tasks has shown that human manipulation is far more nuanced than the simple approach-grasp-retreat paradigm we traditionally see in robots. Humans take advantage of many different environmental constraints to perform the task effectively.
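The traditional approach-grasp-retreat paradigm mentioned above can be sketched as a short waypoint sequence. This is a hypothetical illustration of the classic three-phase pick, not any team's actual controller; the `Pose` type and `standoff` parameter are assumptions for the sketch.

```python
# Minimal sketch of the classic approach-grasp-retreat pick,
# the simple paradigm the post contrasts with nuanced human strategies.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float

def approach_grasp_retreat(item_pose, standoff=0.10):
    """Yield the waypoint sequence of the three-phase pick (meters)."""
    # 1. Approach: move to a pre-grasp pose offset above the item
    yield "approach", Pose(item_pose.x, item_pose.y, item_pose.z + standoff)
    # 2. Grasp: move in and close the gripper at the item pose
    yield "grasp", item_pose
    # 3. Retreat: back out along the same axis with the item in hand
    yield "retreat", Pose(item_pose.x, item_pose.y, item_pose.z + standoff)

phases = [name for name, _ in approach_grasp_retreat(Pose(0.4, 0.0, 0.2))]
print(phases)  # -> ['approach', 'grasp', 'retreat']
```

The rigidity of this sequence is the point: a human picking from a cluttered bin will slide, tilt, brace against a shelf wall, or re-grasp, none of which fits a fixed three-waypoint plan.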


The response from the community so far has been fantastic. Over 150 people have signed up for the contest email list, and three robot distributors have agreed to bring base platforms if necessary for teams that can’t bring their own bot (Rethink Robotics with the Baxter Research Robot, Clearpath Robotics with the PR2 Robot, and Olympus Controls with the UR5). We hope this is the beginning of a fun and fruitful event that helps bring together subdomains of robot manipulation to accomplish some very cool tasks. We look forward to seeing everyone at ICRA in 2015!

The first deadline for the challenge is to submit a video this year; teams that do will receive free challenge supplies (items and a shelf) in the mail. Learn more at:


Joe Romano Joe is a Research Scientist developing next-generation robotic platforms at Kiva Systems in North Reading, Massachusetts. Prior to Kiva, Joe was part of the engineering team that brought Rethink Robotics Baxter Robot to life.


©2021 - ROBOTS Association
