“Almost exactly a year ago, we posted about how Ashutosh Saxena’s lab at Cornell was teaching robots to use their “imaginations” to picture how a human would want a room organized. The research was successful: algorithms that used hallucinated humans (which are the best sort of humans) to guide object placement performed significantly better than other methods. Cool stuff indeed, and now comes the next step: labeling 3D point clouds obtained from RGB-D sensors by leveraging the context of those hallucinated people.”
See on spectrum.ieee.org
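To give a rough sense of how human-context labeling might work, here is a minimal Python sketch of the general idea: score candidate object labels for point-cloud segments by how they relate to sampled (“hallucinated”) human poses. Everything here is an invented illustration under stated assumptions, not the Cornell lab’s actual algorithm: the segment centroids, the hand positions, the `preferred_hand_distance` prior, and the Gaussian scoring are all hypothetical.

```python
# Illustrative sketch only: label point-cloud segments using the context of
# sampled ("hallucinated") human poses. Not the actual research code; all
# poses, priors, and distances below are invented for illustration.
import numpy as np

# Hypothetical scene: segment centroids from an RGB-D point cloud (x, y, z in meters).
segments = {
    "seg_a": np.array([0.6, 0.4, 0.75]),   # on a table-height surface
    "seg_b": np.array([2.0, 1.5, 0.05]),   # near the floor
}

# Hallucinated human poses, reduced here to a single key point per pose
# (e.g., a hand position for a seated pose and a standing pose).
hallucinated_hands = np.array([
    [0.5, 0.5, 0.8],   # seated pose, hand over the table
    [2.1, 1.4, 1.1],   # standing pose, hand at waist height
])

# Invented affordance prior: preferred distance (m) from a human hand to an
# object of each class, loosely encoding "mugs sit within reach".
preferred_hand_distance = {"mug": 0.2, "shoe": 1.0}

def label_score(centroid: np.ndarray, label: str) -> float:
    """Score a label by how close the segment is to some hallucinated hand,
    relative to that class's preferred reach distance."""
    dists = np.linalg.norm(hallucinated_hands - centroid, axis=1)
    best = dists.min()
    # Gaussian penalty on deviation from the preferred distance.
    return float(np.exp(-((best - preferred_hand_distance[label]) ** 2) / 0.1))

for seg, centroid in segments.items():
    scores = {lbl: label_score(centroid, lbl) for lbl in preferred_hand_distance}
    print(seg, "->", max(scores, key=scores.get), scores)
```

Running this prints the highest-scoring label per segment; the table-height segment near the seated pose’s hand scores well as a “mug”, while the floor-level segment does not. The point of the design is simply that human context, even imagined, constrains what objects plausibly are and where they plausibly go.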