Fully autonomous robotics could be achievable today if objects could tell a robot what they are, what they are for, and how to use them. Liatris is a new open-source project built with ROS whose objective is to reliably read an object’s pose and identity without relying on vision. It presents an opportunity to break down the barriers that keep robots out of our everyday lives.
When we look at an image, we not only recognize object and scene categories, but can also infer various aesthetic, cultural and historical aspects. For example, an expert (or even an average person) can look at a fine art painting and infer information about its style (e.g. Baroque vs. Impressionism), its genre (e.g. a portrait or a landscape), and even the artist who painted it. People can also look at two paintings and find similarities between them in terms of composition, color, texture, subject matter, etc. This impressive human ability to learn and judge complex aesthetic-related visual concepts has long been thought not to be a logical process. In our research, however, we tackle this problem with a computational methodology to show that machines can in fact learn such aesthetic-related concepts.
In this episode, Audrow Nash interviews Peter Corke from Queensland University of Technology about computer vision, the subject of his plenary talk at IROS 2014 (link to slides below). Corke begins with a brief history of biological vision before discussing some early and more modern implementations of computer vision. He also talks about resources for those interested in learning computer vision, including his book, Robotics, Vision and Control, and a massive open online course (MOOC) that he plans to release in 2015.