Robohub.org
 

ShanghAI Lectures: Sukhan Lee “Cognitive Recognition for Service Robots”

23 January 2014




Guest talk in the ShanghAI Lectures, 2009-12-10

Recognition has been a subject of intense research interest in the computer vision, AI, cognitive science, and robotics communities for at least several decades. As a result, a rich body of research on computer-based recognition and understanding of symbols, objects, faces, gestures, and scenes is available today in the form of publications, open-source libraries, and commercial products. One of the major issues in recognition, especially in 2D recognition, has been how to deal with variations due to illumination, perspective, distance, texture, and occlusion. Conventional engineering approaches include the development of photometric and geometric features invariant to such variations, and the efficient organization of visual memory for appearance- or view-based matching.

Despite much success, these conventional approaches have turned out to be inadequate for the recognition involved in robotic services, such as errand-running or housekeeping, in real-world environments. This is because the environmental variations service robots must deal with are often beyond what conventional approaches can handle; furthermore, the preconditions for recognition, such as the target being in proper sight of, and at a proper distance from, the camera in the first place, may not be met when recognition is ordered. It is apparent that recognition for service robots should extend its scope beyond conventional matching and classification toward a more human-like capability, under the framework of cognitive vision or cognitive recognition.
In this lecture, I will introduce what constitutes cognitive recognition, namely:

1) bottom-up and top-down saliency detection, implementing focus of attention as an efficient yet weak initial recognition step;
2) integration with knowledge, to make the best use of recognition context;
3) probabilistic evidence fusion, for reaching a reliable decision;
4) proactive collection of evidence, for a better decision;
5) a cognitive engine in which recognition is processed as a self-defined mission.
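As a rough illustration of the evidence-fusion idea in point 3, independent visual cues can be combined with Bayes' rule in log-odds form: a weak initial detection supplies a prior, and each subsequent cue multiplies the odds by its likelihood ratio. This is a minimal sketch under a naive independence assumption; the function name, cue names, and numbers are illustrative, not from the lecture.

```python
import math

def fuse_evidence(prior: float, likelihood_ratios: list[float]) -> float:
    """Fuse independent pieces of evidence about one hypothesis
    (e.g. "this blob is the target object") via Bayes' rule in
    log-odds form: each likelihood ratio P(e|H)/P(e|not H)
    contributes its log to the prior log-odds."""
    log_odds = math.log(prior / (1.0 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    # Convert the fused log-odds back to a posterior probability.
    return 1.0 / (1.0 + math.exp(-log_odds))

# Weak initial detection (prior 0.2), then three hypothetical cues:
# color match (LR 4.0), shape match (LR 3.0), weak texture cue (LR 0.8).
posterior = fuse_evidence(0.2, [4.0, 3.0, 0.8])  # ≈ 0.71
```

Here the prior odds 0.25 are scaled by 4.0 × 3.0 × 0.8 = 9.6 to fused odds of 2.4, i.e. a posterior of about 0.71; a robot could also compare cues by expected information gain to decide which evidence to collect next, in the spirit of point 4.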

The ShanghAI Lectures are a videoconference-based lecture series on Embodied Intelligence, run and organized by Rolf Pfeifer (from 2009 till 2012), Fabio Bonsignorio (since 2013), and me with partners around the world.

https://www.youtube.com/watch?v=m-oHUbwprLc

Sukhan Lee received B.S. and M.S. degrees in Electrical Engineering from Seoul National University in 1972 and 1974, respectively, and a Ph.D. degree in Electrical Engineering from Purdue University, West Lafayette, in 1982. From 1983 to 1997 he was a professor in the Department of Electrical Engineering and Computer Science at the University of Southern California, and from 1990 to 1997 also a Senior Member of Technical Staff for Intelligent Robot R&D Programs at the Jet Propulsion Laboratory, NASA/California Institute of Technology. From 1998 to 2003 he was Executive Vice President and Chief Research Officer of the MEMS, Nano Systems and Intelligent Systems Programs and the Breakthrough Research Team at the Samsung Advanced Institute of Technology. Since 2003 he has been a professor in the School of Information and Communication Engineering and director of Intelligent Systems Research. His research interests lie in the areas of Cognitive Robotics, Intelligent Systems, and Micro/Nano Electro-Mechanical Systems.

The ShanghAI lectures have brought us a treasure trove of guest lectures by experts in robotics. You can find the whole series from 2012 here. Now, we’re bringing you the guest lectures you haven’t yet seen from previous years, starting with the first lectures from 2009 and releasing a new guest lecture every Thursday until all the series are complete. Enjoy!





Nathan Labhart Co-organizing the ShanghAI Lectures since 2009.





©2024 - Association for the Understanding of Artificial Intelligence


 











