Robohub.org
 

ShanghAI Lectures: Sukhan Lee “Cognitive Recognition for Service Robots”


23 January 2014




Guest talk by Sukhan Lee in the ShanghAI Lectures, 2009-12-10

Recognition has been a subject of intense research interest in the computer vision, AI, cognitive science, and robotics communities for at least several decades. As a result, a rich body of research outcomes on computer-based recognition and understanding of symbols, objects, faces, gestures, and scenes is available today in the form of publications, open-source libraries, and commercial products. One of the major issues in recognition, especially in 2D recognition, has been how to deal with variations due to illumination, perspective, distance, texture, and occlusion. Conventional engineering approaches include the development of photometric and geometric features invariant to such variations, and the efficient organization of visual memory for appearance/view-based matching. Despite much success, these conventional approaches have turned out to be inadequate for the recognition involved in robotic services, such as errand-running or housekeeping, in real-world environments. This is because the environmental variations that service robots must deal with are often beyond what conventional approaches can handle; furthermore, the preconditions for recognition, such as the target being within proper sight of, and at a proper distance from, the camera in the first place, may not necessarily be met when recognition is requested. It is apparent that recognition for service robots should extend its scope beyond conventional matching and classification toward a more human-like capability, under the framework of cognitive vision or cognitive recognition.
In this lecture, I will introduce what constitutes cognitive recognition, namely: 1) bottom-up and top-down saliency detection, implementing focus of attention as an efficient yet weak initial recognition step; 2) integration with knowledge to make the best use of recognition context; 3) probabilistic evidence fusion for reaching a reliable decision; 4) proactive collection of evidence for better decisions; and 5) a cognitive engine in which recognition is processed as a self-defined mission.
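The probabilistic evidence fusion in point 3 can be illustrated with a minimal naive-Bayes sketch. This is not code from the lecture: the function name, the conditional-independence assumption, and the cue values below are all hypothetical, chosen only to show how several weak cues can be combined into one reliable decision.

```python
def fuse_evidence(prior, likelihood_ratios):
    """Return the posterior P(object present) after fusing independent cues.

    prior             -- prior probability that the target object is present
    likelihood_ratios -- per-cue ratios P(cue | present) / P(cue | absent),
                         assumed conditionally independent given the hypothesis
    """
    odds = prior / (1.0 - prior)       # convert the prior probability to odds
    for lr in likelihood_ratios:       # each independent cue multiplies the odds
        odds *= lr
    return odds / (1.0 + odds)         # convert the fused odds back to a probability


# Three weak visual cues (e.g. colour, shape, texture), each only mildly
# informative on its own, yield a much stronger fused belief:
posterior = fuse_evidence(0.5, [2.0, 3.0, 1.5])
print(round(posterior, 3))  # -> 0.9
```

The same multiplicative update also shows why proactive evidence collection (point 4) helps: gathering one more informative cue, e.g. by moving the camera, simply appends another likelihood ratio to the list.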

The ShanghAI Lectures are a videoconference-based lecture series on Embodied Intelligence, run and organized by Rolf Pfeifer (from 2009 till 2012), Fabio Bonsignorio (since 2013), and me with partners around the world.

https://www.youtube.com/watch?v=m-oHUbwprLc

Sukhan Lee received B.S. and M.S. degrees in Electrical Engineering from Seoul National University in 1972 and 1974, respectively, and a Ph.D. degree in Electrical Engineering from Purdue University, West Lafayette, in 1982. From 1983 to 1997, he was a professor in the Department of Electrical Engineering and Computer Science at the University of Southern California, and from 1990 to 1997 he was also a Senior Member of Technical Staff for Intelligent Robot R&D Programs at the Jet Propulsion Laboratory, NASA and California Institute of Technology. From 1998 to 2003, he was Executive Vice President and Chief Research Officer of the MEMS, Nano Systems and Intelligent Systems Programs and the Breakthrough Research Team at the Samsung Advanced Institute of Technology. Since 2003 he has been a professor in the School of Information and Communication Engineering and director of Intelligent Systems Research. His research interests are in the areas of Cognitive Robotics, Intelligent Systems, and Micro/Nano Electro-Mechanical Systems.

The ShanghAI lectures have brought us a treasure trove of guest lectures by experts in robotics. You can find the whole series from 2012 here. Now, we’re bringing you the guest lectures you haven’t yet seen from previous years, starting with the first lectures from 2009 and releasing a new guest lecture every Thursday until all the series are complete. Enjoy!





Nathan Labhart Co-organizing the ShanghAI Lectures since 2009.

