Robohub.org
 

ShanghAI Lectures: Sukhan Lee “Cognitive Recognition for Service Robots”


by
23 January 2014



Guest talk by Sukhan Lee in the ShanghAI Lectures, 2009-12-10

Recognition has been a subject of intense research interest to the computer vision, AI, cognitive science, and robotics communities for at least several decades. As a result, a rich body of research on computer-based recognition and understanding of symbols, objects, faces, gestures, and scenes is available today in the form of publications, open-source libraries, and commercial products. One of the major issues in recognition, especially in 2D recognition, has been how to deal with variations due to illumination, perspective, distance, texture, and occlusion. Conventional engineering solutions include the development of photometric and geometric features invariant to such variations, and the efficient organization of visual memory for appearance/view-based matching. Despite much success, these conventional approaches have turned out to be inadequate for the recognition involved in robotic services, such as running errands or housekeeping, in real-world environments. This is because the environmental variations that service robots must deal with are often beyond what conventional approaches can handle; furthermore, the preconditions for recognition, such as the target being in proper sight of, and at a proper distance from, the camera in the first place, may not be met when recognition is requested. It is apparent that recognition for service robots should extend its scope beyond conventional matching and classification toward more human-like capability, under the framework of cognitive vision or cognitive recognition.
In this lecture, I will introduce what constitutes cognitive recognition, namely: 1) bottom-up and top-down saliency detection for implementing focus of attention as an efficient yet weak initial recognition step; 2) integration with knowledge to make best use of recognition context; 3) probabilistic evidence fusion for reaching a reliable decision; 4) proactive collection of evidence for a better decision; and 5) a cognitive engine in which recognition is processed as a self-defined mission.
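To give a feel for item 3, probabilistic evidence fusion, here is a minimal sketch (not Prof. Lee's actual method) of naive-Bayes fusion: several independent visual cues each supply a likelihood over candidate object classes, and multiplying them with a prior yields a posterior on which a reliable decision can be based. The object names and cue values are hypothetical.

```python
import numpy as np

def fuse_evidence(prior, likelihoods):
    """Combine a prior over hypotheses with per-cue likelihoods
    (assuming conditional independence) and normalise."""
    posterior = np.asarray(prior, dtype=float)
    for lik in likelihoods:
        posterior = posterior * np.asarray(lik, dtype=float)
    return posterior / posterior.sum()

# Three candidate objects: mug, bottle, bowl (uniform prior)
prior = [1 / 3, 1 / 3, 1 / 3]
colour_cue = [0.7, 0.2, 0.1]   # colour histogram favours "mug"
shape_cue = [0.5, 0.4, 0.1]    # shape is ambiguous between mug/bottle
posterior = fuse_evidence(prior, [colour_cue, shape_cue])
print(posterior)  # probability mass concentrates on "mug"
```

Each additional cue (item 4's proactively collected evidence, e.g. a view from a better vantage point) can be folded into the same update, which is why fusion and proactive evidence collection fit naturally together.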

The ShanghAI Lectures are a videoconference-based lecture series on Embodied Intelligence, run and organized by Rolf Pfeifer (from 2009 until 2012), Fabio Bonsignorio (since 2013), and me, with partners around the world.

https://www.youtube.com/watch?v=m-oHUbwprLc

Sukhan Lee received the B.S. and M.S. degrees in Electrical Engineering from Seoul National University in 1972 and 1974, respectively, and the Ph.D. degree in Electrical Engineering from Purdue University, West Lafayette, in 1982. From 1983 to 1997, he was a professor in the Department of Electrical Engineering and Computer Science at the University of Southern California, and from 1990 to 1997 he was also a Senior Member of Technical Staff for Intelligent Robot R&D Programs at the Jet Propulsion Laboratory, NASA/California Institute of Technology. From 1998 to 2003, he was Executive Vice President and Chief Research Officer of the MEMS, Nano Systems and Intelligent Systems Programs and the Breakthrough Research Team at the Samsung Advanced Institute of Technology. Since 2003 he has been a professor in the School of Information and Communication Engineering and director of Intelligent Systems Research. His research interests lie in the areas of Cognitive Robotics, Intelligent Systems, and Micro/Nano Electro-Mechanical Systems.

The ShanghAI lectures have brought us a treasure trove of guest lectures by experts in robotics. You can find the whole series from 2012 here. Now, we’re bringing you the guest lectures you haven’t yet seen from previous years, starting with the first lectures from 2009 and releasing a new guest lecture every Thursday until all the series are complete. Enjoy!



Nathan Labhart Co-organizing the ShanghAI Lectures since 2009.