Robohub.org
 

ShanghAI Lectures: Sukhan Lee “Cognitive Recognition for Service Robots”


by Nathan Labhart
23 January 2014




Guest talk in the ShanghAI Lectures, 2009-12-10

Recognition has been a subject of intense research interest in the computer vision, AI, cognitive science, and robotics communities for at least several decades. As a result, a rich body of research on computer-based recognition and understanding of symbols, objects, faces, gestures, and scenes is available to date in the form of publications, open-source libraries, and commercial products. One of the major issues in recognition, especially in 2D recognition, has been how to deal with variations due to illumination, perspective, distance, texture, and occlusion. Conventional engineering solutions include the development of photometric and geometric features that are invariant to such variations, and the efficient organization of visual memory for appearance/view-based matching. Despite much success, these conventional approaches turn out to be inadequate for the recognition required by robotic services, such as errand or housekeeping services, in real-world environments. This is because the environmental variations service robots must deal with are often beyond what conventional approaches can handle, and, furthermore, the preconditions for recognition, such as the target being in proper sight of and at proper distance from the camera in the first place, may not be met when recognition is requested. It is apparent that recognition for service robots should extend its scope beyond conventional matching and classification toward more human-like capability, under the framework of cognitive vision or cognitive recognition.

In this lecture, I will introduce what constitutes cognitive recognition, namely: 1) bottom-up and top-down saliency detection, implementing focus of attention as an efficient yet weak initial recognition step; 2) integration with knowledge to make the best use of recognition context; 3) probabilistic evidence fusion for reaching reliable decisions; 4) proactive collection of evidence for better decisions; and 5) a cognitive engine in which recognition is processed as a self-defined mission.
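To make the evidence-fusion idea (points 3 and 4 above) concrete, here is a minimal, hypothetical Python sketch of recursive Bayesian fusion over object hypotheses: each visual cue contributes a likelihood, the belief is updated multiplicatively and renormalized, and the robot keeps gathering cues until one hypothesis is confident enough. The object labels, cue likelihoods, and the 0.9 stopping threshold are illustrative assumptions, not details from the lecture.

from typing import Dict

Hypotheses = Dict[str, float]  # object label -> probability


def fuse_evidence(prior: Hypotheses, likelihood: Hypotheses) -> Hypotheses:
    """One Bayesian update: posterior is proportional to prior * likelihood, then normalized."""
    unnormalized = {obj: prior[obj] * likelihood.get(obj, 1e-6) for obj in prior}
    total = sum(unnormalized.values())
    return {obj: p / total for obj, p in unnormalized.items()}


def recognize(prior: Hypotheses, cue_stream, threshold: float = 0.9):
    """Proactively consume cues (colour, shape, texture, ...) until one hypothesis is confident."""
    belief = dict(prior)
    for _cue_name, likelihood in cue_stream:
        belief = fuse_evidence(belief, likelihood)
        best, p = max(belief.items(), key=lambda kv: kv[1])
        if p >= threshold:
            return best, p, belief          # confident enough to stop sensing
    return None, max(belief.values()), belief  # undecided: gather more evidence


if __name__ == "__main__":
    # Illustrative prior and per-cue likelihoods for three candidate objects.
    prior = {"mug": 1 / 3, "bowl": 1 / 3, "teapot": 1 / 3}
    cues = [
        ("colour", {"mug": 0.6, "bowl": 0.3, "teapot": 0.5}),
        ("shape",  {"mug": 0.7, "bowl": 0.2, "teapot": 0.3}),
        ("handle", {"mug": 0.8, "bowl": 0.1, "teapot": 0.6}),
    ]
    print(recognize(prior, cues))

In this toy run the belief over "mug" rises with each cue until it crosses the threshold; a real system would of course couple such fusion to saliency-driven attention and knowledge-based context, as the lecture outlines.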

The ShanghAI Lectures are a videoconference-based lecture series on Embodied Intelligence, run and organized by Rolf Pfeifer (from 2009 until 2012), Fabio Bonsignorio (since 2013), and me, with partners around the world.

https://www.youtube.com/watch?v=m-oHUbwprLc

Sukhan Lee received the B.S. and M.S. degrees in Electrical Engineering from Seoul National University in 1972 and 1974, respectively, and the Ph.D. degree in Electrical Engineering from Purdue University, West Lafayette, in 1982. From 1983 to 1997 he was a professor in the Department of Electrical Engineering and Computer Science at the University of Southern California, and from 1990 to 1997 he was also a Senior Member of Technical Staff for Intelligent Robot R&D Programs at the Jet Propulsion Laboratory, NASA and the California Institute of Technology. From 1998 to 2003 he was Executive Vice President and Chief Research Officer of the MEMS, Nano Systems and Intelligent Systems Programs and the Breakthrough Research Team at the Samsung Advanced Institute of Technology. Since 2003 he has been a professor in the School of Information and Communication Engineering and director of Intelligent Systems Research. His research interests are in the areas of Cognitive Robotics, Intelligent Systems, and Micro/Nano Electro-Mechanical Systems.

The ShanghAI lectures have brought us a treasure trove of guest lectures by experts in robotics. You can find the whole series from 2012 here. Now, we’re bringing you the guest lectures you haven’t yet seen from previous years, starting with the first lectures from 2009 and releasing a new guest lecture every Thursday until all the series are complete. Enjoy!





Nathan Labhart has been co-organizing the ShanghAI Lectures since 2009.






 
