
ShanghAI Lectures: Sukhan Lee “Cognitive Recognition for Service Robots”

by Nathan Labhart
23 January 2014




Sukhan Lee: Guest talk in the ShanghAI Lectures, 2009-12-10

Recognition has been a subject of intense research interest in the computer vision, AI, cognitive science, and robotics communities for at least several decades. As a result, a rich body of research on computer-based recognition and understanding of symbols, objects, faces, gestures, and scenes is available today in the form of publications, open-source libraries, and commercial products. One of the major issues in recognition, especially in 2D recognition, has been how to deal with variations due to illumination, perspective, distance, texture, and occlusion. Conventional engineering approaches include the development of photometric and geometric features invariant to such variations, and the efficient organization of visual memory for appearance- or view-based matching. Despite much success, these conventional approaches have turned out to be inadequate for the recognition involved in robotic services, such as errand and home-maid services, in real-world environments. This is because the environmental variations service robots must deal with often go beyond what conventional approaches can handle, and, furthermore, the preconditions for recognition, such as the target being in proper sight of, and at a proper distance from, the camera in the first place, may not be met when recognition is requested. It is apparent that recognition for service robots should extend its scope beyond conventional matching and classification toward a more human-like capability, under the framework of cognitive vision or cognitive recognition. In this lecture, I will introduce what constitutes cognitive recognition, namely: 1) bottom-up and top-down saliency detection for implementing focus of attention as an efficient yet weak initial recognition step; 2) integration with knowledge to make the best use of recognition context; 3) probabilistic evidence fusion for reaching reliable decisions; 4) proactive collection of evidence for better decisions; and 5) a cognitive engine in which recognition is processed as a self-defined mission.
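The abstract itself contains no code, but to make item 3 (probabilistic evidence fusion) a little more concrete, here is a minimal sketch of a naive-Bayes-style fusion of independent visual cues. The class names, cue likelihoods, and the `fuse_evidence` helper are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

# Illustrative object classes a service robot might need to distinguish.
CLASSES = ["mug", "bowl", "bottle"]

def fuse_evidence(prior, likelihoods):
    """Sequentially fuse evidence from independent cues via Bayes' rule.

    prior       -- P(class) over CLASSES, e.g. from a weak, attention-based detection
    likelihoods -- list of arrays, each giving P(observation | class) for one cue
                   (colour, shape, texture, ...), assumed conditionally independent
    """
    posterior = np.asarray(prior, dtype=float)
    for lik in likelihoods:
        posterior = posterior * np.asarray(lik, dtype=float)
        posterior /= posterior.sum()  # renormalise after each update
    return posterior

# Example: a uniform prior refined by two (made-up) cue likelihoods.
prior      = [1 / 3, 1 / 3, 1 / 3]
colour_cue = [0.6, 0.3, 0.1]   # colour histogram favours "mug"
shape_cue  = [0.7, 0.2, 0.1]   # contour match also favours "mug"

posterior = fuse_evidence(prior, [colour_cue, shape_cue])
for cls, p in zip(CLASSES, posterior):
    print(f"P({cls} | evidence) = {p:.2f}")
```

In this spirit, the weak prior would come from the attention step (item 1), and if the posterior remains ambiguous the robot could proactively gather more evidence, for example by moving closer or changing viewpoint (item 4), and fold it in with the same update.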

The ShanghAI Lectures are a videoconference-based lecture series on Embodied Intelligence, run and organized by Rolf Pfeifer (from 2009 until 2012), Fabio Bonsignorio (since 2013), and me, together with partners around the world.

https://www.youtube.com/watch?v=m-oHUbwprLc

Sukhan Lee received the B.S. and M.S. degrees in Electrical Engineering from Seoul National University in 1972 and 1974, respectively, and the Ph.D. degree in Electrical Engineering from Purdue University, West Lafayette, in 1982. From 1983 to 1997, he was a professor in the Department of Electrical Engineering and Computer Science at the University of Southern California, and from 1990 to 1997 he was also a Senior Member of Technical Staff for Intelligent Robot R&D Programs at the Jet Propulsion Laboratory, NASA/California Institute of Technology. From 1998 to 2003, he was Executive Vice President and Chief Research Officer of the MEMS, Nano Systems and Intelligent Systems Programs and the Breakthrough Research Team at the Samsung Advanced Institute of Technology. Since 2003, he has been a professor in the School of Information and Communication Engineering and director of Intelligent Systems Research. His research interests lie in the areas of Cognitive Robotics, Intelligent Systems, and Micro/Nano Electro-Mechanical Systems.

The ShanghAI lectures have brought us a treasure trove of guest lectures by experts in robotics. You can find the whole series from 2012 here. Now, we’re bringing you the guest lectures you haven’t yet seen from previous years, starting with the first lectures from 2009 and releasing a new guest lecture every Thursday until all the series are complete. Enjoy!





Nathan Labhart has been co-organizing the ShanghAI Lectures since 2009.




