Robohub.org
 

Anat Levin: Light-Sensitive Displays | CMU RI Seminar


29 September 2018




Link to video on YouTube

Abstract: “Nobel prize winner M. G. Lippmann described his dream of an ideal display as a ‘window into the world’: ‘While the current most perfect photographic print only shows one aspect of reality, reduced to a single image fixed in a plane, the direct view of reality offers, as we know, infinitely more variety.’ Changing the observer’s viewpoint reveals perspective changes in object size and location. Moreover, changing the environment lighting can vary appearance substantially by shifting highlights and cast shadows. These effects are extremely important for shape and material perception, and thus for realism. Despite significant recent advances in multiscopic display technology, the majority of these displays are still limited in an important respect: one can only display a scene under the same illumination conditions in which it was captured. If the illumination in the observer’s environment changes during playback, there is no corresponding effect on the shading, highlight positions, or cast shadows witnessed on the display. In this talk I will survey a sequence of projects we have carried out in recent years, constructing several light-sensitive displays with different levels of complexity. Our ultimate goal is to build a 3D light-sensitive display, capable of presenting viewpoint-sensitive depth content as well as spatially varying material reflectance properties that accurately react to the interaction between environment lighting and scene geometry.”
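The abstract does not describe the displays' internals, but the effect it refers to (shading and highlight positions shifting as the environment light moves) can be illustrated with a minimal Phong-style shading sketch. This is purely an assumed, illustrative model, not the method from the talk:

```python
import math

def shade(normal, light_dir, view_dir, albedo=0.8, shininess=32):
    """Diffuse term plus a specular highlight at one surface point.

    All direction vectors are unit-length 3-tuples. Moving the light
    changes both the diffuse shading and the highlight position, which
    is the response a light-sensitive display must reproduce."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    diffuse = albedo * max(dot(normal, light_dir), 0.0)
    # Reflect the light direction about the normal: r = 2(n.l)n - l
    nl = dot(normal, light_dir)
    reflect = tuple(2 * nl * n - l for n, l in zip(normal, light_dir))
    specular = max(dot(reflect, view_dir), 0.0) ** shininess
    return diffuse + specular

n = (0.0, 0.0, 1.0)                       # surface facing the viewer
v = (0.0, 0.0, 1.0)                       # viewer straight ahead
head_on = shade(n, (0.0, 0.0, 1.0), v)    # light behind the viewer: bright, highlight visible
grazing = shade(n, (1.0, 0.0, 0.0), v)    # light from the side: surface goes dark
```

A conventional photograph or display bakes in one fixed `light_dir`; a light-sensitive display would re-evaluate this kind of response as the real environment illumination changes.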




John Payne





