Making sense of vision and touch: #ICRA2019 best paper award video and interview
July 29, 2019
PhD candidate Michelle A. Lee from the Stanford AI Lab won the best paper award at ICRA 2019 for her work “Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks”. You can read the paper on arXiv here.
Audrow Nash was there to capture her pitch.
And here’s the official video about the work.
Full reference
Lee, Michelle A., Yuke Zhu, Krishnan Srinivasan, Parth Shah, Silvio Savarese, Li Fei-Fei, Animesh Garg, and Jeannette Bohg. “Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks.” arXiv preprint arXiv:1810.10191 (2018).
Robohub Editors