Robohub.org
 

Multimodal interaction at AWE2014 with HIT Lab NZ


by Andra Keay
02 June 2014




The Augmented World Expo (AWE) was on at the Santa Clara Convention Center from May 27-29. The conference, organized by Ori Inbar and Tish Shute, has grown rapidly in recent years as augmented reality technologies come closer to mainstream adoption. Alongside major companies like Bosch, Intel and Qualcomm, AWE had the latest gadgets and interfaces, a fair bit of fashion and some interesting research in human-machine interaction.

There was a proliferation of eyewear, a smattering of gestural controllers and also browser-based AR, no apps required. Ori Inbar, conference organizer and CEO of Augmented Reality.ORG, a global not-for-profit organization dedicated to advancing augmented reality (AR), described five trends in AR:

  1. From gimmick to adding value
  2. From mobile to wearable
  3. From consumer to enterprise
  4. From GPS to 3D-fying the world
  5. The New New Interface

By saying that AR is the new new interface, Inbar is describing the disappearance of classic user interfaces as well as the proliferation of new technologies. Inbar explained, “Computers are disappearing onto our bodies and we need a new new interface, more natural and gestural.”

The conference session on the ‘new new interface’ was one of the most interesting for me. Speakers were Stanley Yang from Neurosky, Alex McCallum from Thalmic Labs (Myo), Rony Greenberg from EyeSight and Mark Billinghurst from HIT Lab NZ. HIT Lab is a research center at the University of Canterbury developing innovative human-computer interfaces, with 50 staff and students. Research areas include: visualization, augmented reality, next generation teleconferencing, applied interaction design and human-robot interaction.

Billinghurst’s presentation, “Hands and Speech in Space”, described variations in the structure of multimodal interactions, with clear implications for communicating with robots and other machine interfaces. I asked Mark to explain the crossovers between AR and robotics research from his perspective at HIT Lab.

There are a couple of things. With augmented reality, a key part of it is making the invisible visible. I’ve been involved in the past with some students who’ve used augmented reality with robotics to visualize some of the sensors on the robot. For example, if you’ve got a mobile robot going around, you don’t necessarily know from looking at the robot what it’s sensing. It might have some ultrasonic sensors that are used for depth or range sensing, and you don’t know what the robot’s sensing or seeing, except with augmented reality. There was one project I was involved with where you’d look at the robot and you’d see an overlay on the robot – a pattern showing all the sensor data from the robot, so you’d see exactly where the ultrasonic sensor was sweeping and where the barriers were as well. So there are some applications in that space, although none of the companies here are really showing that.
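The kind of overlay Billinghurst describes reduces to a coordinate transform: sensor readings in the robot's frame are mapped into the world (or camera) frame before being drawn on screen. A minimal sketch with hypothetical sweep values and pose, not HIT Lab's actual code:

```python
import numpy as np

def sensor_points_world(ranges, angles, robot_xy, robot_theta):
    """Convert an ultrasonic sweep (range, angle) measured in the robot's
    frame into world coordinates, ready to be drawn as an AR overlay."""
    # Polar -> Cartesian in the robot's own frame
    local = np.stack([ranges * np.cos(angles),
                      ranges * np.sin(angles)], axis=1)
    # Rotate by the robot's heading, then translate by its position
    c, s = np.cos(robot_theta), np.sin(robot_theta)
    R = np.array([[c, -s], [s, c]])
    return local @ R.T + np.asarray(robot_xy)

# Hypothetical robot at (2, 1) facing +x, with a three-reading sweep
pts = sensor_points_world(
    ranges=np.array([1.0, 1.0, 1.0]),
    angles=np.radians([-30, 0, 30]),
    robot_xy=(2.0, 1.0),
    robot_theta=0.0,
)
```

An AR view would then project these world points through the viewer's camera pose onto the display.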


Also, AR borrows a lot from previous research in robotics tracking. People in robotics have been doing path planning for a long time, or camera pose estimation when a robot moves, and as mobile phones and computers got faster, some of the same algorithms moved onto mobile devices and into augmented reality. In just the same way that I can locate a robot with pose information, you can use the same techniques to locate a mobile phone and use them for AR as well.
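The crossover is concrete: the same least-squares machinery a robot uses to fix its position against known landmarks also localizes a phone or headset for AR. A rough sketch of one such technique, linearized range-based localization against a hypothetical landmark layout:

```python
import numpy as np

def locate_from_ranges(landmarks, dists):
    """Estimate a 2D position from distances to known landmarks.
    Subtracting the first range equation from the rest turns the
    quadratic system |x - p_i|^2 = d_i^2 into a linear one."""
    p0, d0 = landmarks[0], dists[0]
    A = 2 * (landmarks[1:] - p0)
    b = (d0**2 - dists[1:]**2
         + np.sum(landmarks[1:]**2, axis=1) - np.sum(p0**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Three hypothetical landmarks and exact ranges from a true position
landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
true_pos = np.array([1.0, 1.0])
dists = np.linalg.norm(landmarks - true_pos, axis=1)
est = locate_from_ranges(landmarks, dists)
```

With noisy real measurements the least-squares solve averages out error, which is why the identical formulation serves a wheeled robot and a handheld device equally well.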


And another application that is being shown here: you can use augmented reality to see through the eyes of another vehicle or robot. There’s a guy here who’s flying a UAV around and viewing the output from the drone on his Google Glass display. Whether it’s remotely operated, semi-autonomous or flying autonomously, using AR technology you can put your eyes into the vehicle and see what the vehicle is seeing, basically. It can be used as a kind of telepresence.

Is the flow two-way? Will increasing AR uptake drive improvements in sensors and CV algorithms for robotics?

I don’t think AR is driving sensor technology yet because it’s such a small market. With mobile devices, when you put a GPS in a cell phone, that drove down the price of GPS chips and it made it possible for us to use GPS for augmented reality on consumer devices. And that same chip that goes into a cellphone, that costs 50 cents now, you can put into a flying robot. But when we first started doing augmented reality – especially mobile augmented reality – you had tens of thousands of dollars of hardware that you were carrying around.


We were carrying heavy GPS hardware, a compass and inertial units, and some of the robotics researchers were using the same pieces of hardware. We were paying a high price, and I think the US military, as they started putting sensors into their platforms, drove the price down. And especially with mobile, that drove the price down substantially and we benefited from that – both AR and robotics. So AR is too small to provide a benefit back to robotics, but we both benefit now from gaming, entertainment and automotive.

Your presentation on multimodal communication has clear applications for HRI.

Definitely, I’ve been working on that for a while and it turns out that when you have a robot that’s embodied in a human form, then people want to use human communication with it. So it’s very natural to point at something and tell a robot to ‘go over there’. But if a robot can’t understand what I’m pointing at, or has no concept of space, or can’t understand my voice, then it’s not going to be able to go over there.
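Grounding “go over there” comes down to fusing the speech act with the deictic gesture, for instance by intersecting the pointing ray with the floor plane to recover the referenced spot. A toy sketch with invented geometry, not HIT Lab's actual pipeline:

```python
import numpy as np

def ground_point(hand, direction):
    """Intersect a pointing ray (origin `hand`, direction `direction`)
    with the floor plane z = 0 to get the referenced target spot."""
    if direction[2] >= 0:
        return None  # pointing level or upward: no floor intersection
    t = -hand[2] / direction[2]
    return hand + t * np.asarray(direction)

def fuse(speech, hand, direction):
    """Tiny fusion rule: a deictic phrase only becomes an executable
    command when the gesture supplies a concrete location."""
    if "there" in speech:
        target = ground_point(np.asarray(hand), np.asarray(direction))
        if target is not None:
            return ("GO_TO", tuple(np.round(target, 2)))
    return ("CLARIFY", None)

# Hand at shoulder height, pointing down-range
cmd = fuse("go over there", hand=[0.0, 0.0, 1.5], direction=[1.0, 0.0, -0.5])
```

Real systems add temporal alignment (the gesture and the word “there” must co-occur) and fall back to a clarification dialogue when either channel is missing, which is the failure mode Billinghurst describes.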


Previously, when I was doing my PhD at MIT, they were doing some work on speech and gesture recognition with human characters and robots. Cynthia Breazeal does a lot of work with social robotics there. People are starting to develop taxonomies for what gestures and speech mean together. And that’s come out of, not so much AR, but interacting with avatars and robots.


One of the other interesting things my colleague Christoph Bartneck is doing at HIT Lab is that he’s invented an artificial language for communicating with robots, because English is quite imprecise for communication and difficult to learn. So he invented a language called ROILA, which is very easy to learn. It has a very small vocabulary and provides very precise information. His idea is that in the future people will communicate with robots using an artificial language that will reduce the amount of miscommunication and is tailored to the needs of both learnability and understandability from the robot’s perspective. He’s had some success at getting the ROILA language used by some robotics groups.
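The appeal of a constrained language like ROILA is that understanding becomes an unambiguous table lookup rather than open-ended natural language processing. A toy parser in that spirit, with an invented vocabulary (these are not real ROILA words):

```python
# Invented closed vocabulary: each word maps to exactly one meaning,
# so there is nothing for the robot to misinterpret.
VERBS = {"kanek": "move", "botama": "stop", "wopaji": "turn"}
ARGS = {"pito": "forward", "bama": "left", "fosit": "right"}

def parse(utterance):
    """Parse a <verb> [<argument>] command; reject anything outside
    the closed vocabulary instead of guessing at intent."""
    words = utterance.lower().split()
    if not words or words[0] not in VERBS:
        return None
    action = VERBS[words[0]]
    if len(words) == 1:
        return (action,)
    if len(words) == 2 and words[1] in ARGS:
        return (action, ARGS[words[1]])
    return None

cmd = parse("kanek pito")  # -> ("move", "forward")
```

Rejecting out-of-vocabulary input outright, rather than falling back to a best guess, is what trades expressiveness for the reliability Bartneck is after.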





Andra Keay is the Managing Director of Silicon Valley Robotics, founder of Women in Robotics and is a mentor, investor and advisor to startups, accelerators and think tanks, with a strong interest in commercializing socially positive robotics and AI.




