
Multimodal interaction at AWE2014 with HIT Lab NZ


by Andra Keay
02 June 2014




The Augmented World Expo (AWE) took place at the Santa Clara Convention Center from May 27-29. The conference, organized by Ori Inbar and Tish Shute, has grown rapidly in recent years as augmented reality technologies come closer to mainstream adoption. Alongside major companies like Bosch, Intel and Qualcomm, AWE showcased the latest gadgets and interfaces, a fair bit of fashion and some interesting research in human-machine interaction.

There was a proliferation of eyewear, a smattering of gestural controllers and browser-based AR that requires no apps. Ori Inbar, conference organizer and CEO of Augmented Reality.ORG, a global not-for-profit organization dedicated to advancing augmented reality (AR), described the trends for AR:

  1. From gimmick to adding value
  2. From mobile to wearable
  3. From consumer to enterprise
  4. From GPS to 3D-fying the world
  5. The New New Interface

In calling AR the new new interface, Inbar is describing both the displacement of classic user interfaces and the proliferation of new interaction technologies. Inbar explained, “Computers are disappearing onto our bodies and we need a new new interface, more natural and gestural.”

The conference session on the ‘new new interface’ was one of the most interesting for me. Speakers were Stanley Yang from NeuroSky, Alex McCallum from Thalmic Labs (Myo), Rony Greenberg from EyeSight and Mark Billinghurst from HIT Lab NZ. HIT Lab is a research center at the University of Canterbury developing innovative human-computer interfaces, with 50 staff and students. Research areas include visualization, augmented reality, next-generation teleconferencing, applied interaction design and human-robot interaction.

Billinghurst’s presentation, “Hands and Speech in Space,” described variations in the structure of multimodal interactions, and the implications for communicating with robots or other machine interfaces are clear. I asked Mark to explain the crossovers between AR and robotics research from his perspective at HIT Lab.

There are a couple of things. With augmented reality, a key part of it is making the invisible visible. I’ve been involved in the past with some students who’ve used augmented reality with robotics to visualize some of the sensors on the robot. For example, if you’ve got a mobile robot going around, you don’t necessarily know from looking at the robot what it’s sensing. It might have some ultrasonic sensors that are used for depth or range sensing, and you don’t know what the robot’s sensing or seeing, except with augmented reality. There was one project I was involved with where you’d look at the robot and you’d see an overlay on the robot – a pattern showing all the sensor data – so you’d see exactly where the ultrasonic sensor was sweeping and where the barriers were as well. So there are some applications in that space, although none of the companies here are really showing that.
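As an editorial aside, the kind of overlay described above can be sketched in a few lines: given the camera’s intrinsics and its pose relative to the robot, each ultrasonic reading becomes a short arc of 3D points projected into the camera image. The sketch below uses OpenCV with made-up intrinsics, pose and sensor readings; it illustrates the technique, not the student project Billinghurst mentions.

```python
# Minimal sketch: overlay ultrasonic range readings on a camera view of a
# robot. All numbers (intrinsics, camera pose, readings) are assumptions.
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])            # assumed pinhole intrinsics
dist = np.zeros(5)                         # assume no lens distortion
rvec = np.zeros(3)                         # assumed rotation, robot -> camera
tvec = np.array([0.0, 0.0, 2.0])           # camera 2 m in front of the robot

def sensor_arc_points(mount_angle_deg, range_m, spread_deg=15, n=20):
    """3D points (robot frame) along the arc swept by one ultrasonic sensor."""
    angles = np.radians(np.linspace(mount_angle_deg - spread_deg,
                                    mount_angle_deg + spread_deg, n))
    return np.stack([range_m * np.sin(angles),       # x: sideways
                     np.zeros(n),                    # y: sensor height
                     range_m * np.cos(angles)], 1)   # z: forward

def draw_overlay(frame, readings):
    """readings: list of (mount_angle_deg, measured_range_m) pairs."""
    for angle, rng in readings:
        pts3d = sensor_arc_points(angle, rng).astype(np.float32)
        pts2d, _ = cv2.projectPoints(pts3d, rvec, tvec, K, dist)
        cv2.polylines(frame, [pts2d.astype(np.int32)], False, (0, 255, 0), 2)
    return frame

frame = np.zeros((480, 640, 3), dtype=np.uint8)      # stand-in camera frame
draw_overlay(frame, [(-30, 0.8), (0, 1.2), (30, 0.6)])
```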

 

Also, AR borrows a lot from previous research in robotics tracking. People in robotics have been doing path planning and camera pose estimation for a long time, working out where a camera is when a robot moves, and as mobile phones and computers got faster, some of the same algorithms moved onto mobile devices and into augmented reality. In just the same way that you can locate a robot with pose information, you can use the same techniques to locate a mobile phone and use it for AR as well.
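The shared building block Billinghurst refers to here is camera pose estimation. As a hedged illustration, the snippet below uses OpenCV’s solvePnP to recover a camera pose from four known fiducial marker corners and their pixel detections (the intrinsics and detections are invented for the example); the same computation localises a phone for AR or a camera-equipped robot.

```python
# Illustrative only: estimate camera pose from an 8 cm square fiducial marker.
import numpy as np
import cv2

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])            # assumed camera intrinsics
dist = np.zeros(5)

# Marker corners in the marker's own frame (metres).
object_pts = np.array([[-0.04,  0.04, 0.0],
                       [ 0.04,  0.04, 0.0],
                       [ 0.04, -0.04, 0.0],
                       [-0.04, -0.04, 0.0]], dtype=np.float32)

# Corresponding corner detections in the image (pixels), invented values.
image_pts = np.array([[300.0, 200.0],
                      [380.0, 205.0],
                      [378.0, 283.0],
                      [298.0, 280.0]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)             # rotation: marker frame -> camera frame
    print("camera position in marker frame:", (-R.T @ tvec).ravel())
```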

 

And another application that is being shown here: you can use augmented reality to see through the eyes of another vehicle or robot. There’s a guy here who’s flying a UAV around and viewing the output from the drone on his Google Glass display. Whether it’s remotely operated, semi-autonomous or flying autonomously, using AR technology you can put your eyes into the vehicle and see what the vehicle is seeing, basically. It can be used as a kind of telepresence.

Is the flow two-way? Will increasing AR uptake drive improvements in sensors and CV algorithms for robotics?

I don’t think AR is driving sensor technology yet because it’s such a small market. With mobile devices, when you put a GPS in a cell phone, that drove down the price of GPS chips and made it possible for us to use GPS for augmented reality on consumer devices. That same chip that goes into a cellphone and costs 50c now, you can put into a flying robot. But when we first started doing augmented reality – especially mobile augmented reality – you had tens of thousands of dollars of hardware that you were carrying around.

 

Heavy GPS hardware, a compass, inertial units: some of the robotics researchers were using the same pieces of hardware. We were paying a high price, and I think the US military, as they started putting sensors into their platforms, drove the price down. And mobile especially drove the price down substantially, and we benefitted from that, both AR and robotics. So AR is too small to provide a benefit back to robotics, but we both benefit now from gaming, entertainment and automotive.

Your presentation on multimodal communication has clear applications for HRI.

Definitely. I’ve been working on that for a while, and it turns out that when you have a robot that’s embodied in a human form, people want to use human communication with it. So it’s very natural to point at something and tell a robot to ‘go over there’. But if a robot can’t understand what I’m pointing at, or has no concept of space, or can’t understand my voice, then it’s not going to be able to go over there.
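To make the ‘point and say go over there’ idea concrete, one common fusion step is to intersect the pointing ray reported by a gesture tracker with the ground plane and attach the result to the recognised speech command. The sketch below shows only that geometric fusion, with assumed inputs; it is not HIT Lab’s actual pipeline.

```python
# Minimal sketch of speech + pointing fusion. The speech text and the
# pointing ray are assumed to come from separate recognisers.
import numpy as np

def ray_ground_intersection(origin, direction, ground_z=0.0):
    """Intersect a pointing ray with the horizontal plane z = ground_z."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    if abs(d[2]) < 1e-6:
        return None                        # ray parallel to the floor
    t = (ground_z - o[2]) / d[2]
    return o + t * d if t > 0 else None    # must point towards the floor

def fuse(speech_text, point_origin, point_direction):
    """Turn 'go over there' plus a pointing gesture into a navigation goal."""
    words = speech_text.lower()
    if "go" in words and "there" in words:
        target = ray_ground_intersection(point_origin, point_direction)
        if target is not None:
            return {"action": "navigate",
                    "goal_xy": tuple(float(v) for v in target[:2])}
    return None

# Hand at 1.4 m height, pointing forward and slightly down.
print(fuse("go over there", [0.0, 0.0, 1.4], [0.5, 0.2, -0.4]))
# -> {'action': 'navigate', 'goal_xy': (1.75, 0.7)}
```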

 

Previously, when I was doing my PhD at MIT, they were doing some work on speech and gesture recognition with human characters and robotics. Cynthia Breazeal does a lot of work with social robotics there. People are starting to develop taxonomies for what gestures and speech mean together, and that’s come out not so much of AR as of interacting with avatars and robots.

 

One of the other interesting things my colleague Christoph Bartneck is doing at HIT Lab is an artificial language for communicating with robots, because English is quite imprecise for communication and difficult to learn. He invented a language called ROILA which is very easy to learn: it has a very small vocabulary and provides very precise information. His idea is that in the future people will communicate with robots using an artificial language that reduces miscommunication and is tailored to both learnability and understandability from the robot’s perspective. He’s had some success getting the ROILA language used by some robotics groups.
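To illustrate why a small, closed vocabulary helps, here is a toy command parser in the spirit of a restricted robot language. The words and grammar below are invented for this example and are not actual ROILA; the point is simply that a tiny, regular vocabulary can be mapped to robot actions with no ambiguity.

```python
# Toy restricted-vocabulary command parser (NOT actual ROILA vocabulary).
GRAMMAR = {
    "verbs": {"mova": "move", "tuna": "turn", "stopa": "stop"},
    "args": {"fora": "forward", "baka": "backward",
             "lefa": "left", "rita": "right"},
}

def parse_command(utterance):
    """Parse '<verb> [<argument>]' into an action dict, or None if invalid."""
    words = utterance.lower().split()
    if not words or words[0] not in GRAMMAR["verbs"]:
        return None
    action = {"action": GRAMMAR["verbs"][words[0]]}
    if len(words) > 1:
        if words[1] not in GRAMMAR["args"]:
            return None                    # reject anything outside the vocabulary
        action["direction"] = GRAMMAR["args"][words[1]]
    return action

print(parse_command("mova fora"))    # {'action': 'move', 'direction': 'forward'}
print(parse_command("please move"))  # None: outside the restricted vocabulary
```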





Andra Keay is the Managing Director of Silicon Valley Robotics, founder of Women in Robotics and is a mentor, investor and advisor to startups, accelerators and think tanks, with a strong interest in commercializing socially positive robotics and AI.




