
Multimodal interaction at AWE2014 with HIT Lab NZ

by Andra Keay
02 June 2014




The Augmented World Expo (AWE) was on at the Santa Clara Convention Center from May 27-29. The conference, organized by Ori Inbar and Tish Shute, has grown rapidly in recent years as augmented reality technologies come closer to mainstream adoption. As well as major companies like Bosch, Intel and Qualcomm, AWE had the latest gadgets and interfaces, a fair bit of fashion and some interesting research in human-machine interaction.

There was a proliferation of eyewear, a smattering of gestural controllers and also browser-based AR, no apps required. Ori Inbar, conference organizer and CEO of Augmented Reality.ORG, a global not-for-profit organization dedicated to advancing augmented reality (AR), described the trends for AR:

  1. From gimmick to adding value
  2. From mobile to wearable
  3. From consumer to enterprise
  4. From GPS to 3D-fying the world
  5. The New New Interface

By saying that AR is the new new interface, Inbar is describing the disappearance of classic user interfaces as well as the proliferation of new technologies. Inbar explained, “Computers are disappearing onto our bodies and we need a new new interface, more natural and gestural.”

The conference session on the ‘new new interface’ was one of the most interesting for me. Speakers were Stanley Yang from Neurosky, Alex McCallum from Thalmic Labs (Myo), Rony Greenberg from EyeSight and Mark Billinghurst from HIT Lab NZ. HIT Lab is a research center at the University of Canterbury developing innovative human-computer interfaces, with 50 staff and students. Research areas include visualization, augmented reality, next generation teleconferencing, applied interaction design and human-robot interaction.

Billinghurst’s presentation “Hands and Speech in Space” described variations in the structure of multimodal interactions, and the implications for communicating with robots or other machine interfaces are clear. I asked Mark to explain the crossovers between AR and robotics research, from his perspective at HIT Lab.

There’s a couple of things. With augmented reality, a key part of it is making the invisible visible. I’ve been involved in the past with some students who’ve used augmented reality with robotics to visualize some of the sensors on the robot. For example, if you’ve got a mobile robot going around, you don’t necessarily know from looking at the robot what it’s sensing. It might have some ultrasonic sensors that are used for depth or range sensing, and you don’t know what the robot’s sensing or seeing, except with augmented reality. There was one project I was involved with where you’d look at the robot and you’d see an overlay on the robot – a pattern showing all the sensor data from the robot, so you’d see exactly where the ultrasonic sensor was sweeping and where the barriers were as well. So there are some applications in that space, although none of the companies here are really showing that.
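
As an aside, a minimal sketch of that kind of overlay might look like the following. This is not the HIT Lab project: the camera intrinsics, the robot's position relative to the camera and the ultrasonic sweep are all assumed values, and OpenCV does the projection and drawing.

    # Sketch: overlay simulated ultrasonic range readings on a camera view of a robot.
    # All numbers (intrinsics, sensor sweep, offsets) are assumed for illustration only.
    import numpy as np
    import cv2

    # Assumed pinhole intrinsics for a 640x480 camera.
    K = np.array([[525.0, 0.0, 320.0],
                  [0.0, 525.0, 240.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)  # assume no lens distortion

    # Simulated ultrasonic sweep: a 1.5 m echo at bearings across the front of the robot.
    bearings = np.deg2rad(np.linspace(-60, 60, 13))
    ranges = np.full_like(bearings, 1.5)

    # For simplicity the echo points are expressed directly in the camera frame
    # (x right, y down, z forward); a real system would first apply the estimated
    # robot-to-camera transform from whatever tracker is localizing the robot.
    pts = np.stack([ranges * np.sin(bearings),        # lateral offset
                    np.zeros_like(bearings),          # sensor roughly at camera height
                    ranges * np.cos(bearings)], axis=1)
    rvec = np.zeros(3)                                # no extra rotation in this toy setup
    tvec = np.array([0.0, 0.3, 2.0])                  # robot sits about 2 m in front

    # Project the 3D echo points into the image and draw them as an overlay.
    img_pts, _ = cv2.projectPoints(pts, rvec, tvec, K, dist)
    frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for the live camera frame
    for u, v in img_pts.reshape(-1, 2):
        cv2.circle(frame, (int(round(u)), int(round(v))), 4, (0, 255, 0), -1)
    cv2.imwrite("sensor_overlay.png", frame)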

 

Also, AR borrows a lot from previous research in robotics tracking. People in robotics have been doing path planning for a long time, or camera pose estimation when a robot moves, and as mobile phones and computers got faster, some of the same algorithms moved onto mobile devices and into augmented reality. In just the same way that I can locate a robot with pose information, you can use the same techniques to locate a mobile phone and use them for AR as well.
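
The shared machinery is easy to see in code. Below is a minimal sketch of camera pose estimation with OpenCV's solvePnP; the marker geometry, intrinsics and detected corner positions are made-up values, but the same computation localizes a robot's camera against known landmarks and registers AR content on a phone.

    # Sketch: estimate camera pose from known 3D points and their 2D image projections.
    # Marker size, intrinsics and the detected corners below are assumed values.
    import numpy as np
    import cv2

    # 3D corners of a 10 cm square marker in its own coordinate frame (metres).
    object_pts = np.array([[-0.05,  0.05, 0.0],
                           [ 0.05,  0.05, 0.0],
                           [ 0.05, -0.05, 0.0],
                           [-0.05, -0.05, 0.0]], dtype=np.float64)

    # Where a detector (e.g. a fiducial tracker) found those corners in the image (pixels).
    image_pts = np.array([[300.0, 220.0],
                          [340.0, 222.0],
                          [338.0, 262.0],
                          [298.0, 260.0]], dtype=np.float64)

    # Assumed pinhole intrinsics for a 640x480 camera.
    K = np.array([[525.0, 0.0, 320.0],
                  [0.0, 525.0, 240.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)

    # Solve the Perspective-n-Point problem: rotation and translation of the marker
    # relative to the camera (equivalently, the camera pose relative to the marker).
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
    if ok:
        R, _ = cv2.Rodrigues(rvec)            # rotation vector -> 3x3 rotation matrix
        cam_pos = (-R.T @ tvec).ravel()       # camera position in the marker frame
        print("camera position in marker frame (m):", cam_pos)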

 

Another application that is being shown here: you can use augmented reality to see through the eyes of another vehicle or robot, so there’s a guy here who’s flying a UAV around and viewing the output from the drone on his Google Glass display. Whether it’s remotely operated, semi-autonomous or fully autonomous, using AR technology you can put your eyes into the vehicle and see what the vehicle is seeing, basically. It can be used as a kind of telepresence thing.

Is the flow two-way? Will increasing AR uptake drive improvements in sensors and CV algorithms for robotics?

I don’t think AR is driving sensor technology yet because it’s such a small market. With mobile devices, when you put a GPS in a cell phone, that drove down the price of GPS chips and it made it possible for us to use GPS for augmented reality on consumer devices. And that same chip that goes into a cellphone, and now costs 50c, you can put into a flying robot. But when we first started doing augmented reality – especially mobile augmented reality – you had tens of thousands of dollars of hardware that you were carrying around.

 

We were carrying heavy GPS hardware, compasses and inertial units, and some of the robotics researchers were using the same pieces of hardware. We were paying a high price, and I think the US military, as they started putting sensors into their platforms, drove the price down. And especially with mobile, that drove the price down substantially and we benefitted from that, both AR and robotics. So AR is too small to provide a benefit back to robotics, but we both benefit now from gaming, entertainment and automotive.

Your presentation on multimodal communication has clear applications for HRI.

Definitely. I’ve been working on that for a while, and it turns out that when you have a robot that’s embodied in a human form, people want to use human communication with it. So it’s very natural to point at something and tell a robot to ‘go over there’. But if a robot can’t understand what I’m pointing at, or has no concept of space, or can’t understand my voice, then it’s not going to be able to go over there.
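
As a toy illustration of that fusion problem (not Billinghurst's system), the sketch below resolves a spoken ‘go over there’ by intersecting a pointing ray, standing in for the output of a hand or arm tracker, with the ground plane to produce a navigation target. The ray origin, direction and command grammar are all assumed values.

    # Sketch: fuse a speech command with a pointing gesture to produce a robot goal.
    # The pointing ray stands in for tracker output; the one-phrase grammar is a
    # made-up minimal example, not an actual HRI system.
    import numpy as np

    def intersect_ray_with_ground(origin, direction, ground_z=0.0):
        """Return the point where a pointing ray hits the ground plane z = ground_z."""
        origin, direction = np.asarray(origin, float), np.asarray(direction, float)
        if abs(direction[2]) < 1e-6:
            return None                       # ray is parallel to the ground
        t = (ground_z - origin[2]) / direction[2]
        return origin + t * direction if t > 0 else None

    def interpret(speech, pointing_ray):
        """Combine a recognized utterance with the current pointing gesture."""
        if "go over there" in speech.lower():
            target = intersect_ray_with_ground(*pointing_ray)
            if target is not None:
                return {"action": "navigate", "target": target[:2].tolist()}
        return {"action": "ignore"}

    # Assumed tracker output: hand at shoulder height, pointing forward and down.
    ray = ([0.0, 0.0, 1.4], [1.0, 0.2, -0.5])
    print(interpret("Robot, go over there", ray))
    # -> navigate to roughly (2.8, 0.56) on the ground in front of the speaker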

 

When I was doing my PhD at MIT, they were doing some work on speech and gesture recognition with human characters and robots. Cynthia Breazeal does a lot of work with social robotics there. People are starting to develop taxonomies for what gestures and speech mean together. And that’s come out of not so much AR, but interacting with avatars and robots.

 

One of the other interesting things my colleague Christoph Bartneck is doing at HIT Lab is an artificial language for communicating with robots, because English is quite imprecise and difficult to learn. He invented a language called ROILA which is very easy to learn: it has a very small vocabulary and provides very precise information. His idea is that in the future people will communicate with robots using an artificial language that reduces the amount of miscommunication and is tailored to both learnability and understandability from the robot’s perspective. He’s had some success getting the ROILA language used by some robotics groups.





Andra Keay is the Managing Director of Silicon Valley Robotics, founder of Women in Robotics and is a mentor, investor and advisor to startups, accelerators and think tanks, with a strong interest in commercializing socially positive robotics and AI.