Robohub.org

Are gestures the future of robotic control?

by Anna Johansson
29 April 2017




A few decades ago, touchscreens were impressive yet clunky pieces of technology, reserved for applications that did little more than show off that touchscreens were possible. Today’s touchscreens are commonplace and readily accepted as an easy way to interact with an operating system: they’re accurate, support multi-touch, are comfortable to use, and can even respond to how hard you press.

We may not have perfected touchscreens yet, but we’re getting close. Accordingly, engineers and researchers are already speculating about the next phase of UI development, especially for robotics control. So far, the leading candidate is gesture-based control—the use of physical gestures to relay commands.


The idea

The major limitation for touchscreens is the fact that they operate only in two dimensions; the third dimension introduced with force touch is extremely limited. Comparatively, hand gestures and physical movements can operate in three dimensions, and depending on how they’re designed, could feel more natural than swiping and tapping a smartphone screen.

Demand for three-dimensional gestural control is increasing with the onset of virtual reality (VR) and augmented reality (AR) technology; because the digital world we experience will be moving from two dimensions to three, the controls we use to manage those experiences will also need to change. With enough sophistication, these systems could provide better feedback to the users in control: rather than merely responding with visual feedback such as movement or lights, they could immerse users with physical feedback such as vibration or resistance.


Where we stand

Currently, one of the most advanced gestural systems is the Real-Time 3D Gesture Analysis for Natural Interaction with Smart Devices, a project led by researchers at Linnaeus University in Sweden. However, simpler forms of gesture-based control are already available.

For example, there are projects that use Arduino to create a robot that can respond to four different simple hand gestures (plus a neutral position). Of course, iPhone technology also makes use of some simple “gestures,” such as shaking the phone to undo typing or rotating the phone to experience an immersive 360-degree view of an image or video.
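The Arduino-style projects mentioned above typically map raw tilt readings from an accelerometer to a handful of commands. A minimal sketch of that idea, in Python for readability (the axis names, thresholds, and gesture labels here are illustrative, not from any specific project):

```python
def classify_gesture(ax, ay, az, threshold=0.5):
    """Map accelerometer tilt readings (in g) to one of four simple
    hand gestures plus a neutral position. Tilting the hand forward,
    backward, left, or right past the threshold selects a command;
    anything else reads as neutral. Thresholds are illustrative."""
    if ay > threshold:
        return "forward"
    if ay < -threshold:
        return "backward"
    if ax > threshold:
        return "right"
    if ax < -threshold:
        return "left"
    return "neutral"
```

On a real build, the same logic would run on the microcontroller, reading the sensor in a loop and sending the resulting command to the robot’s motor driver.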


The main obstacles

There are a few obstacles preventing gestures from being solidified as the next landmark in robotics control, however:

  • Gesture complexity. The biggest obstacle described by researchers at Linnaeus University is the sheer complexity of physical gestures; to be effective, their recognition systems need to gather thousands of tiny data points and interpret complex patterns to “understand” what movement is being attempted. Moreover, this data-intensive interpretation needs to happen in real time—especially for applications like live robotics control or a VR-based video game. That demands not only an incredibly intelligent system, but also a processor that can operate quickly.
  • Accessibility. Gesture recognition systems would likely be developed to accommodate a “standard” human model, such as a “standard” right human hand. How would the system accommodate somebody whose hand was missing, or who is missing a few fingers? What about left-handed people? And people with Parkinson’s disease, or who are unable to operate their hands with precision control?
  • Applications. For gestures to be accepted as a mainstream way to interact with robotics and virtual environments, those platforms first need to be widely available. VR technology has been advancing strongly for the past several years and is poised to take a big step forward in user adoption by 2020, but until those adoption numbers are reliable, demand for gestural systems remains relatively low. Using hand movements and other physical gestures to control a two-dimensional screen, for example, would be largely ineffective.
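The cost of the real-time recognition problem described above can be illustrated with a toy trajectory matcher (this is not the Linnaeus system; the template names and shapes are hypothetical). Even this simple dynamic-time-warping comparison, which tolerates gestures performed at different speeds, grows quadratically with trajectory length—one hint at why full-scale recognition demands fast hardware:

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two gesture trajectories,
    each a list of (x, y, z) points. Cost is O(len(seq_a) * len(seq_b))."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(seq_a[i - 1], seq_b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def recognize(trajectory, templates):
    """Return the name of the template gesture closest to the observed
    trajectory, by DTW distance."""
    return min(templates, key=lambda name: dtw_distance(trajectory, templates[name]))
```

A usage sketch: with templates `{"swipe_right": [(0,0,0), (1,0,0), (2,0,0)], "swipe_up": [(0,0,0), (0,1,0), (0,2,0)]}`, a noisy rightward trajectory resolves to `"swipe_right"`. Production systems must do far more—continuous segmentation, thousands of landmarks per frame, per-user variation—which is exactly the complexity the researchers describe.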

On paper, gestures seem like the best form of control for the digital devices, robots, and VR systems of the future. However, there are still many obstacles to overcome before we’re ready for large-scale adoption. Fortunately, researchers are ahead of the curve, already building the intelligent gesture-recognition systems we’ll need when touchscreens become obsolete.





Anna Johansson is a freelance writer, researcher, and business consultant.




