Welcome to the voting for the Audience Choice Demo from HRI 2020 (voting closed on May 14 at 11:59 PM BST). Each of these demos showcases an aspect of Human-Robot Interaction research, and alongside the “Best Demo” award, we’re offering an “Audience Choice” award. You can see the video and abstract from each demo here. You can also register for the Online HRI 2020 Demo Discussion and Award Presentation on May 21 at 4:00 PM BST.
1. Demonstration of A Social Robot for Control of Remote Autonomous Systems José Lopes, David A. Robb, Xingkun Liu, Helen Hastie
Abstract: There are many challenges when it comes to deploying robots remotely, including a lack of situation awareness for the operator, which can lead to decreased trust and lack of adoption. For this demonstration, delegates interact with a social robot that acts as a facilitator and mediator between them and the remote robots running a mission in a realistic simulator. We will demonstrate how such a robot can use spoken interaction and social cues to facilitate teaming between itself, the operator and the remote robots.
2. Demonstrating MoveAE: Modifying Affective Robot Movements Using Classifying Variational Autoencoders Michael Suguitan, Randy Gomez, Guy Hoffman
Abstract: We developed a method for modifying emotive robot movements with a reduced dependency on domain knowledge by using neural networks. We use hand-crafted movements for a Blossom robot and a classifying variational autoencoder to adjust affective movement features by using simple arithmetic in the network’s learned latent embedding space. We will demonstrate the workflow of using a graphical interface to modify the valence and arousal of movements. Participants will be able to use the interface themselves and watch Blossom perform the modified movements in real time.
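To make the latent-arithmetic idea in this abstract concrete, here is a minimal sketch of modifying a movement by shifting its latent code along a "valence direction". It uses random stand-in encoder/decoder functions rather than MoveAE's trained classifying VAE, and all names, dimensions, and the slider parameter are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: stand-in encoder/decoder matrices are used in place
# of MoveAE's trained classifying VAE, to show the kind of latent-space
# arithmetic described in the abstract. Names and dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)
MOVE_DIM, LATENT_DIM = 60, 8           # e.g. a flattened joint trajectory -> 8-D latent

W_enc = rng.normal(size=(LATENT_DIM, MOVE_DIM))
W_dec = rng.normal(size=(MOVE_DIM, LATENT_DIM))

def encode(movement):                  # placeholder for the trained encoder
    return W_enc @ movement

def decode(z):                         # placeholder for the trained decoder
    return W_dec @ z

# A "valence direction" can be estimated from labelled examples as the
# difference between the mean latent codes of high- and low-valence movements.
high_valence = rng.normal(size=(20, MOVE_DIM))
low_valence = rng.normal(size=(20, MOVE_DIM))
valence_dir = encode(high_valence.mean(axis=0)) - encode(low_valence.mean(axis=0))

# Modify a hand-crafted movement by shifting its latent code along that direction.
movement = rng.normal(size=MOVE_DIM)
alpha = 0.5                            # e.g. a slider value from a graphical interface
z_modified = encode(movement) + alpha * valence_dir
modified_movement = decode(z_modified) # a trained decoder would yield a playable movement
```

With a trained model, increasing or decreasing alpha would correspond to nudging the movement toward higher or lower valence (and analogously for arousal), which is the workflow the graphical interface in the demo exposes.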
3. An Application of Low-Cost Digital Manufacturing to HRI Lavindra de Silva, Gregory Hawkridge, German Terrazas, Marco Perez Hernandez, Alan Thorne, Duncan McFarlane, Yedige Tlegenov
Abstract: Digital Manufacturing (DM) broadly refers to applying digital information to enhance manufacturing processes, supply chains, products and services. In past work we proposed a low-cost DM architecture, supporting flexible integration of legacy robots. Here we discuss a demo of our architecture using an HRI scenario.
4. Comedy by Jon the Robot John Vilk, Naomi T. Fitter
Abstract: Social robots might be more effective if they could adapt in playful, comedy-inspired ways based on social cues heard from users. Jon the Robot, a robotic stand-up comedian from the Oregon State University CoRIS Institute, showcases how this type of ability can lead to more enjoyable interactions with robots. We believe conference attendees will be both entertained and informed by this novel demonstration of social robotics.
5. CardBot: Towards an affordable humanoid robot platform for Wizard of Oz Studies in HRI Sooraj Krishna, Catherine Pelachaud
Abstract: CardBot is a cardboard-based programmable humanoid robot platform designed for inexpensive and rapid prototyping of Wizard of Oz interactions in HRI, incorporating technologies such as Arduino, Android and Unity3d. The table demonstration showcases the design of the CardBot and its wizard controls, such as animating movements and coordinating speech and gaze, for orchestrating an interaction.
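As a rough illustration of what such wizard controls involve, the sketch below shows one way a Wizard-of-Oz console could encode speech, gaze, and animation commands and stream them to a robot process. This is not the CardBot codebase; the message fields, transport, and address are assumptions chosen only to make the example self-contained.

```python
# A minimal, hypothetical sketch of a Wizard-of-Oz command stream (not CardBot's
# actual protocol). The fields, UDP transport, and endpoint are assumptions.
import json
import socket

def make_command(action, **params):
    """Pack one wizard action (speech, gaze, animation, ...) as a JSON string."""
    return json.dumps({"action": action, "params": params})

commands = [
    make_command("say", text="Hello, nice to meet you!"),
    make_command("gaze", target="participant", duration_s=2.0),
    make_command("animate", clip="wave", speed=1.0),
]

# The wizard interface could stream these to the robot process (for example a
# Unity3d scene driving Arduino servos) over a local UDP socket.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for cmd in commands:
    sock.sendto(cmd.encode("utf-8"), ("127.0.0.1", 9000))  # assumed address/port
```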
6. Towards Shoestring Solutions for UK Manufacturing SMEs Gregory Hawkridge, Benjamin Schönfuß, Duncan McFarlane, Lavindra de Silva, German Terrazas, Liz Salter, Alan Thorne
Abstract: In the Digital Manufacturing on a Shoestring project we focus on low-cost digital solution requirements for UK manufacturing SMEs. This paper shows that many of these requirements fall in the HRI domain and presents the use of low-cost, off-the-shelf technologies in two demonstrators based on voice-assisted production.
7. PlantBot: A social robot prototype to help with behavioral activation in young people with minor depression Max Jan Meijer, Maaike Dokter, Christiaan Boersma, Ashwin Sadananda Bhat, Ernst Bohlmeijer, Jamy Li
Abstract: The PlantBot is a home device that shows iconographic or simple lights to depict actions that it requests a young person (its user) to do as part of Behavioral Activation therapy. In this initial prototype, a separate conversational speech agent (i.e., Amazon Alexa) is wizarded to act as a second system the user can interact with.
8. TapeBot: The Modular Robotic Kit for Creating the Environments Sonya S. Kwak, Dahyun Kang, Hanbyeol Lee, JongSuk Choi
Abstract: Various types of modular robotic kits, such as the Lego Mindstorms [1], the edutainment robot kit by ROBOTIS [2], and the interactive face components, FacePartBot [3], have been developed and suggested to increase children’s creativity and to help them learn robotic technologies. By adopting a modular design scheme, these robotic kits enable children to design various robotic characters with plenty of flexibility and creativity, such as humanoids, robotic animals, and robotic faces. However, because a robot is an artifact that perceives an environment and responds to it accordingly, it can also be characterized by the environment it encounters. Thus, in this study, we propose a modular robotic kit aimed at creating an interactive environment to which a robot produces various responses.
We chose intelligent tapes to build the environment for the following reasons. First, we presume that lowering consumers’ expectations of a robotic product’s functionality may increase their acceptance of it, because it reduces the mismatch between the functions expected from the product’s appearance and its actual functions [4]. We believe that tape, an everyday material, is well suited to lowering consumers’ expectations of the product and thereby helping its acceptance. Second, tape is a familiar and enjoyable material for children, and it works as a flexible module: users can cut it to whatever size they want and attach and detach it with ease.
In this study, we developed a modular robotic kit for creating an interactive environment, called the TapeBot. The TapeBot is composed of a main character robot and the modular environments, which are the intelligent tapes. Whereas previous robotic kits focused on building the robot itself, the TapeBot lets its users focus on the environment that the robot encounters. By reversing this frame of thinking, we expect that the TapeBot will promote children’s imagination and creativity by letting them develop creative environments to design the interactions of the main character robot.
9. A Gesture Control System for Drones used with Special Operations Forces Marius Montebaur, Mathias Wilhelm, Axel Hessler, Sahin Albayrak
Abstract: Special Operations Forces (SOF) face extreme risks when prosecuting crimes in uncharted environments such as buildings. Autonomous drones could potentially save officers’ lives by assisting in those exploration tasks, but an intuitive and reliable way of communicating with autonomous systems has yet to be established. This paper proposes a set of gestures designed to be used by SOF during operations to interact with autonomous systems.
10. CoWriting Kazakh: Learning a New Script with a Robot – Demonstration Bolat Tleubayev, Zhanel Zhexenova, Thibault Asselborn, Wafa Johal, Pierre Dillenbourg, Anara Sandygulova
Abstract: This interdisciplinary project aims to assess and manage the risks relating to the transition of the Kazakh language from Cyrillic to Latin in Kazakhstan, in order to address the challenges of a) teaching and motivating children to learn a new script and its associated handwriting, and b) training and providing support for all demographic groups, in particular the senior generation. We present a system demonstration that proposes to assist and motivate children to learn a new script with the help of a humanoid robot and a tablet with a stylus.
11. Voice Puppetry: Towards Conversational HRI WoZ Experiments with Synthesised Voices Matthew P. Aylett, Yolanda Vazquez-Alvarez
Abstract: Wizard of Oz (WoZ) experiments, in which an experimenter plays the part of the robot, are commonly used to research conversational factors in robot design. However, for conversational systems using a synthetic voice, it is extremely difficult for the experimenter to choose open-domain content and enter it quickly enough to retain conversational flow. In this demonstration we show how voice puppetry can be used to control a neural TTS system in almost real time. The demo aims to explore the limitations and possibilities of such a system for controlling a robot’s synthetic voice in conversational interaction.
12. Teleport – Variable Autonomy across Platforms Daniel Camilleri, Michael Szollosy, Tony Prescott
Abstract: Robotics is a very diverse field, with robots of different sizes and sensory configurations created to carry out different tasks. Different robots and platforms each require their own software ecosystem and are coded with specific algorithms that are difficult to translate to other robots.
VOTING CLOSED ON THURSDAY MAY 14 AT 11:59 PM BST [British Summer Time]