Robohub.org
 

Towards standardized experiments in human robot interactions


by Nicole Mirnig | 23 July 2015




While the Human-Robot Interaction (HRI) R&D community produces a large amount of research on the efficacy, effectiveness, user satisfaction, emotional impact, and social components of HRI, the results are difficult to compare because there are so many different ways to test and evaluate interaction. As a result, we still lack consensus and tools for benchmarking robot products and applications, even though both producers in industry and researchers in academia would benefit greatly from them.

With standardized means of assessing robot products and applications in terms of safety, performance, user experience, and ergonomics, the community would be able to produce comparable data. In the standardization community, such data is labeled “normative”, meaning that it has been formulated through wide consultation in an open and transparent manner. In this way, the results become widely acceptable and can be exploited to create international quality norms and standards, which in turn would make robot performance in HRI measurable.

Experts from academia, industry, and standardization bodies have joined forces to launch the euRobotics AISBL topic group on “Standardization”, which strives to develop standardized HRI experiments. Such experiments will allow the community to assess robotic solutions and compare data across different projects. Among the topics the group is working on are safety, performance, user experience, and modularity of robots and robotic components.

The group has organized several workshops on standardized HRI experiments, and our next event is a workshop at this year’s IROS conference in Hamburg, Germany on September 28, 2015: “Towards Standardized Experiments in Human-Robot Interaction”.

We invite interested parties to participate and contribute to our effort to tackle HRI as a horizontal topic across all robotic domains. For more information, refer to the workshop website.





Nicole Mirnig is a PhD Research Fellow at the Center for Human-Computer Interaction
