While the Human-Robot Interaction (HRI) R&D community produces a large body of research on the efficacy, effectiveness, user satisfaction, emotional impact, and social components of HRI, the results are difficult to compare because interaction is tested and evaluated in so many different ways. As a result, we still lack consensus and tools for benchmarking robot products and applications, even though both producers in industry and researchers in academia would benefit greatly from them.
With standardized means of assessing robot products and applications in terms of safety, performance, user experience, and ergonomics, the community would be able to produce comparable data. In the standardization community, such data is labeled “normative”, meaning that it has been formulated through wide consultation in an open and transparent manner. In this way, the results become widely acceptable and can be exploited for the creation of international quality norms and standards, which in turn would make robot performance in HRI measurable.
Experts from academia, industry, and standardization bodies have joined forces to launch the euRobotics AISBL topic group on “Standardization”, which strives to develop standardized HRI experiments. Such experiments will allow the community to assess robotic solutions and compare data across different projects. Among the topics the group is working on are safety, performance, user experience, and modularity of robots and robotic components.
The group has organized several workshops on standardized HRI experiments, and our next event is a workshop at this year’s IROS conference in Hamburg, Germany, on September 28, 2015: “Towards Standardized Experiments in Human-Robot Interaction”.
We invite interested parties to participate and contribute to our effort to tackle HRI as a horizontal topic across all robotic domains.
For more information, refer to the workshop website.