AI for society: creating AI that supports equality, transparency, and democracy

27 February 2019




By Jessica Montgomery, Senior Policy Adviser

The Royal Society’s artificial intelligence (AI) programme explores the frontiers of AI technologies, and their implications for individuals, communities, and society.

As part of our programme of international science and policy dialogue about AI, last year we worked with the American Academy of Arts and Sciences to bring together leading researchers from across disciplines to consider the implications of AI for equality, transparency, and democracy.

This blog gives some of the key points from discussions, which are summarised in more detail in the workshop discussions note (PDF).

AI for society

Today’s AI technologies can help create highly accurate systems, which are able to automate sophisticated tasks. As these technologies progress, researchers and policymakers are grappling with questions about how well these technologies serve society’s needs.

Experience of previous waves of transformative technological change shows that, even when society has a good understanding of the issues a new technology presents, it is difficult to create a common vision for the future, to put in place measures that align the present with that desired future, and to mobilise collective action to bring it about.

Where might this common vision for the future come from? International conventions on human rights and sustainable development already point to areas of internationally-agreed need for action, and a growing list of organisations have produced guidelines or principles aiming to shape the development of AI for societal benefit. However, the mechanisms for putting these principles into practice remain unclear. At the same time, there are many other ways in which the things that societies value seem to be contested. In this context, what might AI mean for core values like fairness, transparency, or democracy?

Fairness and equality

Real-world data is messy: it contains missing entries, it can be skewed or subject to sampling errors, and it is often re-purposed for types of analysis it was not collected for. These errors and other issues in data collection can distort the outputs of a machine learning system, influencing how well it works for different communities of users.

The models created by a machine learning system can also generate unfair outputs, even if trained on accurate data. In recruitment, for example, systems that make predictions about the outcomes of job offers or training can be influenced by biases arising from social structures that are embedded in data at the point of collection. A lack of diversity in the tech community can compound these technical issues, if it reduces the extent to which developers are aware of potential biases when designing machine learning systems.
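To make this concrete, here is a minimal sketch, using synthetic data and the widely available numpy and scikit-learn libraries, of how under-representation in training data can translate into unequal accuracy across groups. The groups, distributions, and numbers are all illustrative assumptions, not data from the workshop.

```python
# Minimal sketch: a model trained on data where one group is under-represented
# can perform well overall while failing the group it rarely saw.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic binary-classification data; `shift` moves the class boundary."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training sample; group B is under-represented
# and drawn from a different distribution.
Xa, ya = make_group(1000, shift=0.0)  # well-sampled group
Xb, yb = make_group(50, shift=1.5)    # under-sampled group
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh samples from each group separately: aggregate accuracy
# hides the gap that per-group evaluation reveals.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(500, shift)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```

Run as-is, the classifier typically scores close to perfectly on the well-sampled group and near chance on the under-sampled one, because it has learnt a decision boundary that fits the majority of its training data.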

In seeking to resolve these issues, both technology-enabled and human-led solutions can play a role. For example:

  • Initiatives to address bias in datasets, such as Datasheets for Datasets (PDF), set out recommended usage for datasets that are made available for open use (a machine-readable sketch of this idea follows this list).
  • A combination of technical and domain insights can improve the performance of machine learning systems on ‘real world’ problems. This requires teams of people from a range of backgrounds and areas of expertise.
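As a purely hypothetical illustration of the first bullet, a datasheet can be thought of as structured metadata that travels with the dataset. The sketch below loosely follows the section headings proposed in Datasheets for Datasets (motivation, composition, collection process, uses); the field names and example values are invented for illustration, not a standard schema.

```python
# Hypothetical machine-readable datasheet, loosely modelled on the section
# headings of Datasheets for Datasets. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    motivation: str            # why the dataset was created
    composition: str           # what the instances represent, known gaps
    collection_process: str    # how and when the data was gathered
    recommended_uses: list[str] = field(default_factory=list)
    unsuitable_uses: list[str] = field(default_factory=list)  # flagged misuses

sheet = Datasheet(
    name="example-hiring-outcomes",
    motivation="Research on recruitment outcomes, not production screening.",
    composition="Applications from 2015-2018; some roles are under-sampled.",
    collection_process="Exported from an applicant-tracking system.",
    recommended_uses=["bias auditing", "methods research"],
    unsuitable_uses=["automated candidate ranking"],
)
print(sheet)
```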

Interpretability and transparency

The terms ‘interpretability’ or ‘transparency’ mean different things to different people (PDF), and words such as interpretable, explainable, intelligible, transparent or understandable are often used interchangeably, or inconsistently, in debates about AI. But is this variability in meaning problematic?

There are many reasons why users or developers might want to understand why an AI system reached a decision: interpretability can help developers improve system design; it can help users assess risk or understand how a system might fail; and it might be necessary for regulatory or legal standards.

And there are different approaches to creating interpretable systems. Some AI is interpretable by design, while in other cases researchers can create tools to interrogate complex AI systems.
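The contrast between these two approaches can be sketched in a few lines of Python. Assuming scikit-learn and synthetic data, the example below pairs a model that is interpretable by design (a shallow decision tree whose rules can be printed and read) with a post-hoc tool (permutation importance) used to interrogate a more opaque ensemble; this is one illustrative pairing among many, not a prescription from the workshop.

```python
# Two routes to interpretability: a model readable by design, and a
# post-hoc probe applied to a less transparent model. Data is synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)  # feature f2 is irrelevant

# Interpretable by design: a shallow tree whose decision rules print directly.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["f0", "f1", "f2"]))

# Post-hoc interrogation: permutation importance estimates how much each
# feature contributes to the opaque model's predictions.
forest = RandomForestClassifier(n_estimators=100).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
print(dict(zip(["f0", "f1", "f2"], result.importances_mean.round(3))))
```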

The significance attached to interpretability, and the type of interpretability that is desirable, will likely depend on the context and user. So the absence of a clear definition of interpretability might ultimately be helpful, if it encourages researchers to reach out to a range of stakeholders to understand what different communities need from AI systems.

Democracy and civil society

Early in the development of digital technologies, a great hope had been that they would enable people to connect and build communities in new ways, strengthening society and promoting new forms of citizen engagement. To some extent, this goal has been achieved: people have an opportunity to communicate with much broader – or much narrower – groups in ways that were not previously possible.

Many countries today are grappling with the unintended consequences of these networks. The information echo chambers that have long existed in the physical world have found new manifestations in algorithmically-enabled filter bubbles, and the anonymity afforded by digital interactions has raised new questions about the trustworthiness of online information.

Public and policy debates about AI and democracy have tended to concentrate on how changing patterns of news consumption might shape people’s political opinions. In response, could AI be used to improve the circulation of information, providing people with trustworthy information necessary to inform political debate? Perhaps, but insights from behavioural and social sciences show that the process of making political choices is influenced by emotional and social forces, as well as information.

Democracy is more than the exchange of information in campaigns and elections. It draws from a collection of institutions and civic interactions. Democracy persists because institutions preserve it: in the press and the electoral process, but also in courts, in schools, in hospitals, and more. If democracy resides in institutions, then how can AI support them? There is a need for spaces, online and offline, where people can develop civic networks or new civic institutions that allow people from different backgrounds to engage as citizens in common endeavours.

To read more about the key questions raised by this discussion, check out the meeting note (PDF), and you can also read more about the Society’s AI programme.

 




The Royal Society is a Fellowship of many of the world's most eminent scientists and is the oldest scientific academy in continuous existence.









