Robohub.org
 

MIT Media Lab to participate in $27 million initiative on AI ethics and governance


10 January 2017



Image: Gerd Altmann

The MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University will serve as the founding anchor institutions for a new initiative aimed at bridging the gap between the humanities, the social sciences, and computing by addressing the global challenges of artificial intelligence (AI) from a multidisciplinary perspective.

“Artificial intelligence agents will impact every part of our lives in every society on Earth. Technology and commerce will see to that,” says Alberto Ibargüen, president and CEO of the John S. and James L. Knight Foundation, which is among those supporting the initiative.

Initially funded with $27 million from the Knight Foundation; LinkedIn co-founder Reid Hoffman; the Omidyar Network; the William and Flora Hewlett Foundation; and Jim Pallotta, founder of the Raptor Group, the Ethics and Governance of Artificial Intelligence Fund’s mission is to catalyze global research that advances AI for the public interest, with an emphasis on applied research and education. The fund will also seek to advance public understanding of AI.

“AI’s rapid development brings along a lot of tough challenges,” explains Joi Ito, director of the MIT Media Lab. “For example, one of the most critical challenges is how do we make sure that the machines we ‘train’ don’t perpetuate and amplify the same human biases that plague society? How can we best initiate a broader, in-depth discussion about how society will co-evolve with this technology, and connect computer science and social sciences to develop intelligent machines that are not only ‘smart,’ but also socially responsible?”

What makes this new initiative different and necessary is that it’s aimed at transcending barriers and breaking down silos among disciplines. As founding academic institutions, the Media Lab and Berkman Klein Center, along with other potential collaborators from the public and private sectors, will act as a mechanism to reinforce cross-disciplinary work and encourage intersectional peer dialogue and collaboration.

The fund — projected to operate with a phased approach over the next several years — will complement and collaborate with existing efforts and communities, such as the upcoming public symposium “AI Now,” which is scheduled for July 10 at the MIT Media Lab. The fund will also oversee an AI fellowship program, identify and provide support for collaborative projects, build networks out of the people and organizations currently working to steer AI in directions that help society, and also convene a “brain trust” of experts in the field.


A collaborative network

The Media Lab and the Berkman Klein Center for Internet and Society will leverage a network of faculty, fellows, staff, and affiliates who will collaborate on unbiased, sustained, evidence-based, solution-oriented work that cuts across disciplines and sectors. This research will include questions that address society’s ethical expectations of AI, using machine learning to learn ethical and legal norms from data, and using data-driven techniques to quantify the potential impact of AI, for example, on the labor market.

Work of this nature is already being undertaken at both institutions. The Media Lab has been exploring some of the moral complexities associated with autonomous vehicles in the Scalable Cooperation group, led by Iyad Rahwan. And the Personal Robots group, led by Cynthia Breazeal, is investigating the ethics of human-robot interaction.

“AI could be as big a disruptor to the world as the Industrial Revolution was in the 18th and 19th centuries,” Rahwan says. He cites transportation systems and employment as among the areas that will likely be affected by automation and AI. “What we need is something that puts our entire society in the control loop of these systems … technologists, engineers, the public, ethicists, cognitive scientists, economists, legal scholars, anthropologists, faith leaders, government regulators — everyone crucial to protecting the public interest.”

“Artificial Intelligence provides the potential for deeply personalized learning experiences for people of all ages and stages,” says Breazeal, who emphasizes the need for AI to reach people in developing nations and underserved populations. But she adds that it is also “a kind of double-edged sword. What should it be learning and adapting to benefit you? And what should it do to protect your privacy and your security?”


Shared goals and governance

The Berkman Klein Center has been working to develop public interest-oriented solutions to many of the challenges of the digital age, incubating programs such as Creative Commons and the Digital Public Library of America. And, it is currently collaborating with the Media Lab on the “Assembly” program, which gathers high-level developers and tech industry professionals for a rigorous three-week course at Harvard University, followed by a 12-week collaborative development period to explore difficult problems in cybersecurity.

“The thread running through these otherwise disparate phenomena is a shift of reasoning and judgment away from people,” says Jonathan Zittrain, co-founder of the Berkman Klein Center and professor of law and computer science at Harvard University. “Sometimes that’s good, as it can free us up for other pursuits and for deeper undertakings. And sometimes it’s profoundly worrisome, as it decouples big decisions from human understanding and accountability. A lot of our work in this area will be to identify and cultivate technologies and practices that promote human autonomy and dignity rather than diminish it.”

The Ethics and Governance of Artificial Intelligence Fund will be governed by a small board, consisting of leadership from each participating foundation and institution. Ito says that convening all perspectives now is essential. “Instead of setting up an institution, we’ve decided to create a dynamic network. There are a number of areas where the deployment of machine learning is already starting to pose questions that are best answered in an interdisciplinary way. This project that we’re all embarking on is just the beginning.”







MIT News






©2024 - Association for the Understanding of Artificial Intelligence


 











