Automation should complement professional expertise, not replace it

by Frank Pasquale
19 October 2016




Photo credit: Robert Shields

Will your next doctor be an app? A cost-cutting NHS wants more patients to act as “self-carers,” with some technologized assistance. A series of flowcharts and phone trees might tell parents whose children have chicken pox how best to care for them—no visits to surgeries required. Or a mole-checking app might tell a worrywart when a given skin discoloration looks harmless, and when to go to a dermatologist, by comparing it to thousands of images in a database.
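For concreteness, the “flowcharts and phone trees” in such self-care tools amount to fixed decision rules. Below is a minimal sketch of that kind of rule tree in Python; every symptom, threshold, and piece of advice in it is an invented assumption for illustration, not clinical guidance.

```python
# A minimal sketch of a rule-based triage flow like the NHS scenario
# above. All symptoms, thresholds, and advice strings are illustrative
# assumptions, not clinical guidance.

def triage_chickenpox(temp_c: float, days_of_fever: int,
                      rash_near_eyes: bool, drowsy_or_confused: bool) -> str:
    """Walk a fixed decision tree and return a disposition."""
    if drowsy_or_confused or temp_c >= 39.0:
        return "urgent: contact a GP or emergency services"
    if rash_near_eyes or days_of_fever > 4:
        return "book a GP appointment"
    return "self-care: fluids, rest, and paracetamol as directed"

print(triage_chickenpox(temp_c=37.8, days_of_fever=2,
                        rash_near_eyes=False, drowsy_or_confused=False))
```

The appeal to cost-cutters is obvious: once written, the tree answers every call the same way. The limits are equally obvious: the tree can only ask what its authors anticipated.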

Cost-cutters in the legal field also promise an algorithmically cheapened future. Tax software simplifies the process of filing by walking the filer through a series of questions. Documents that might have taken human attorneys months to read can be scanned for keywords in a matter of seconds. Predictive policing promises to deploy force with surgical precision.
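Here is a minimal sketch of the kind of keyword scan described above, assuming a folder of plain-text documents and an invented keyword list. (Real “predictive coding” systems use trained classifiers rather than this simple matching; the sketch shows only the basic pass.)

```python
# A minimal sketch of scanning a document collection for keywords.
# The folder name and keyword list are illustrative assumptions.
import pathlib

KEYWORDS = {"indemnify", "breach", "termination"}  # assumed terms of interest

def scan(directory: str) -> dict[str, set[str]]:
    """Map each document to the keywords it contains."""
    hits = {}
    for path in pathlib.Path(directory).glob("*.txt"):
        text = path.read_text(errors="ignore").lower()
        found = {kw for kw in KEYWORDS if kw in text}
        if found:
            hits[path.name] = found
    return hits

print(scan("discovery_docs"))  # assumed folder of plain-text documents
```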

All these initiatives have some promise, and may make health care and legal advice more accessible. But they are also prone to errors, biases, and predictable malfunctions. Last year, the US Federal Trade Commission settled lawsuits against firms that claimed their software could aid in the detection of skin cancer by evaluating photographs of the user’s moles. The FTC argued that there was insufficient evidence to support such claims. The companies are now prohibited from making any “health or disease claims” about the impact of the apps on users’ health unless they provide “reliable scientific evidence” grounded in clinical tests. If algorithms designed merely to inform patients aren’t ready for prime time, why presume diagnostic robots are imminent?

Legal automation has also faced serious critiques of late. The University of North Carolina legal scholar Dana Remus has questioned the value and legitimacy of the “predictive coding” now deployed in many discovery proceedings. She and her co-author Frank S. Levy (of MIT) raise doubts about more advanced applications of legal automation as well: the future cannot be completely anticipated in contracts, nor can difficult judgment calls be perfectly encoded into the oft-reductionist formulae of data processing. Errant divorce software may have caused thousands of errors in the UK recently, just as US software systems have disrupted or derailed the proper disposition of benefits applications.

Moreover, several types of opacity impede public understanding of algorithmic ranking and rating processes in even more familiar contexts, like credit scoring or search rankings. Consumers do not understand all the implications of the US credit scoring process, and things are about to get worse as “alternative” or “fringe” data moves into the lending mix at some startups. If the consequences of being late on a bill are not readily apparent to consumers, how can they hope to grasp new scoring systems that draw on their social media postings, location data, and hundreds of other data points? At the level of companies, many firms do not believe that Google, Facebook, and Amazon are playing fair in their algorithmic rankings of websites, ads, and products. These concerns, too, are stymied by the widespread secrecy of both the algorithms and the data fed into them.
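A toy sketch of why such scores resist scrutiny: even a tiny linear model over a few behavioral features produces a number whose drivers are invisible to the person being scored. All features and weights below are invented for illustration; real scoring models are proprietary and far more complex.

```python
# A toy illustration of "fringe data" scoring. Every feature and weight
# here is invented; the point is only that the consumer sees the final
# number, not the inputs or the weights behind it.

FEATURES = {
    "late_payments": 2,         # traditional bureau data
    "night_posts_per_week": 9,  # hypothetical social media signal
    "distinct_locations": 14,   # hypothetical location-data signal
}

WEIGHTS = {  # hidden from the consumer in a real system
    "late_payments": -40.0,
    "night_posts_per_week": -1.5,
    "distinct_locations": -0.8,
}

score = 700 + sum(WEIGHTS[k] * v for k, v in FEATURES.items())
print(f"opaque score: {score:.0f}")  # the consumer sees only this number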

In response, legal scholars have focused on remediable legal secrecy (curbing trade secrets and improving monitoring by watchdogs) and complexity (forbidding certain contractual arrangements when they become so complicated that regulators or citizens cannot understand their impact). I have recommended certain forms of transparency for software: for example, permitting experts to inspect the code at suspect firms, as well as the communications between managers and technical staff. The recent Volkswagen scandal served as yet another confirmation that regulators need to understand code.

But there is a larger lesson in these failures of algorithmic ordering. Rather than trying to replace the professions with robots and software, we should instead ask how professional expertise can better guide the implementation of algorithmic decision-making procedures. Ideally, doctors using software in medical settings should be able to inspect the inputs (data) that go into it, restrict the contexts in which it is used, and demand outputs that avoid disparate impacts. The same goes for attorneys and other professionals now deploying algorithmic arrangements of information. We will be looking at “The Promise and Limits of Algorithmic Accountability in the Professions” at Yale Law School this spring, and we welcome further interventions to clarify the complementarity between professional and computational expertise.

This post was originally published on the website of Nesta.





Frank Pasquale is Professor of Law at the University of Maryland Francis King Carey School of Law...




