Automation should complement professional expertise, not replace it

by Frank Pasquale
19 October 2016



Photo credit: Robert Shields

Will your next doctor be an app? A cost-cutting NHS wants more patients to act as “self-carers,” with some technological assistance. A series of flowcharts and phone trees might tell parents whose children have chickenpox how best to care for them, with no visit to a surgery required. Or a mole-checking app might tell a worrywart when a given skin discoloration looks harmless and when to see a dermatologist, by comparing it to thousands of images in a database.
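To see what such a comparison might involve, here is a minimal sketch of a nearest-neighbour triage function, assuming mole photographs have already been reduced to numeric feature vectors. The features, reference data, and labels below are all invented for illustration; real systems use trained image classifiers, and even their claims demand clinical validation.

```python
import numpy as np

# Hypothetical reference set: feature vectors for moles previously labeled
# by dermatologists (e.g., color variance, border irregularity, diameter mm).
reference_features = np.array([
    [0.2, 0.1, 3.0],   # labeled benign
    [0.3, 0.2, 4.0],   # labeled benign
    [0.8, 0.7, 7.5],   # labeled suspicious
    [0.9, 0.6, 8.0],   # labeled suspicious
])
reference_labels = ["benign", "benign", "suspicious", "suspicious"]

def triage(query: np.ndarray, k: int = 3) -> str:
    """Return the majority label among the k nearest reference moles."""
    distances = np.linalg.norm(reference_features - query, axis=1)
    nearest = np.argsort(distances)[:k]
    votes = [reference_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# A new user's mole, already converted into the same feature space.
print(triage(np.array([0.85, 0.65, 7.8])))  # -> "suspicious"
```

Everything turns on the quality and coverage of the labeled reference set, which is exactly where such apps have been challenged.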

Cost-cutters in the legal field also promise an algorithmically cheapened future. Tax software simplifies filing by walking the filer through a series of questions. Documents that might have taken human attorneys months to read can be scanned for keywords in a matter of seconds. Predictive policing promises to deploy force with surgical precision.
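For contrast with months of human review, here is a minimal sketch of that kind of keyword scan; the file names, contents, and search terms are invented. Real e-discovery tools go further, ranking documents with trained classifiers ("predictive coding") rather than literal string matching.

```python
import re

# Hypothetical corpus standing in for a discovery production.
documents = {
    "email_0412.txt": "Please shred the Q3 audit memo before Friday.",
    "memo_0098.txt":  "Quarterly revenue figures attached for review.",
    "email_1533.txt": "The audit committee meets Tuesday; bring the memo.",
}
keywords = ["shred", "audit", "off the books"]

def flag_documents(docs: dict[str, str], terms: list[str]) -> dict[str, list[str]]:
    """Return, for each document, the search terms it contains (case-insensitive)."""
    hits = {}
    for name, text in docs.items():
        matched = [t for t in terms if re.search(re.escape(t), text, re.IGNORECASE)]
        if matched:
            hits[name] = matched
    return hits

print(flag_documents(documents, keywords))
# {'email_0412.txt': ['shred', 'audit'], 'email_1533.txt': ['audit']}
```

Even this toy illustrates the worry raised below: a keyword scan finds strings, not meaning, and cannot make the judgment calls a human reviewer would.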

All these initiatives have some promise, and may make health care and legal advice more accessible. But they are also prone to errors, biases, and predictable malfunctions. Last year, the US Federal Trade Commission settled lawsuits against firms that claimed their software could aid in the detection of skin cancer by evaluating photographs of the user’s moles. The FTC argued that there was insufficient evidence to support such claims. The companies are now prohibited from making any “health or disease claims” about the impact of the apps on users’ health unless they provide “reliable scientific evidence” grounded in clinical tests. If algorithms designed merely to inform patients aren’t ready for prime time, why presume diagnostic robots are imminent?

Legal automation has also faced serious critiques lately. The University of North Carolina legal scholar Dana Remus has questioned the value and legitimacy of the “predictive coding” now deployed in many discovery proceedings. She and her co-author Frank S. Levy (of MIT) raise serious questions about more advanced applications of legal automation as well. The future cannot be completely anticipated in contracts, nor can difficult judgment calls be perfectly encoded in the oft-reductionist formulae of data processing. Errant divorce software may recently have caused thousands of errors in the UK, just as US software systems have disrupted or derailed the proper disposition of benefits applications.

Moreover, several types of opacity impede public understanding of algorithmic ranking and rating processes in even more familiar contexts, like credit scoring or search rankings. Consumers do not understand all the implications of the US credit scoring process, and things are about to get worse as “alternative” or “fringe” data moves into the lending mix at some startups. If the consequences of being late on a bill are not readily apparent to consumers, how can they hope to grasp new scoring systems that draw on their social media postings, location data, and hundreds of other data points? Companies have parallel complaints: many firms do not believe that Google, Facebook, and Amazon play a fair game in their algorithmic rankings of websites, ads, and products. Efforts to investigate these concerns are stymied by the widespread secrecy of both the algorithms and the data fed into them.

In response, legal scholars have focused on remediable legal secrecy (curbing trade secrets and improving monitoring by watchdogs) and complexity (forbidding certain contractual arrangements when they become so complicated that regulators or citizens cannot understand their impact). I have recommended certain forms of transparency for software: for example, permitting experts to inspect the code of suspect firms, as well as the communications between managers and technical staff. The recent Volkswagen scandal was yet another confirmation that regulators need to understand code.

But there is a larger lesson in these failures of algorithmic ordering. Rather than trying to replace the professions with robots and software, we should ask how professional expertise can better guide the implementation of algorithmic decision-making. Ideally, doctors using software in medical settings should be able to inspect the inputs (data) that go into it, restrict the contexts in which it is used, and demand outputs that avoid disparate impacts. The same goes for attorneys and other professionals now deploying algorithmic arrangements of information. We will be examining “The Promise and Limits of Algorithmic Accountability in the Professions” at Yale Law School this spring, and welcome further interventions to clarify the complementarity between professional and computational expertise.
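As one concrete example of demanding outputs that avoid disparate impacts, a professional could audit a system’s decisions against the four-fifths (80%) rule familiar from US employment-discrimination analysis. The sketch below is hypothetical, with invented decision counts, and illustrates the audit idea rather than a legal standard for any particular domain.

```python
# Hypothetical audit: compare favorable-outcome rates across groups using
# the "four-fifths rule": a selection rate below 80% of the highest group's
# rate is a conventional red flag for disparate impact.
decisions = {
    # group: (favorable outcomes, total decisions) - invented numbers
    "group_a": (480, 600),
    "group_b": (270, 450),
}

rates = {g: favorable / total for g, (favorable, total) in decisions.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio to best={ratio:.2f} -> {flag}")
# group_a: rate=0.80, ratio to best=1.00 -> ok
# group_b: rate=0.60, ratio to best=0.75 -> FLAG
```

A check like this only works if professionals can actually see the inputs and outputs, which is the transparency point made above.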

This post was originally published on the website of Nesta.





Frank Pasquale is Professor of Law at the University of Maryland Francis King Carey School of Law...





