Automation should complement professional expertise, not replace it

19 October 2016


Photo credit: Robert Shields

Will your next doctor be an app? A cost-cutting NHS wants more patients to act as “self-carers,” with some technologized assistance. A series of flowcharts and phone trees might tell parents whose children have chicken pox how best to care for them—no visits to surgeries required. Or a mole-checking app might tell a worrywart when a given skin discoloration looks harmless, and when to go to a dermatologist, by comparing it to thousands of images in a database.
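A mole-checking app of the kind described above would likely rest on some form of similarity search against a labeled image database. The following is a minimal, purely illustrative sketch assuming images have already been reduced to simple numeric features (size, asymmetry, colour variance); the feature values, labels, and thresholds are all hypothetical, not drawn from any real product:

```python
# Toy nearest-neighbour sketch of a mole-checking app: compare a new
# mole's feature vector to a small labeled reference set and report
# the majority label among the closest matches. All data is invented.

import math

# Hypothetical reference database: (feature_vector, label)
REFERENCE = [
    ((2.0, 0.1, 0.2), "benign"),
    ((2.5, 0.2, 0.1), "benign"),
    ((6.5, 0.8, 0.9), "see a dermatologist"),
    ((7.0, 0.9, 0.7), "see a dermatologist"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assess(features, k=3):
    """Return the majority label among the k nearest reference moles."""
    nearest = sorted(REFERENCE, key=lambda item: distance(features, item[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)
```

Even this toy version makes the article's worry concrete: the output is only as trustworthy as the reference database and the features chosen, neither of which the patient can inspect.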

Cost-cutters in the legal field also promise an algorithmically cheapened future. Tax software simplifies the process of filing by walking the filer through a series of questions. Documents that might have taken human attorneys months to read can be scanned for keywords in a matter of seconds. Predictive policing promises to deploy force with surgical precision.
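The keyword scanning mentioned above can be sketched in a few lines. Real e-discovery "predictive coding" systems are far more sophisticated (statistical models trained on attorney-reviewed samples); this hypothetical fragment only shows the basic mechanism of flagging documents that mention review terms:

```python
# Illustrative keyword scan over a document collection: return the ids
# of documents containing any term from a review list. Document names
# and contents are invented for the example.

def flag_documents(documents, keywords):
    """Return sorted ids of documents containing any keyword (case-insensitive)."""
    lowered = [kw.lower() for kw in keywords]
    hits = []
    for doc_id, text in documents.items():
        body = text.lower()
        if any(kw in body for kw in lowered):
            hits.append(doc_id)
    return sorted(hits)

docs = {
    "memo-01": "Quarterly revenue forecast attached.",
    "memo-02": "Please shred the audit files before Friday.",
    "memo-03": "Lunch options for the offsite.",
}
print(flag_documents(docs, ["shred", "audit"]))  # ['memo-02']
```

The speed advantage over human review is obvious; so is the brittleness, since a document that avoids the listed terms passes silently.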

All these initiatives have some promise, and may make health care and legal advice more accessible. But they are also prone to errors, biases, and predictable malfunctions. Last year, the US Federal Trade Commission settled lawsuits against firms that claimed their software could aid in the detection of skin cancer by evaluating photographs of users' moles. The FTC argued that there was not sufficient evidence to support such claims. The companies are now prohibited from making any "health or disease claims" about the impact of the apps on the health of users unless they provide "reliable scientific evidence" grounded in clinical tests. If algorithms designed merely to inform patients aren't ready for prime time, why presume diagnostic robots are imminent?

Legal automation has also faced some serious critiques lately. The University of North Carolina legal scholar Dana Remus has questioned the value and legitimacy of the "predictive coding" now deployed in many discovery proceedings. She and co-author Frank S. Levy (of MIT) raise serious questions about more advanced applications of legal automation as well. The future cannot be completely anticipated in contracts, nor can difficult judgment calls be perfectly encoded into the often reductionist formulae of data processing. Errant divorce software may recently have caused thousands of miscalculations in UK proceedings, just as US software systems have disrupted or derailed proper dispositions of benefits applications.

Moreover, several types of opacity impede public understanding of algorithmic ranking and rating processes in even more familiar contexts, like credit scoring or search rankings. Consumers do not understand all the implications of the US credit scoring process, and things are about to get worse as "alternative" or "fringe" data moves into the lending mix for some startups. If the consequences of being late on a bill are not readily apparent to consumers, how can they hope to grasp new scoring systems that draw on their social media postings, location data, and hundreds of other data points? Companies face a parallel problem: many firms do not feel that Google, Facebook, and Amazon are playing a fair game in their algorithmic rankings of websites, ads, and products. These concerns, too, are stymied by widespread secrecy of both algorithms and the data fed into them.

In response, legal scholars have focused on remediable legal secrecy (curbing trade secrets and improving monitoring by watchdogs) and complexity (forbidding certain contractual arrangements when they become so complicated that regulators or citizens cannot understand their impact). I have recommended certain forms of transparency for software—for example, permitting experts to inspect code at suspect firms, and communications between managers and technical staff. The recent Volkswagen scandal served as yet another confirmation of the need for regulators to understand code.

But there is a larger lesson in these failures of algorithmic ordering. Rather than trying to replace the professions with robots and software, we should instead ask how professional expertise can better guide the implementation of algorithmic decision-making procedures. Ideally, doctors using software in medical settings should be able to inspect the inputs (data) that go into them, restrict the contexts in which they are used, and demand outputs that avoid disparate impacts. The same goes for attorneys, and other professionals now deploying algorithmic arrangements of information. We will be looking at “The Promise and Limits of Algorithmic Accountability in the Professions” at Yale Law School this Spring, and welcome further interventions to clarify the complementarity between professional and computational expertise.
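One concrete output check that professionals could demand is a disparate-impact test. A widely used heuristic (not named in the original post, and offered here only as an illustration) is the "four-fifths rule": if one group's favorable-outcome rate falls below 80% of another's, the system deserves scrutiny. The groups and data below are invented:

```python
# Minimal disparate-impact check: compare favorable-outcome rates
# between two groups and compute their ratio. A ratio below 0.8 is
# commonly treated as a red flag under the four-fifths heuristic.

def selection_rate(outcomes):
    """Fraction of favorable (True) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    low, high = sorted((ra, rb))
    return low / high

group_a = [True, True, True, False]    # 75% favorable outcomes
group_b = [True, False, False, False]  # 25% favorable outcomes
ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 2), "flag" if ratio < 0.8 else "ok")
```

Such a check is crude, but it illustrates the post's larger point: professionals who can inspect inputs and audit outputs are in a position to catch failures that an opaque system would bury.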

This post was originally published on the website of Nesta.


Frank Pasquale is Professor of Law at the University of Maryland Francis King Carey School of Law...


©2021 - ROBOTS Association

