Robohub.org
 

Legal artificial intelligence: Can it stand up in a court of law?

by Ronald Yu
21 February 2017




In his book Outliers, Malcolm Gladwell repeatedly mentions what has become known as the "10,000-hour rule", which states that to become world-class in any field you must devote 10,000 hours to "deliberate practice". Whether or not you accept the 10,000-hour figure, many would acknowledge that becoming an accomplished legal professional requires considerable legal, communicative and, particularly in in-house environments, interpersonal skills, typically acquired through tremendous effort over many years.

Enter artificial intelligence (AI)

There has been much hoopla about AI-based legal systems that, some might have you believe, may soon replace lawyers (no doubt causing a degree of anxiety among some legal professionals). There is some misunderstanding among many lawyers, and much of the public, about what AI systems are presently capable of. Can a legal AI, based on current technology, actually “think” like a lawyer? No. At best, today’s AI is an incomplete substitute for a human lawyer, although it could reduce the need for some lawyers (I’ll get to all that later).

However, something we should think seriously about right now is the long-term implication of the introduction of AI into the legal environment—notably the potential loss of legal wisdom.

Why doesn’t AI think like a human?

Let’s explore why AI doesn’t actually mimic the human brain. As an example, let’s look at automated translation systems such as those available from Google, Facebook or Microsoft. Such systems might appear to work the way human translators do, but what they actually do is match patterns derived from analyses of thousands, if not millions, of pages of text found on the web, employing a technology known as statistical machine translation. For instance, if such a system wants to know how to translate the English greeting “hello” into French, it scans English and French translations on the web, statistically analyses the correlations between “hello” and various French greetings, then comes to the conclusion that the French equivalent of “hello” is “bonjour”.
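The co-occurrence counting described above can be sketched in a few lines. This is a deliberately simplified illustration, not how Google's or Microsoft's production systems work: the toy parallel corpus and the `best_translation` helper are invented for this example, and real statistical machine translation uses far more sophisticated alignment models over millions of sentence pairs.

```python
from collections import Counter
from itertools import product

# Toy parallel corpus (hypothetical): English/French sentence pairs.
parallel = [
    ("hello my friend", "bonjour mon ami"),
    ("hello to you", "bonjour a toi"),
    ("goodbye my friend", "au revoir mon ami"),
]

# Count how often each (English word, French word) pair co-occurs
# in aligned sentences.
cooc = Counter()
for en, fr in parallel:
    for e, f in product(en.split(), fr.split()):
        cooc[(e, f)] += 1

def best_translation(word):
    """Pick the French word most often seen alongside `word`."""
    candidates = {f: n for (e, f), n in cooc.items() if e == word}
    return max(candidates, key=candidates.get)

print(best_translation("hello"))  # -> bonjour
```

The point of the sketch is that "bonjour" emerges purely from statistical correlation across examples; at no step does the program understand what a greeting is.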

Current AI is good at this kind of pattern matching, but less so at cognition and deductive reasoning. Consider the human brain: not only does it store a vast number of associations and access useful memories (sometimes quickly, sometimes not), it also transforms sensory and other information into generalisable representations that are invariant to unimportant changes, stores episodic memories and generalises learned examples into understanding. These are key cognitive capabilities yet to be matched by current AI technology.

Thus, while present AI-based legal systems might analyse judicial decisions—for example, to help litigators gain insights into a judge's behaviour or a barrister's track record—they do so by scrutinising existing data to reveal patterns, not by extrapolating from the content of those decisions the way an experienced human legal professional might.

The temptation to make redundant

As AI systems become more capable, the temptation grows to use them not only to supplement but also to eliminate the need for some personnel. An AI system weak in cognition but strong in pattern matching probably could not replace an experienced professional when it comes to drawing inferences, reasoning deductively or combining different practice areas to arrive at more comprehensive solutions. However, it could perform certain tasks that have hitherto been delegated to lower-level staff—such as paralegals, trainees and junior associates—for example, searching documents for patterns of words during evidence gathering, and do so better than any human could.
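The kind of rote document search described above can be illustrated with a short sketch. The document names, their contents and the proximity pattern are all invented for this example; real e-discovery platforms use much richer techniques (concept clustering, predictive coding), but the underlying idea of mechanically flagging word patterns is the same.

```python
import re

# Hypothetical discovery task: flag documents that mention a payment
# near a named party -- the sort of search once done by junior staff.
documents = {
    "email_001.txt": "Per our call, wire the payment to Acme Ltd by Friday.",
    "memo_014.txt":  "Lunch minutes; no action items.",
    "email_027.txt": "Acme invoice settled; payment confirmed yesterday.",
}

# A simple proximity pattern: "payment" within 40 characters of "Acme",
# in either order.
pattern = re.compile(r"(Acme.{0,40}payment|payment.{0,40}Acme)", re.IGNORECASE)

hits = [name for name, text in documents.items() if pattern.search(text)]
print(hits)  # -> ['email_001.txt', 'email_027.txt']
```

A machine applies such a pattern tirelessly across millions of documents, which is precisely why this layer of work is the first to be automated away from junior staff.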

While one might argue that the introduction of AI systems will lighten the workload of legal professionals and thereby improve their quality of life, it also potentially diminishes the need for junior legal staff, which would only exacerbate the oversupply problem in the legal profession.

Shrink now, suffer later?

If fewer junior legal professionals are hired, this implies a smaller population of lower-level staff, and thus a smaller feeder pool for more senior positions. And, as more tasks are automated, junior legal professionals could be deprived of opportunities to gain important experience—ie, to get their 10,000 hours. Will this result in fewer high-quality, experienced legal professionals in the future?

And the future of legal AI?

There are yet two more (albeit related) things to think about.

First: the development and maintenance of a good AI system requires both technical and legal competency. Put another way, a legal AI system programmed by systems experts ignorant of the law will be seriously, if not fatally, flawed. Thus, if we want to continue to develop more capable legal AI systems, good content providers—ie, good lawyers—will be needed.

Second: as laws, the legal business and social environments in their respective jurisdictions evolve, developments will emerge that might not have been anticipated just a few years earlier. Only the very best legal and other minds will be able to cope with some of these developments—and update the relevant legal AI systems accordingly. For example, when the US passed the Leahy-Smith America Invents Act (AIA) in 2011, it introduced new review procedures for existing patents with the intent of improving patent quality. It also had several unintended consequences, including the use of those procedures by hedge funds to invalidate patents in order to move the stock price of the companies holding them, and a negative impact on inventors. Updating an AI system to properly incorporate these developments requires not only a deep understanding of US patent law but also a perspective on patents, finance and the impact of patent policy and procedures on innovation—something that can only really be appreciated after years of experience. Moreover, this is something that could not have been programmed into an AI system half a decade ago, and such content could probably not have been provided to an AI developer by a less capable, less experienced legal professional.

So what, if anything, can be done?

Sadly, there are no easy answers. Graduating fewer lawyers might alleviate the problem of oversupply, but would also result in unemployment at educational institutions.

While the government (or a government-backed NGO) could establish some sort of training centre for under-employed junior lawyers, where these professionals could offer pro bono services to build their experience, this smacks of government interference in the private practice market. But we need to start thinking of solutions now. The introduction of AI into the legal profession, and the prospect of putting more lawyers out of work, could have profound implications for legal AI systems and the profession as a whole.





Ronald Yu is a board member of the International Intellectual Property Commercialization Council...




