Robohub.org
 

Legal artificial intelligence: Can it stand up in a court of law?


by Ronald Yu
21 February 2017




In his book Outliers, Malcolm Gladwell repeatedly mentions what has become known as the “10,000-hour rule”, which states that to become world-class in any field you must devote 10,000 hours to “deliberate practice”. Whether or not you believe the 10,000-hour figure, many would acknowledge that becoming an accomplished legal professional requires considerable legal, communicative and, particularly in in-house environments, interpersonal skills that are often acquired only after a tremendous amount of effort exerted over many years.

Enter artificial intelligence (AI)

There has been much hoopla about AI-based legal systems that, some might have you believe, may soon replace lawyers (no doubt causing a degree of anxiety among some legal professionals). There is some misunderstanding among many lawyers, and much of the public, about what AI systems are presently capable of. Can a legal AI, based on current technology, actually “think” like a lawyer? No. At best, today’s AI is an incomplete substitute for a human lawyer, although it could reduce the need for some lawyers (I’ll get to all that later).

However, something we should think seriously about right now is the long-term implication of the introduction of AI into the legal environment—notably the potential loss of legal wisdom.

Why doesn’t AI think like a human?

Let’s explore why AI doesn’t actually mimic the human brain. As an example, let’s look at automated translation systems such as those available from Google, Facebook or Microsoft. Such systems might appear to work the way human translators do, but what they actually do is match patterns derived from analyses of thousands, if not millions, of pages of text found on the web, employing a technology known as statistical machine translation. For instance, if such a system wants to know how to translate the English greeting “hello” into French, it scans English and French translations on the web, statistically analyses the correlations between “hello” and various French greetings, then comes to the conclusion that the French equivalent of “hello” is “bonjour”.
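The co-occurrence idea behind statistical machine translation can be sketched in a few lines of code. This is a drastically simplified illustration on a made-up three-sentence parallel corpus, not the alignment models real systems use; the sentences and function name are purely for demonstration.

```python
from collections import Counter

# Toy parallel corpus: (English sentence, French sentence) pairs.
# Real systems mine millions of such pairs from the web.
corpus = [
    ("hello my friend", "bonjour mon ami"),
    ("hello to you", "bonjour à vous"),
    ("good morning my friend", "bon matin mon ami"),
]

def translation_candidates(english_word, corpus):
    """Count how often each French word co-occurs with the English word."""
    counts = Counter()
    for en, fr in corpus:
        if english_word in en.split():
            counts.update(fr.split())
    return counts

# "bonjour" appears in every pair containing "hello", so it tops the ranking.
print(translation_candidates("hello", corpus).most_common(1))
```

The system never “understands” either language; “bonjour” simply wins the statistics, which is exactly the pattern matching described above.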

Current AI is good at this kind of pattern matching, but less so at cognition and deductive reasoning. Consider the human brain: not only does it store a large number of associations and access useful memories (sometimes quickly, sometimes not), it also transforms sensory and other information into generalisable representations invariant to unimportant changes, stores episodic memories and generalises learned examples into understanding. These are key cognitive capabilities yet to be matched by current AI technology.

Thus, while present AI-based legal systems might analyse judicial decisions—for example, to help litigators gain insights into a judge’s behaviour or a barrister’s track record—they do so by scrutinising existing data to reveal patterns, and not by extrapolating from the content of those decisions the way an experienced human legal professional might.

The temptation to make redundant

As AI systems become more capable, the temptation grows to use such systems not only to supplement but also to eliminate the need for some personnel. An AI system weak in cognition but strong in pattern matching probably could not replace an experienced professional in terms of drawing inferences, deductive reasoning or combining different practice areas to arrive at more comprehensive solutions. However, it could perform certain tasks—such as searching for patterns of words in documents for evidence gathering—that have hitherto been delegated to lower level staff such as paralegals, trainees and junior associates, and do so better than any human could.
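The document-search task mentioned above can be sketched with ordinary pattern matching. This is a minimal, hypothetical illustration: the file names, document text and search terms are invented for the example, and a real e-discovery system would add ranking, deduplication and human review on top.

```python
import re

# Hypothetical document snippets, standing in for a discovery corpus.
documents = {
    "email_001.txt": "Please shred the Q3 audit files before Friday.",
    "memo_014.txt": "The Q3 audit confirmed revenue figures were accurate.",
    "email_202.txt": "Lunch on Friday?",
}

# A pattern a review team might flag: document-destruction language
# appearing before a mention of the audit. Terms are illustrative only.
pattern = re.compile(r"\b(shred|delete|destroy)\b.*\baudit\b", re.IGNORECASE)

# Collect every document matching the pattern.
hits = {name: text for name, text in documents.items() if pattern.search(text)}
for name in hits:
    print(name)
```

A machine applies such patterns across millions of documents tirelessly and consistently—precisely the kind of first-pass review work once handed to junior staff.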

While one might argue that the introduction of AI systems will lighten the workload of legal professionals and thereby improve their quality of life, it also potentially diminishes the need for junior legal staff, which would only exacerbate the oversupply problem in the legal profession.

Shrink now, suffer later?

If fewer junior legal professionals are hired, this implies a smaller population of lower level staff, and thus a smaller feeder pool for more senior positions. And, as more tasks are automated, this could deprive junior legal professionals of opportunities to gain important experience—ie, get their 10,000 hours. Will this result in fewer high-quality, experienced legal professionals in the future?

And the future of legal AI?

There are yet two more (albeit related) things to think about.

First: the development and maintenance of a good AI system requires both technical and legal competency. Put another way, a legal AI system programmed by systems experts ignorant of the law will be seriously, if not fatally, flawed. Thus, if we want to continue to develop more capable legal AI systems, good content providers—ie, good lawyers—will be needed.

Second: as laws, the legal business and social environments in their respective jurisdictions evolve, developments will emerge that might not have been anticipated just a few years earlier. Only the very best legal and other minds will be able to cope with some of these developments—and update the relevant legal AI systems accordingly. For example, when the US passed the Leahy-Smith America Invents Act (AIA) in 2011, it introduced new review procedures for existing patents with the intent of improving patent quality. It also had several unintended consequences, including hedge funds using those procedures to invalidate patents in order to affect the stock price of the companies holding them, and a negative impact on inventors. Updating an AI system to properly incorporate these developments requires not only a deep understanding of US patent law but also a perspective on patents, finance and the impact of patent policy and procedures on innovation—something that can only really be appreciated after years of experience. Moreover, this is something that could not have been programmed into an AI system half a decade ago, and such content could probably not have been provided to an AI developer by a less capable, less experienced legal professional.

So what, if anything, can be done?

Sadly, there are no easy answers. Graduating fewer lawyers might alleviate the problem of oversupply, but would also result in unemployment at educational institutions.

While the government (or a government-backed NGO) could establish some sort of training centre for under-employed junior lawyers, where these professionals could offer services pro bono to build their experience, this also smacks of government interference in the private practice market. But we need to start thinking of solutions now. The introduction of AI into the legal profession, and the prospect of putting more lawyers out of work, could have profound implications for legal AI systems and the profession as a whole.





Ronald Yu is a board member of the International Intellectual Property Commercialization Council...






 

©2025.05 - Association for the Understanding of Artificial Intelligence


 











