Artificial intelligence could transform healthcare, but we need to accept it first

by Owen A Johnson
19 August 2016




Scientists in Japan reportedly saved a woman’s life by applying artificial intelligence to help them diagnose a rare form of cancer. Faced with a 60-year-old woman whose cancer was not responding to treatment, they supplied an AI system with huge amounts of clinical cancer case data, and in just ten minutes it diagnosed the rare leukaemia that had stumped the clinicians.

The Watson AI system from IBM matched the patient’s symptoms against 20m clinical oncology studies, covering symptoms, treatment and response, uploaded by a team headed by Arinobu Tojo at the University of Tokyo’s Institute of Medical Science. The Memorial Sloan Kettering Cancer Center in New York has carried out similar work, where teams of clinicians and data analysts trained Watson’s machine learning capabilities with oncological data in order to focus its predictive and analytic capabilities on diagnosing cancers.

IBM Watson first became famous when it won the US television game show Jeopardy in 2011. And IBM’s previous generation AI, Deep Blue, became the first AI to best a world champion at chess when it beat Garry Kasparov in a single game in 1996, and then won an entire match when they met again the following year. From a perspective of technological determinism, it may seem inevitable that AI has moved from chess to cancer in 20 years. Of course, it has taken a lot of hard work to get it there.

But efforts to use artificial intelligence, machine learning and big data in healthcare contexts have not been uncontroversial. On the one hand, there is wild enthusiasm – lives saved by data, new medical breakthroughs, and a world of personalised medicine tailored to meet our needs by deep learning algorithms fed by smartphones and Fitbit wearables. On the other, there’s considerable scepticism – a lack of trust in machines, the importance of individuals over statistics, privacy concerns over patient records and medical confidentiality, and generalised fears of a Brave New World. Too often the debate dissolves into anecdote rather than science, or focuses on the breakthrough rather than the hard slog that led to it. Of course, the reality will be somewhere in the middle.


There’s not just a technical battle to win

In fact, it may surprise you to learn that the world’s first computerised clinical decision-support system, AAPhelp, was developed in the UK way back in 1972 by Tim De Dombal and one of my colleagues, Susan Clamp.

This early precursor to today’s genius AIs used a naive Bayesian algorithm to compute the likely cause of acute abdominal pain based on patient symptoms. Feeding the system with more symptoms and diagnoses helped it become more accurate over time and, by 1974, De Dombal’s team had trained it to the point where it was more accurate at diagnosis than junior doctors, and almost as accurate as the most senior consultants. It took AAPhelp overnight to produce a diagnosis, but that was on 1970s computer hardware.
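To make the approach concrete, here is a minimal sketch of the naive Bayes idea in Python. All the symptom and diagnosis names and the training cases are invented for illustration; AAPhelp’s actual categories, data and implementation were of course different.

```python
from collections import defaultdict

# Minimal naive Bayes diagnostic sketch. All symptom and diagnosis
# names below are hypothetical illustrations, not AAPhelp's categories.

# Training cases: (set of observed symptoms, confirmed diagnosis).
cases = [
    ({"right_lower_pain", "fever", "nausea"}, "appendicitis"),
    ({"right_lower_pain", "nausea"}, "appendicitis"),
    ({"upper_pain", "nausea"}, "gastritis"),
    ({"upper_pain"}, "gastritis"),
    ({"generalised_pain", "fever"}, "non_specific_pain"),
]

ALL_SYMPTOMS = {"right_lower_pain", "upper_pain", "generalised_pain",
                "fever", "nausea"}

# Count diagnoses and symptom co-occurrences from the training cases.
diag_counts = defaultdict(int)
symptom_counts = defaultdict(lambda: defaultdict(int))
for symptoms, diagnosis in cases:
    diag_counts[diagnosis] += 1
    for s in symptoms:
        symptom_counts[diagnosis][s] += 1

def posterior(observed):
    """Return P(diagnosis | observed symptoms), assuming symptoms are
    conditionally independent given the diagnosis (the 'naive' part)."""
    total = sum(diag_counts.values())
    scores = {}
    for d, n in diag_counts.items():
        p = n / total                     # prior P(diagnosis)
        for s in ALL_SYMPTOMS:
            # Laplace smoothing keeps unseen combinations non-zero.
            p_s = (symptom_counts[d][s] + 1) / (n + 2)
            p *= p_s if s in observed else (1 - p_s)
        scores[d] = p
    z = sum(scores.values())              # normalise to a distribution
    return {d: p / z for d, p in scores.items()}

print(posterior({"right_lower_pain", "fever"}))
# Appendicitis scores highest for this input.
```

Adding each new confirmed case to the training set updates the counts, which is exactly why feeding the system more symptoms and diagnoses made it more accurate over time.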

The bad news is that 40 years on, AAPhelp is still not in routine use.

This is the reality check for the most ardent advocates of applying technology to healthcare: getting technology such as predictive AIs into clinical settings where they can save lives means tackling all those negative connotations and fears. AI challenges people and their attitudes: the professionals whom the machine can outperform, and the patients who are reduced to statistical probabilities to be fed into complex algorithms. Innovation in healthcare can take decades.

Nevertheless, though they are decades apart, the achievements of both AAPhelp and IBM Watson demonstrate that computers can save lives. But the use of big data in healthcare implies that patient records, healthcare statistics, and all manner of other personal details might be used by researchers to train the AIs to make diagnoses. People are increasingly sensitive to the way personal data is used and, quite rightly, expect the highest standards of ethics, governance, privacy and security to be applied. The revelation that one NHS trust had given Google’s DeepMind AI laboratory access to 1.6m identifiable patient records didn’t go down well when it was reported a few months ago.

The hard slog is not creating the algorithms; it is the patience and determination required to conduct careful work under the highest standards of data protection and scientific rigour. At the University of Leeds’ Institute for Data Analytics we recently used IBM Watson Content Analytics software to analyse 50m pathology and radiology reports from the UK. Recognising the sensitivities, we brought IBM Watson to the data rather than passing the data to IBM.

Using natural language processing of the text reports, we double-checked diagnoses such as brain metastases, HER2-positive breast cancers and renal hydronephrosis (swollen kidneys), with accuracy rates already over 90%. Over the next two years we’ll be developing these methods in order to embed these machine learning techniques into routine clinical care, at a scale that benefits the whole of the NHS.
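As a rough illustration of the kind of text mining involved, here is a toy Python sketch that flags diagnosis mentions in free-text reports while skipping simple negations. The patterns and the sample report are hypothetical; the actual work used IBM Watson Content Analytics with far richer linguistic rules and clinical validation.

```python
import re

# Toy keyword-and-negation checker for free-text clinical reports.
# Patterns and the sample report are invented for illustration; this
# is not the IBM Watson Content Analytics pipeline used at Leeds.

PATTERNS = {
    "brain_metastases": re.compile(r"\bbrain metastas[ei]s\b", re.I),
    "her2_positive_breast_cancer": re.compile(r"\bHER-?2[\s:]*(positive|\+)", re.I),
    "renal_hydronephrosis": re.compile(r"\bhydronephrosis\b", re.I),
}

# Very crude negation cue: a trigger word earlier in the same sentence.
NEGATION = re.compile(r"\b(no|not|without|negative for)\b", re.I)

def extract_findings(report: str):
    """Return the set of diagnosis labels asserted (not negated) in a report."""
    findings = set()
    for sentence in report.split("."):
        for label, pattern in PATTERNS.items():
            match = pattern.search(sentence)
            if match and not NEGATION.search(sentence[:match.start()]):
                findings.add(label)
    return findings

print(extract_findings(
    "Imaging demonstrates brain metastases. Negative for hydronephrosis."
))  # -> {'brain_metastases'}
```

Accuracy figures like the 90% quoted above come from comparing such extracted labels against confirmed diagnoses, which is the double-checking described.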

While we’ve had £12m investment for our facilities and the work we’re doing, we’re not claiming to have saved lives yet. The hard battle is first to win hearts and minds – and on that front there’s still a lot more work to be done.

This article was originally published on The Conversation. Read the original article.

Disclosure statement

Owen A Johnson receives research funding from MRC, EPSRC, NIHR, the NHS and InnovateUK. He is a director of X-Lab Ltd., an e-health software company focused on disruptive innovation in healthcare.







Owen A Johnson is a Senior Fellow at the University of Leeds.












 











