Robohub.org

Artificial intelligence could transform healthcare, but we need to accept it first

by Owen A Johnson
19 August 2016




Scientists in Japan reportedly saved a woman’s life by applying artificial intelligence to help them diagnose a rare form of cancer. Faced with a 60-year-old woman whose cancer was unresponsive to treatment, they supplied an AI system with huge amounts of clinical cancer case data, and in just ten minutes it diagnosed the rare leukemia that had stumped the clinicians.

The Watson AI system from IBM matched the patient’s symptoms against 20m clinical oncology studies uploaded by a team headed by Arinobu Tojo at the University of Tokyo’s Institute of Medical Science that included symptoms, treatment and response. The Memorial Sloan Kettering Cancer Center in New York has carried out similar work, where teams of clinicians and data analysts trained Watson’s machine learning capabilities with oncological data in order to focus its predictive and analytic capabilities on diagnosing cancers.
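As a rough intuition for this kind of symptom-to-literature matching, consider a toy bag-of-words retrieval sketch. Everything below is invented for illustration — the one-line “studies”, the patient notes and the scoring — and Watson’s actual pipeline is far richer than cosine similarity over word counts:

```python
from collections import Counter
from math import sqrt

# Invented one-line stand-ins for clinical studies (not real data).
STUDIES = [
    "acute myeloid leukemia fatigue anemia abnormal blast cells",
    "secondary leukemia following chemotherapy myelodysplastic features",
    "breast carcinoma her2 positive lymph node involvement",
]

def vec(text):
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_match(patient_notes, studies):
    """Return the study whose vocabulary best overlaps the patient notes."""
    q = vec(patient_notes)
    return max(studies, key=lambda s: cosine(q, vec(s)))

print(best_match("fatigue anemia abnormal blast cells", STUDIES))
```

Even this crude sketch shows why scale matters: the more annotated cases the system can compare against, the more likely a rare presentation finds a close match.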

IBM Watson first became famous when it won the US television game show Jeopardy in 2011. And IBM’s previous generation AI, Deep Blue, became the first AI to best a world champion at chess when it beat Garry Kasparov in a game in 1996 and the entire match when they met again the following year. From a perspective of technological determinism, it may seem inevitable that AI has moved from chess to cancer in 20 years. Of course, it has taken a lot of hard work to get it there.

But efforts to use artificial intelligence, machine learning and big data in healthcare contexts have not been uncontroversial. On the one hand, there is wild enthusiasm – lives saved by data, new medical breakthroughs, and a world of personalised medicine tailored to meet our needs by deep learning algorithms fed by smartphones and Fitbit wearables. On the other, there’s considerable scepticism – a lack of trust in machines, the importance of individuals over statistics, privacy concerns over patient records and medical confidentiality, and generalised fears of a Brave New World. Too often the debate dissolves into anecdote rather than science, or focuses on the breakthrough rather than the hard slog that led to it. Of course, the reality will be somewhere in the middle.


There’s not just a technical battle to win

In fact, it may surprise you to learn that the world’s first computerised clinical decision-support system, AAPhelp, was developed in the UK way back in 1972 by Tim De Dombal and one of my colleagues, Susan Clamp.

This early precursor to the genius AI of today used a naive Bayesian algorithm to compute the likely cause of acute abdominal pain based on patient symptoms. Feeding the system with more symptoms and diagnoses helped it become more accurate over time and, by 1974, De Dombal’s team had trained the system to the point where it was more accurate at diagnosis than junior doctors, and almost as accurate as the most senior consultants. AAPhelp needed to run overnight to produce a diagnosis, but this was on 1970s computer hardware.
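The core of a naive Bayesian diagnostic model of the kind AAPhelp pioneered fits in a few lines. The symptoms, diagnoses and training cases below are invented placeholders, not De Dombal’s data — the point is only the mechanics: estimate P(diagnosis) and P(symptom | diagnosis) from past cases, then multiply them for a new patient:

```python
from collections import defaultdict

# Hypothetical training cases: (observed symptoms, confirmed diagnosis).
# Entirely illustrative; not De Dombal's actual dataset.
CASES = [
    ({"rlq_pain", "nausea", "fever"}, "appendicitis"),
    ({"rlq_pain", "nausea"}, "appendicitis"),
    ({"epigastric_pain", "nausea"}, "gastritis"),
    ({"epigastric_pain"}, "gastritis"),
    ({"colicky_pain", "fever"}, "cholecystitis"),
]
SYMPTOMS = {"rlq_pain", "epigastric_pain", "colicky_pain", "nausea", "fever"}

def train(cases):
    """Estimate P(dx) and P(symptom | dx), with Laplace smoothing."""
    counts = defaultdict(int)
    sym_counts = defaultdict(lambda: defaultdict(int))
    for symptoms, dx in cases:
        counts[dx] += 1
        for s in symptoms:
            sym_counts[dx][s] += 1
    priors = {dx: n / len(cases) for dx, n in counts.items()}
    likelihoods = {
        dx: {s: (sym_counts[dx][s] + 1) / (counts[dx] + 2) for s in SYMPTOMS}
        for dx in counts
    }
    return priors, likelihoods

def diagnose(observed, priors, likelihoods):
    """Posterior over diagnoses: P(dx) * prod of P(s|dx) or (1 - P(s|dx))."""
    scores = {}
    for dx, prior in priors.items():
        p = prior
        for s in SYMPTOMS:
            p_s = likelihoods[dx][s]
            p *= p_s if s in observed else (1 - p_s)
        scores[dx] = p
    total = sum(scores.values())
    return {dx: p / total for dx, p in scores.items()}

priors, likelihoods = train(CASES)
posterior = diagnose({"rlq_pain", "fever"}, priors, likelihoods)
print(max(posterior, key=posterior.get))
```

The “naive” assumption — that symptoms are independent given the diagnosis — is clinically wrong but computationally cheap, which is what made the approach feasible on 1970s hardware.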

The bad news is that 40 years on, AAPhelp is still not in routine use.

This is the reality check for the most ardent advocates of applying technology to healthcare: to get technology such as predictive AIs into clinical settings where they can save lives means tackling all those negative connotations and fears. AI challenges people and their attitudes: the professionals that the machine can outperform, and the patients that are reduced to statistical probabilities to be fed into complex algorithms. Innovation in healthcare can take decades.

Nevertheless, though their achievements came decades apart, both AAPhelp and IBM Watson demonstrate that computers can save lives. But the use of big data in healthcare implies that patient records, healthcare statistics, and all manner of other personal details might be used by researchers to train the AIs to make diagnoses. People are increasingly sensitive to the way personal data is used and, quite rightly, expect the highest standards of ethics, governance, privacy and security to be applied. The revelations that one NHS trust had given access to 1.6m identifiable patient records to Google’s DeepMind AI laboratory didn’t go down well when reported a few months ago.

The hard slog is not creating the algorithms, but the patience and determination required to conduct careful work within the restrictions of applying the highest standards of data protection and scientific rigour. At the University of Leeds’ Institute for Data Analytics we recently used IBM Watson Content Analytics software to analyse 50m pathology and radiology reports from the UK. Recognising the sensitivities, we brought IBM Watson to the data rather than passing the data to IBM.

Using natural language processing of the text reports we double-checked diagnoses such as brain metastases, HER-2-positive breast cancers and renal hydronephrosis (swollen kidneys) with accuracy rates already over 90%. Over the next two years we’ll be developing these methods in order to embed these machine learning techniques into routine clinical care, at a scale that benefits the whole of the NHS.
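To give a flavour of what extracting diagnoses from free-text reports involves, here is a toy dictionary-based annotator with a crude negation check. The term patterns and the sample report are invented, and this is not the Watson Content Analytics pipeline the Leeds work actually used — just a minimal sketch of the concept-spotting idea:

```python
import re

# Illustrative concept dictionary: concept -> regex patterns (invented).
CONCEPTS = {
    "brain_metastases": [r"brain metastas[ei]s", r"cerebral metastas[ei]s"],
    "her2_positive": [r"her-?2[ /]?(neu )?positive", r"her-?2 3\+"],
    "hydronephrosis": [r"hydronephrosis", r"dilat\w+ renal pelvis"],
}

# Crude negation cue: "no", "without", "negative for" shortly before a match.
NEGATION = re.compile(r"\b(no|without|negative for)\b[^.]{0,40}$")

def annotate(report: str):
    """Return the set of concepts asserted (not negated) in a report."""
    text = report.lower()
    found = set()
    for concept, patterns in CONCEPTS.items():
        for pat in patterns:
            for m in re.finditer(pat, text):
                # Look at up to 40 characters before the match, within the
                # same sentence, for a negation cue.
                window = text[max(0, m.start() - 40):m.start()]
                if not NEGATION.search(window):
                    found.add(concept)
    return found

report = ("MRI head: two enhancing lesions consistent with brain metastases. "
          "No hydronephrosis. Tumour is HER-2 positive.")
print(annotate(report))
```

Handling negation, hedging (“cannot exclude…”) and abbreviations correctly across tens of millions of real reports is exactly the kind of careful validation work that pushes accuracy past 90% — and why it takes years rather than an afternoon.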

While we’ve had £12m investment for our facilities and the work we’re doing, we’re not claiming to have saved lives yet. The hard battle is first to win hearts and minds – and on that front there’s still a lot more work to be done.

This article was originally published on The Conversation. Read the original article.

Disclosure statement

Owen A Johnson receives research funding from MRC, EPSRC, NIHR, the NHS and InnovateUK. He is a director of X-Lab Ltd., an e-health software company focused on disruptive innovation in healthcare.







Owen A Johnson is a Senior Fellow at the University of Leeds.










©2021 - ROBOTS Association


 











