What’s next for AI in healthcare?
Artificial intelligence (AI) in health care has arrived, bringing enormous potential to change how care is delivered, but experts writing in the Medical Journal of Australia are asking whether we are ready.
“AI, machine learning, and deep neural network tools can assist medical decision making and management, and have already permeated into at least three different levels: AI-assisted image interpretation; AI-assisted diagnosis; and, AI-assisted prediction and prognostication,” wrote the authors.
These authors included Joseph Sung, the Mok Hing Yiu Professor of Medicine at the Chinese University of Hong Kong, Cameron Stewart, Professor of Health, Law and Ethics at the University of Sydney, and Professor Ben Freedman, the Deputy Director of Research Strategy at the Heart Research Institute and the University of Sydney’s Charles Perkins Centre and Concord Clinical School.
“From diagnosing retinopathy to cardiac arrhythmias, from screening for skin cancer to breast cancer, from predicting outcome of stroke to self-management of chronic diseases, AI and machine learning devices can replace many time-consuming, labour-intensive, repetitive and mundane tasks of clinicians and give possible suggestions of management plans,” Sung and colleagues wrote.
The quality of AI in health care depends on the quality of the data on which it is built.
“Algorithms are being developed and validated on data generated by health care systems where current practices may already be inequitable,” they wrote.
“A system built on poor-quality, biased data will reflect those problems (‘garbage in, garbage out’). If a health care system has excluded populations of patients, the structural inequalities of health care will be repeatedly reinforced by the AI.”
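The mechanism behind "garbage in, garbage out" is easy to demonstrate. The following minimal sketch, which is an illustration rather than anything from the MJA article, uses scikit-learn and simulated data (the groups, the biomarker, and all numbers are hypothetical assumptions) to show how a model trained on data that under-represents one patient group can look accurate overall while failing that group:

```python
# Toy illustration of "garbage in, garbage out": a classifier trained on
# data that under-represents one patient group performs worse for that group.
# All groups, features, and parameters here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, risk_shift):
    """Simulate one biomarker and a disease label; risk_shift moves the
    true decision threshold for this group (a hypothetical difference)."""
    x = rng.normal(0.0, 1.0, size=(n, 1))
    y = (x[:, 0] + risk_shift + rng.normal(0.0, 0.5, size=n) > 0).astype(int)
    return x, y

# Group A dominates the training data; group B is almost absent.
xa, ya = make_patients(2000, risk_shift=0.0)
xb, yb = make_patients(20, risk_shift=1.0)
X_train = np.vstack([xa, xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced held-out samples from each group: the model learns
# group A's threshold and systematically misclassifies part of group B.
for name, shift in [("group A", 0.0), ("group B", 1.0)]:
    x_test, y_test = make_patients(1000, risk_shift=shift)
    print(name, "accuracy:", round(model.score(x_test, y_test), 2))
```

In this toy setup the model reports high accuracy for the well-represented group and noticeably lower accuracy for the excluded one, which is exactly the structural reinforcement the authors warn about.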
AI is built on access to big data.
“Big data in health care is primarily generated by public health systems, funded by the public for the public. Increasingly, claims over the health data generated by these public systems are being contested,” Sung and colleagues wrote.
“Issues of data sovereignty threaten the existence of effective AI. Patient data should not be provided to technology giants without a good governance structure to protect data sovereignty.”
Changing standards of care
“If AI keeps its promise of benefit and it is integrated more into practice, standards of care must require AI use, and traditional forms of therapeutics will be forced to change.
“We will see a time when all medicine and allied health work as a team with AI. Those who refuse to partner with AI might be replaced by it.”
AI-caused injury
“A doctor using AI should be responsible for AI decisions made in the course of treatment, especially if the doctor retains the power to make the final decision regarding treatment,” wrote Sung and colleagues.
“But as AI takes on more autonomous decision making, it might be argued by some doctors that they should not be responsible for that which they cannot control. Similarly, it seems unfair for doctors to be held responsible for an AI decision when they are unable to deduce how and why that decision was made.
“A stepwise gradation model of shared responsibility between the human doctor and the machine in diagnosis and clinical management has been proposed.”
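The article does not spell out how such a gradation model would operate, but one way to picture it is as an explicit mapping from the AI's level of autonomy to each party's share of responsibility. The sketch below is purely illustrative: the levels, names, and percentages are assumptions for the sake of the example, not a proposal from the authors:

```python
# Hypothetical sketch of a stepwise shared-responsibility model: the more
# autonomously the AI acts, the smaller the doctor's share of responsibility.
# Every level and weight below is an illustrative assumption.
from enum import Enum

class AutonomyLevel(Enum):
    ASSISTIVE = 1    # AI flags findings; the doctor makes every decision
    ADVISORY = 2     # AI recommends; the doctor reviews and decides
    CONDITIONAL = 3  # AI decides routine cases; the doctor handles exceptions
    AUTONOMOUS = 4   # AI decides and acts; the doctor provides oversight only

# Illustrative (doctor, AI provider) responsibility shares per level.
RESPONSIBILITY_SHARE = {
    AutonomyLevel.ASSISTIVE: (0.9, 0.1),
    AutonomyLevel.ADVISORY: (0.7, 0.3),
    AutonomyLevel.CONDITIONAL: (0.4, 0.6),
    AutonomyLevel.AUTONOMOUS: (0.1, 0.9),
}

def describe(level: AutonomyLevel) -> str:
    doctor, provider = RESPONSIBILITY_SHARE[level]
    return f"{level.name}: doctor {doctor:.0%}, AI provider {provider:.0%}"

for level in AutonomyLevel:
    print(describe(level))
```

The design point is simply that responsibility is assigned stepwise by how much decision-making has been delegated, rather than falling entirely on the doctor regardless of the machine's autonomy.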
Sung and colleagues concluded that before AI tools can be put into daily use in medicine, “data quality and ownership, transparency in governance, trust-building in black box medicine, and legal responsibility for mishaps are some of the hurdles that need to be resolved”.
Open Forum is a policy discussion website produced by Global Access Partners – Australia’s Institute for Active Policy. We welcome contributions and invite you to submit a blog to the editor and follow us on Facebook, LinkedIn and Mastodon.