WHO warns of potential AI risks for healthcare

WHO warns of documented risks that LMMs could produce false, inaccurate, biased or incomplete outcomes
A representational image. — Canva

The World Health Organisation (WHO) has warned of the pitfalls of rushing to embrace artificial intelligence (AI), despite the technology's potential to transform healthcare through advances such as drug development and quicker diagnoses.

The UN health agency has been closely examining the likely benefits and risks of large multi-modal models (LMMs), a relatively new form of AI.

In generative AI, algorithms trained on data sets are used to produce new content.

LMMs are a type of generative AI which can use multiple types of data input, including text, images and video, and generate outputs that are not limited to the type of data fed into the algorithm.


Addressing a press conference, WHO Digital Health and Innovation Director Alain Labrique said: "Some say this mimics human thinking and behaviour, and the way it engages in interactive problem-solving."

The WHO said LMMs were predicted to see wide use and application in health care, scientific research, public health and drug development.

The UN health agency outlined five broad areas where the technology could be applied.

These are diagnosis, such as responding to patients' written queries; scientific research and drug development; medical and nursing education; clerical tasks; and patient-guided use, such as investigating one's own symptoms.

While this holds potential, WHO warned there were documented risks that LMMs could produce false, inaccurate, biased or incomplete outcomes.

They might also be trained on poor-quality data, or data containing biases relating to race, ethnicity, ancestry, sex, gender identity or age.


"As LMMs gain broader use in health care and medicine, errors, misuse and ultimately harm to individuals are inevitable," the WHO cautioned.

They could also lead to "automation bias", where users rely blindly on the algorithm, even when they have good grounds to disagree with its output.

"Generative AI technologies have the potential to improve health care but only if those who develop, regulate and use these technologies identify and fully account for the associated risks," said WHO chief scientist Jeremy Farrar.

"We need transparent information and policies to manage the design, development and use of LMMs."