WHO Offers Recommendations for AI Guidelines in Healthcare
In a new publication, the World Health Organization (WHO) has compiled key regulatory considerations for artificial intelligence (AI). Its experts emphasize the importance of verifying the safety and efficacy of AI systems.
In the publication, the WHO sets out what it considers the most important principles that governments and regulatory authorities can use as a basis for developing new AI guidelines, or adapting existing ones, at a national or regional level.
“The introduction of AI in medicine is very promising,” said WHO Director-General Tedros Adhanom Ghebreyesus, PhD. At the same time, AI carries risks, including unethical data collection, cyberattacks, and malfunctions. “These new guidelines will help countries to effectively regulate AI and utilize its potential, whether it is to treat cancer or recognize tuberculosis, and at the same time minimize any associated risks,” said Ghebreyesus.
Laboratory to Practice
The WHO publication has arrived at just the right time: more health data are being collected than ever before. Cloud technology is making these data increasingly available. Moreover, rapid advancements are being made in analytical technologies, including machine learning and logic-based or statistical procedures.
The WHO sees a lot of potential in such technologies; they could optimize not only clinical research, but also diagnostics, therapy, and prevention. Physicians and other specialists could also use them to expand their expertise.
AI’s Weaknesses
“AI technologies, including large language models, are sometimes being deployed too rapidly, without full knowledge of how they work and how they could harm users,” wrote the WHO experts.
When utilizing health data, AI systems may also have access to sensitive personal details. The WHO proposes the development of technical and regulatory mechanisms to guarantee a high level of protection.
The WHO’s Demands
In response to countries’ growing need for a responsible approach to AI health technologies, WHO experts have outlined several areas that they say should be more strongly regulated:
- To build trust, transparency and documentation are hugely important. This includes documentation of the AI product’s entire life cycle and its development.
- Risk management must include issues such as proper use, continuous learning, human interventions, training models, and threats such as cyberattacks.
- The external validation of data and the clear definition of the intended purpose of a particular AI system help to ensure its safety and facilitate regulation.
- A commitment to data quality (eg, through rigorous analysis of the systems before release) is crucial to ensure that tools do not contain any bias or errors.
- Regulatory provisions must also deal with questions concerning privacy and data protection.
- Improved collaboration between regulatory authorities, patients, healthcare professionals, industry representatives, and government partners may help AI applications to remain compliant with the provisions throughout their life cycles.
Training the Applications
Whether AI systems fulfill their purpose depends not only on the code used to create them, but also on the data with which they are trained, such as data from clinical studies or databases of user data. “Better regulation can help to manage the risks of AI reinforcing biases in the training data,” wrote the WHO.
For example, it can be difficult for AI models to accurately represent the diversity of patient populations, which, in the worst case, can lead to bias, errors, or even failures of the AI. To minimize these risks, the WHO is calling for provisions to ensure that important characteristics such as sex or ethnicity are representatively reflected in the training data.
This article was translated from the Medscape German edition.