Data can be a 'force for evil,' AI and machine learning experts say
The COVID-19 pandemic has highlighted and exacerbated existing disparities in the healthcare system, including the consequences of bias for racialized or marginalized groups.
Some of the ways racial bias emerges in the healthcare system are more obvious, such as horror stories of Black people being turned away at emergency departments.
Others, experts said during the HIMSS Machine Learning and AI for Healthcare Digital Summit this week, are less visible – but can still be incredibly harmful.
“There are other ways this bias manifests structurally that are not as potentially sort of obvious,” said Kadija Ferryman, an industry assistant professor of ethics and engineering at the NYU Tandon School of Engineering, at a panel on Tuesday. “That is through informatics and data.”
For instance, COVID-19 is a disease that attacks the respiratory system, meaning clinicians rely on devices that measure lung capacity and other related patient data, said Ferryman. But those devices themselves may have “corrections” based on a patient’s race built into their interpretations, which can be difficult to detect.
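To make that concrete, here is a minimal sketch of how such a built-in correction can work. The toy reference equation, the 0.88 adjustment factor and the 80% threshold below are all illustrative assumptions, loosely patterned on the race-specific reference equations historically used in spirometry, not values from any real device.

```python
# Hypothetical sketch of a race-based "correction" baked into a
# lung-function interpretation. The toy coefficients, the 0.88
# adjustment and the 80% threshold are illustrative assumptions,
# not values from any real device or guideline.

def predicted_fev1_liters(age: int, height_cm: float, race: str) -> float:
    """Predicted 'normal' FEV1 from a toy reference equation."""
    predicted = 0.043 * height_cm - 0.029 * age - 2.49  # toy coefficients
    if race == "Black":
        predicted *= 0.88  # the hidden correction: lowers the expected value
    return predicted

def interpret(measured_fev1: float, age: int, height_cm: float, race: str) -> str:
    """Flag a measurement as abnormal if it falls below 80% of predicted."""
    cutoff = 0.8 * predicted_fev1_liters(age, height_cm, race)
    return "abnormal" if measured_fev1 < cutoff else "normal"

# The identical measurement is read differently depending on recorded race:
print(interpret(2.6, age=50, height_cm=175, race="White"))  # -> abnormal
print(interpret(2.6, age=50, height_cm=175, race="Black"))  # -> normal
```

Because the adjustment sits upstream of what the clinician sees, the same lung function can be flagged for one patient and waved through for another, which is precisely the kind of structural bias Ferryman says is hard to spot.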
And, of course, biased algorithms stem from biased data. Ziad Obermeyer, associate professor at the UC Berkeley School of Public Health, noted that people with less access to COVID-19 testing were unlikely to show up in statistics around the disease – and, in turn, hospitals in those areas may get fewer resources.
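A back-of-the-envelope simulation shows how that plays out. In the sketch below, two areas have the same true infection rate, but one has far less access to testing; allocating resources by confirmed case counts then shortchanges the under-tested area. Every number is invented for illustration.

```python
# Toy simulation of Obermeyer's point: two areas with identical true
# infection rates but unequal access to testing. All numbers invented.

import random

random.seed(0)
POPULATION = 100_000
TRUE_INFECTION_RATE = 0.05          # the same in both areas

testing_rate = {"area_A": 0.60,     # well-served area
                "area_B": 0.15}     # under-tested area

confirmed = {}
for area, rate in testing_rate.items():
    infected = int(POPULATION * TRUE_INFECTION_RATE)
    # An infection only enters the statistics if the person gets tested.
    confirmed[area] = sum(random.random() < rate for _ in range(infected))

total = sum(confirmed.values())
for area, cases in confirmed.items():
    print(f"{area}: {cases:>5} confirmed cases -> {cases / total:.0%} of resources")
# area_A captures roughly 80% of resources despite an identical true burden.
```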
That discrepancy “isn’t AI; this is a policy,” he said.
“But it highlights the fact that the very data we learn from, that gets put into these artificial intelligence algorithms, is in many ways what is leading these algorithms to reproduce the bias,” he continued.
In order to address bias, he said, it’s vital to look critically at which data are being used.
“The difference between variable 1 and variable 2 can make a huge difference between a biased algorithm – a biased policy – and one that’s fundamentally more just,” Obermeyer explained.
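As a hedged illustration of that choice of variable: the patients and numbers below are fabricated, but the pattern loosely mirrors the finding, in a 2019 study Obermeyer co-authored, that a widely used care-management algorithm trained on healthcare cost rather than health need under-flagged Black patients, because access barriers suppress spending at any given level of illness.

```python
# Fabricated example of the "variable 1 vs. variable 2" point: ranking
# patients for extra care by healthcare *cost* versus by health *need*.
# Access barriers mean patient B spends less despite being sicker, so a
# cost-trained model passes B over. All data are invented.

from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    chronic_conditions: int  # crude stand-in for true health need
    annual_cost: float       # spending, shaped by access as well as need

patients = [
    Patient("A", chronic_conditions=3, annual_cost=9_000.0),  # good access
    Patient("B", chronic_conditions=5, annual_cost=4_500.0),  # access barriers
    Patient("C", chronic_conditions=1, annual_cost=2_000.0),
]

def top_priority(label) -> str:
    """Return the patient a model would flag first under a given label."""
    return max(patients, key=label).name

print(top_priority(lambda p: p.annual_cost))         # -> A  (variable 1: cost)
print(top_priority(lambda p: p.chronic_conditions))  # -> B  (variable 2: need)
```

Same features, same model family; only the target variable differs, and the allocation flips.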
A biased algorithm isn’t just bad for patients. It’s also bad for business. Obermeyer noted that an algorithm that is missing key data about some groups of people isn’t performing at the top of its range: Reducing bias can make an algorithm more effective overall.
That said, Ferryman pointed out that sometimes an algorithm can be doing its job and still have a negative impact on racialized groups. A diagnostic technology that isn’t being offered to some populations, she said, may be working well in terms of accuracy, but not in terms of overall population health.
There is, however, reason for optimism, the experts said. Namely, AI and ML can be used to zero in on bias, not just to propagate it.
“There’s growing knowledge about the danger of algorithms, of biased data,” said Ferryman. “We can use data to further the actions and intentions that lead to equity, and I think there’s also reason for hope when thinking about how we can analyze data, identify where there might be biases, and say, ‘Well, how can these data reveal new information about disparities in the healthcare system that we may not be fully cognizant of?’”
“Data can be a force for evil, and reinforce disparities, but data can also illuminate disparities and show us where they exist so we can fix them,” said Obermeyer.
Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: [email protected]
Healthcare IT News is a HIMSS Media publication.