UC Berkeley's Ziad Obermeyer is optimistic about algorithms

As an associate professor at the University of California, Berkeley, Ziad Obermeyer has made waves throughout the healthcare informatics industry with his work on machine learning, public policy and computational medicine.

In 2019, he was the lead author on a paper published in Science showing that a widely used population health algorithm exhibits significant racial bias.    

And in recent years, identifying and confronting bias in machine learning has become an increasingly prominent concern across healthcare.

Obermeyer, who will present at the HIMSS Machine Learning & AI for Healthcare event next week – alongside Michigan State University Assistant Professor Mohammad Ghassemi, Virginia Commonwealth University Assistant Professor Shannon Harris and HIMSS Outside Counsel Karen Silverman – sat down with Healthcare IT News to discuss how stakeholders can take bias into consideration when developing algorithms and why he feels optimistic about artificial intelligence.

Q. Could you tell me a bit about your background when it comes to studying bias in machine learning?  

A. I came to this work, in many ways, from a place of great optimism about what artificial intelligence can and will do for medicine. A lot of the research that led to the bias work was actually about trying to build algorithms that work well, that generally do what we want them to do, and that don’t reinforce structural inequalities and racism. You know, I still have a lot of that optimism.

But I think we need to be so careful along the way toward that vision of an artificial intelligence that helps doctors and other decision-makers in healthcare do their jobs better and serve the people they need to serve.

That’s kind of the overriding message that I try to stick to in my work: This is really going to transform medicine and healthcare for the better, as long as we are so careful and aware of all of the places that it can go wrong.  

Q. And how can stakeholders and developers – and also providers – be careful in that way? What should they be taking into consideration when they’re relying on artificial intelligence to treat patients?  

A. We got a lot of publicity for some of our work on bias. And what we tried to do is turn that publicity into collaborations with a lot of organizations in health, whether they were insurers, or healthcare systems, or even technology companies. 

We learned some lessons from that very applied work that I think are really important for everyone who is working in this area to keep in mind.   

Maybe it sounds a little trite, but the most important thing is to know what you actually want the algorithm to be doing. What is the decision that we’re trying to improve? Who is making that decision? What is the information that the algorithm should be providing to that person to help her make a better decision?

Even though it sounds so obvious, that is often missing from the way that we build algorithms. It often starts from, “Oh, I have this data, what can I do with it?” – putting the cart before the horse.

I think the first and most important step is to articulate exactly what we want the algorithm to be doing, and then hold it accountable for that.

That’s where we started when we did our initial work, which was: OK, we want all of these population health management algorithms to be helping us understand who’s sick. That’s what we want to be doing. But what are the algorithms actually doing? Well, they’re predicting who’s going to cost money.  

And even though those two things are related, they’re actually quite different, especially for non-white people, and poor people, and rural people, and anyone who lacks access or is treated differently by the healthcare system.   
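To make the cost-versus-sickness distinction concrete, here is a minimal simulation – an illustration with invented numbers, not code or data from the Science study – of how a program that enrolls patients by predicted cost under-selects a group facing access barriers, even when true illness is identical across groups.

```python
# Illustrative simulation of label-choice bias: two groups with identical
# illness, but group B's access barriers suppress observed spending.
# All distributions and the 0.6 access factor are invented assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

group = rng.integers(0, 2, n)                        # 0 = group A, 1 = group B
illness = rng.gamma(shape=2.0, scale=1.0, size=n)    # true health need, same for both groups
access = np.where(group == 1, 0.6, 1.0)              # group B receives less care per unit of need
cost = illness * access * rng.lognormal(0.0, 0.3, n) # observed spending

# A population health program enrolls the top 10% by cost -- the label a
# cost-predicting algorithm is trained to reproduce.
enrolled = cost >= np.quantile(cost, 0.90)

for g, name in [(0, "group A"), (1, "group B")]:
    mask = group == g
    print(f"{name}: share enrolled = {enrolled[mask].mean():.3f}, "
          f"mean illness among enrollees = {illness[mask & enrolled].mean():.2f}")
```

Despite identical illness, group B is enrolled at a noticeably lower rate, and its enrollees have to be sicker to make the cutoff – the pattern the interview describes.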

I think that [question of algorithmic purpose] is easy to say, but it’s much harder to do, because it requires you to really understand the context in which algorithms are operating, understand where the data comes from, understand how structural biases can work their way into the data, and then work around them.   

One of the really important things that I learned from this work is that, even though we’ve found bias now way beyond that initial algorithm – almost everywhere we’ve looked in the healthcare system, through these partnerships – we’ve also found that bias can be fixed if we are aware of it, and we work around it when building algorithms.   

When we do that, we turn algorithms from tools that reinforce all of these ugly things about our healthcare system into tools that are just and equitable and do what we want them to do, which is help sick people.  

Q. One thing I’ve been wondering about is bias in application. Even if an algorithm were set up to be as neutral as possible, are there implementations that could be using it in biased ways? How could organizations guard against that?  

A. Let’s imagine that you were a profit-maximizing insurance company. It’s still not the case that you would build an algorithm that predicts total costs, because total costs are not avoidable costs. 

And if you start thinking carefully about what avoidable costs are and where they come from in our healthcare system, even those costs are going to be concentrated among the most disadvantaged people. Who doesn’t go to their primary care doctor because they can’t get the day off work, or because they can’t afford the copay? Who had a heart attack hospitalization that could have been prevented, had the person been taking aspirin? Whose diabetic foot amputation could have been prevented, had the person checked their glucose and been taking insulin?

Even for a purely profit-maximizing insurer or health system, those are [interventions] you really need to get to disadvantaged people and prevent these expensive problems before they happen.   

Health is special, because how do we use algorithms? We can use algorithms to target sick people and give them extra help and resources. Who do you want to find? The neediest people – the ones who are going to get sick – and those are the most disadvantaged people in our healthcare system.

Q. You mentioned at the beginning of this conversation that you’re feeling optimistic. What makes you feel hopeful about this field?  

A. Through a lot of these collaborations with insurers or health systems, we’ve seen a lot of really great use cases of algorithms. I think algorithms can do good basically wherever human decision-making falls short.  

If you’ve looked at the health system, you’ve no doubt seen at least one or two cases where humans don’t make the best decision. I trained as a doctor; I still practice emergency medicine. And decision-making is just really hard in healthcare. It’s a complicated sector, with a lot of really hard things that humans have to do and complex data to process – whether in the clinic, in population health or in insurance.

Anywhere that humans are faced with this super complicated set of data, and decisions that need to be grounded in those data, I think algorithms have a huge potential to help. We have this paper that shows that algorithms can really help a lot when we’re trying to figure out who to test in the ER for a heart attack.  

There are lots of other population health management settings where algorithms can really help predict who’s going to get sick, rather than who just costs a lot of money.

So there are lots of cases where I think algorithms are really, really important, and they’re going to do a lot of good. That’s point one.  

Point two is that we have to be really careful when we’re building those algorithms. Because very subtle-seeming technical choices can get you into a lot of trouble.   

They can get you into a lot of trouble by doing harm to the people that you’re supposed to protect, but they can also get you into a lot of trouble with regulatory agencies and state law enforcement officials. It has not been a very good defense for organizations to say, “Oh, well, we don’t even have race in our algorithms or in our datasets, so we couldn’t be doing anything wrong.” Ignorance is a very bad look in this area. That might be the most concrete message.

We’ve published an Algorithmic Bias Playbook, meant for an audience of people exactly like forum attendees. It’s a step-by-step guide to thinking about how to deal with bias in algorithms that you’re using or thinking about using.

Starting to think about that organizationally – having someone responsible for strategic oversight of algorithms in your organization, and having ways to quantify performance and bias in general – is really important for your mission and your strategic priorities. Algorithms are very powerful tools, both for achieving your goals and for staying on the right side of the law.
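As one illustration of what quantifying bias can look like in practice – a minimal sketch in the spirit of the audit the playbook describes, not code from the playbook itself – the function below checks whether patients from different groups have the same realized health need at the same risk score. The data frame and its column names (risk_score, chronic_conditions, group) are hypothetical.

```python
# Minimal bias-audit sketch: at each risk-score decile, compare the
# average realized health outcome across groups. If one group is
# consistently sicker at the same score, the score is a biased proxy.
# All column names here are hypothetical.
import pandas as pd

def calibration_by_group(df: pd.DataFrame,
                         score_col: str = "risk_score",
                         outcome_col: str = "chronic_conditions",
                         group_col: str = "group",
                         n_bins: int = 10) -> pd.DataFrame:
    """Mean realized outcome per score decile, split by group."""
    df = df.copy()
    df["score_bin"] = pd.qcut(df[score_col], q=n_bins,
                              labels=False, duplicates="drop")
    table = (df.groupby(["score_bin", group_col])[outcome_col]
               .mean()
               .unstack(group_col))
    # Per-decile gap between groups; large, systematic gaps are a red flag.
    table["gap"] = table.max(axis=1) - table.min(axis=1)
    return table

# Usage, with a hypothetical scored patient population:
# print(calibration_by_group(scored_patients))
```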

This interview has been condensed and lightly edited for clarity.  

Obermeyer’s virtual panel with Ghassemi, Harris and Silverman, “AI Models, Bias and Inequity,” is scheduled for 3 p.m. ET on Tuesday, December 14.

Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: [email protected]
Healthcare IT News is a HIMSS Media publication.
