In a special Meeting of the Minds episode recorded at HLTH, Andrew Toy, president and incoming CEO of Clover Health, joined Senior Director of Commercial Intelligence Chris Hemphill. They discussed using technology and data science to make healthcare more equitable.

Sign up to learn about future episodes of Meeting of the Minds
Key Points
–Inequitable access to quality care, especially for people of color, has long been an unintended consequence of healthcare’s payment structure. Challenging these inequities is both morally right and a smart business strategy.
–Innovators, such as Clover Health, leverage technology and data science to tailor plans to customer needs and offer coverage to those who might previously have been turned down for insurance due to complicated and exclusionary eligibility criteria.
–In-depth, nuanced data analysis coupled with a diverse workforce leads to insights that allow companies to meet the patients where they are and offer more culturally competent care.
Watch the Full Episode
Watch a Highlight
Read the Transcript
Chris Hemphill: All right, folks. Meeting of the Minds. We’re at HLTH 2022, and we’re having exciting interviews about healthcare innovation and technology. But when we think about innovation, like when we look at HLTH Conference, we tend to narrow it down to… We hear it all the time: blockchain, AI, “What are the latest technologies that are available to use?” But what I’ve really seen at this conference… And I’ve seen a significant number of sessions focusing on things like health equity and social impact, social justice, and healthcare among gender lines. And it’s really opening up the picture that innovation isn’t just limited to technology. So, we’re looking at innovation in a broader sense of: How do we innovate in terms of strategy? How do we innovate in terms of site? How do we innovate in terms of payment models?
If you’ve been watching or listening to our episodes, you’ll have heard over and over again, I ask the same question to everybody, which is, “If there’s anything that you could change in how healthcare is delivered, what would you change?” Everybody has pointed to incentives and the financial chassis, the financial structure, around how healthcare is delivered. So, that makes me extremely excited to speak with Andrew Toy, who leads Clover Health and can talk to us more deeply about these financial structures and how they impact our care. But we’re not just going to stop at the overall financial structures. Andrew and Dr. Carladenise Edwards delivered a session focused on structural racism. So, this is a good opportunity, and it was so awesome hearing you be able to get vulnerable and speak on this stuff, because we really want to dig into the financial structures that you’re aware of and have a deep understanding of, how they relate to structural racism, and how they can combat it. I just thought that a one-on-one conversation would be a way to dive in and dig deep.
So, just a quick intro on your background, Andrew. I’ve done you a disservice. I would love for you to just give a quick overview on your background and Clover Health.
Andrew Toy: Absolutely. Well, thank you for having me on. Thanks for the opportunity. So, myself, my name’s Andrew Toy. I’m the president and the incoming CEO of Clover Health. Thank you so much. And we are a Medicare Advantage company, meaning we are a private form of Medicare. We look after those who are 65 and older and the disabled. And really, as you were saying, the interesting thing about Medicare Advantage is it is a payment model which aligns incentives so that we are really responsible for the people who are on our plan, and we are given a fixed amount of money per patient, per member, to actually look after them and pay the medical bills. And if those costs come in lower, we create some margin, a profit. If those costs have come in higher, then we are responsible for those costs.
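The capitated arithmetic Andrew describes, a fixed payment per member measured against actual medical costs, can be sketched in a few lines of Python. All figures here are hypothetical and purely illustrative, not real Medicare Advantage rates:

```python
# Capitation sketch: the plan receives a fixed payment per member and
# keeps the difference if medical costs come in lower; if costs come in
# higher, the plan absorbs the loss. Numbers are made up for illustration.

def plan_margin(capitated_payment_per_member, medical_costs):
    """Total margin across members: payments in minus claims paid out."""
    revenue = capitated_payment_per_member * len(medical_costs)
    return revenue - sum(medical_costs)

# Three members, each funded at a flat $12,000/year (hypothetical).
costs = [9_000, 11_500, 14_000]   # what each member's care actually cost
print(plan_margin(12_000, costs))  # 1500: costs came in under the pooled payments
print(plan_margin(12_000, [15_000]))  # -3000: the plan eats the overrun
```

The sign of the margin is the whole incentive story: the plan only profits by keeping members' real costs below the fixed payment, which is what aligns it with keeping members healthy.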
So, the government and society actually have good control of costs through the Medicare Advantage program. And as you were saying, in terms of a payment model, the interesting thing that we are really obsessed with is: How do we look at that structure? A lot of other insurance companies would say, “Hey, if I’m going to do that, how do I find the healthiest people? How do I find those populations? Maybe they’re poorer people. We don’t want those folks. Maybe we want more middle-class folks. Maybe we want, well, folks from more white communities. Maybe we want fewer Hispanic folks.” A lot of times, we look at that and say, “Okay, well, that’s in the data, right?”
It’s just not possible. It’s structural: in the cost data, these folks are more expensive to look after. So, why would an insurance company want to look after those people? And I think the way we see it at Clover is: We need to solve that problem. It’s not an easy problem to solve, but we need to solve it so that we as a society can fund healthcare and we can show that when we make healthcare personalized, when we really make it accessible to everybody, what that means is we will now serve everyone in the country, we will help them, which feels great mission-wise, to have great health outcomes, and we will do it in a way that is sustainable, to your point, sustainable from a financial perspective; which is very different from how most financial institutions would think about healthcare.
C.H.: So, you phrased that in a powerful way, because we know that capitated models and the pursuit of being profitable under this model is not necessarily the lowest-hanging fruit.
So, the question I want to get into… I can’t talk about this without taking a step back into just looking at your personal background. I saw a history at Google and a history with technology and a technology focus. And I’m just curious about how your mission has driven you from big tech into healthcare and just more about your personal mission and why you’re embarking on this very difficult challenge.
A.T.: Oh, well, thank you for that. So, healthcare’s really important to me. Let me get to that in a second. But for my background, I’m a computer scientist by training. I actually grew up outside the U.S. I came to the U.S. to get my master’s degree in computer science from Stanford. Software engineer by training. Had my own startup from 2010 to 2014, totally focused on mobile and IoT [Internet of Things]. That’s my background. I’m a systems engineer, an operating system engineer, in terms of how my technical work has evolved. And I sold that company, along with my co-founders, to Google in 2014. Spent some time there building technology. Our code ships in almost every Android device these days, really proud of that, to really bring security and control to the Android environment. And so, the idea of really, really big scale, I think, is built into me as a technologist, right? Really big scale. How do you help as many people as possible?
So, coming out of Google, I was thinking, “What am I going to do next?” And what I wanted to do next was really mission-oriented. I wanted to really help people. And to be clear, build a huge business, a great business, while helping people at the same time. And so, I looked at healthcare. It makes sense, right? As you said before, there’s a lot of misaligned incentives. It’s not really using technology. This conference is great, but it exists because it’s not the most forward-thinking industry. And as we looked at it, I think it became very clear to me that there’s a lot of these structural problems, as we discussed, in healthcare itself, where I don’t think anyone’s deliberately going out there to not try to look after everyone. No one’s really deliberately saying, “Well, I’m just not going to use technology.” Right? But it’s built into the way people think, the way they’re trained, the way the incentives are lined up, staying on fee-for-service as a chassis. Like you and I were talking just now how just fee-for-service as a payment mechanism can prevent the adoption of an AI bot or something like that. Right?
I wanted to really help people. And to be clear, build a huge business, a great business, while helping people at the same time. And so, I looked at healthcare. It makes sense, right? There’s a lot of misaligned incentives. It’s not really using technology.
Andrew Toy
You’re like, “Well, that’s weird. Why are those related?” Those are absolutely related. At the end of the day, those incentives aren’t aligned. So, when I looked at technology, I said, “Well, technology without adoption is meaningless.” Right? There’s no impact from that. So, we got to get the adoption. So, it really started me going down that rabbit hole, looking at those payment models, looking at how we could make adjustments and create this virtuous cycle where if we build the right tech and get it adopted, it leads to sort of like a shift in that payment model, which lets me serve more people and serve everyone, which is my goal. Which leads to more adoption of our technology, which is the Clover Assistant platform, and that cycle then can actually create a virtuous cycle versus a vicious cycle. The vicious cycle keeps technology adoption down. The virtuous cycle, when connected to the chassis of healthcare and finance, can drive its adoption. I think that’s what we’re seeing.
C.H.: During the presentation, Andrew said something that really stuck with me, that was really powerful. Very much alluded to it earlier, but he framed up inequity as the basis of insurance, about what you were just talking about. If you’re looking at the data, then if you just look at it purely without any kind of ethical lens or perspective, then you’re creating a negative cycle for people who have been in previously disadvantaged areas. So, can we go into some specifics on how-
A.T.: I know, obviously, you have a data background, too, so we look at this all the time, right? It’s very easy… I love the way you framed that. It’s very easy to justify almost any action by looking at data, because then, “Well, that’s just what the data says,” right? So, let’s roll back insurance. Insurance was originally created so that major adverse events would not destroy an entire industry, right? So, you know 5 percent of ships will sink at sea, so what you want is to say, “Okay, well, that 5 percent loss is going to put those merchants out of business.” You can’t afford to lose a ship. But if everyone in that trade buys some insurance, well, that 5 percent can file a claim and afford to buy back their ships, and the whole industry stays in business, right?
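The pooling arithmetic in the ship example works out in a few lines; a minimal sketch using the quoted 5 percent loss rate with made-up fleet size and ship value:

```python
# Pooling sketch: many merchants each pay a small premium so the few
# who lose a ship are made whole. The 5% loss rate comes from the
# conversation; the fleet size and ship value are hypothetical.

ships = 100
loss_rate = 0.05          # "5 percent of ships will sink at sea"
ship_value = 200_000      # illustrative value per ship

expected_losses = ships * loss_rate * ship_value   # total payout the pool must cover
break_even_premium = expected_losses / ships       # what each merchant must chip in

print(expected_losses)      # 1000000.0
print(break_even_premium)   # 10000.0: each merchant pays 5% of a ship's
                            # value, and no single sinking ruins anyone
```

The point of the sketch is the trade: a certain small cost (the premium) replaces an uncertain catastrophic one (the whole ship), which only works when risk is spread across people to whom the loss does not happen.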
All insurance is about a major catastrophic event which is effectively covered by a broader set of folks to whom that event doesn’t happen. Right? Okay, that makes sense. Healthcare used to be that way. It’s actually not illogical. If you roll back to the 1930s or the ’50s or something, most healthcare was catastrophic. We didn’t have enough therapeutics, and you’re like, “Oh no,” and that’s really bad, and you either died or insurance covered the hospital bill so you could recover from it. So, that was what it used to be. Now, insurance actually has to be something where we can look after a lot of people. We should be looking after a lot of people. It’s not catastrophic anymore. But I think that built into that structure of how insurance is run, as you were saying, is this idea from before…
Like if you were insuring ships, you didn’t really want any ships to sink at sea. And if you could avoid it, you would say, “Well, how do I figure out which ships are going to sink and how do I make sure I’m not the one insuring those ships?” That would be a very logical thing for you to say, right? If you were saying, “Hey…” like in property and casualty, you would try… Also in car insurance. You’d be like, “Who are the risky drivers? If they’re a risky driver, I’m going to actually increase their premium. I’m not going to charge everyone the same.” Right? And that’s another form of inequity, where you say, “Well, I’m going to charge some people more, and I’m going to charge some people less for the same product.” Right? That’s a definition of inequity.
So, it all mathematically makes sense until you take it and apply it to healthcare. And then, when you go to healthcare, and you suddenly start asking yourself, “Do you think insurance companies should be saying, ‘Well, you’re sick so you don’t get insurance?’” Well, I mean, that defeats the whole point of health insurance. You’re actually saying, “Well, yeah, that’s great. You just cover all the healthy people,” but what’s the point in that case? And I think that when you look at that, that’s why the word “insurance” doesn’t even really apply to health insurance anymore. It’s closer to SaaS actually than it is to insurance anymore.
There’s, of course, risk in there, but we have to apply the idea that people should be able to get this care irrespective of their background or how they grew up or their individual risks. On a personal basis, I have certain genetic risks that other people don’t have. Everyone carries specific aspects of their life with them, and we need to factor this in and not let that insurance-calculation mindset take over. Otherwise, to your point, if we just follow the data, do what’s probabilistic, run a probability analysis, look at who we want to cover, and go for the profit, what happens is you end up throwing out a bunch of people and keeping some other people; and that is the definition of inequity in healthcare.
C.H.: Well, yeah, and great point. And actually, that was a really major part of my previous job, to look at the algorithms that we were using and understand… And honestly, these algorithms, before any kind of adjustment, before we started asking questions about bias in healthcare, these were healthcare data sets and algorithms built on hospital EMR data, and there was significant bias, significant under-serving of Black patients, Asian patients, etc. So, these are things that definitely show up in the data.
A.T.: 100 percent. And I think that’s the thing we have to think about when we look at this, right? It’s easy to say, “If we had perfect data, we could solve this.” We’re not going to have perfect data. We know that. Does that mean we shouldn’t try and solve it? Of course we should try and solve it. And I think what I’ll tie together here is obviously the mission-moral part of me says, “It’s the right thing to do,” and that’s a big part of it. But the other part of it is, if you can solve the hard problem, if you can solve the actual hard problem, the lazy/easy way is to say, “You know what? I just won’t take those people’s money.” You know what I mean? “I won’t take their premium. I won’t sell them a policy.” But you know what? That’s the shortsighted business view, too. Because what if you could sell them a policy, and what if you could find a way to make them healthier so you don’t lose money on those people, right? Isn’t that an even better form of insurance, in this case?
So, I actually think, just to be clear, and I did say this earlier, but I’ll say it here, insurance companies who are doing this thing where they’re throwing out people, or just charging more, et cetera, are taking a lazy way out. You can use the data, and if you build the right tools, actually jump to the next phase, where I think we are actually improving everyone’s lives and actually still making money; and then now, accessing an even bigger addressable market and helping more people, and that’s the next generation. That’s the opportunity here. So, there’s a moral aspect, and there’s a laziness aspect. It just made me think of that when you were talking about looking at the data. There’s a laziness in there.
There’s a laziness when you just look at the data. You’re like, “I’m just going to look at it this way.” But if you spent more effort, you could do better. Not perfect, but better. Right?
C.H.: Yeah, yeah. There’s a laziness aspect, and there’s a failure to understand. A failure to bring leaders to the team who are going to ask questions about that data in the first place. And what that does is… Yeah. But I don’t like to say that the reason you should focus on health equity is to drive more money into the system. Our lives shouldn’t be valued just based on that. But a reason to focus on health equity is the opportunity. Like if you’re looking at the structure of the data and using that to understand deeper and more nuanced things about who you’re serving, then that does open up opportunities, right?
A.T.: 100 percent. So, I think one of the answers to structural racism, which causes tremendous moral issues and inequities as well as a smaller market opportunity, is to create a new structure that is actually superior, if that makes sense. I’m like, “Well, this new structure… Yes, it’s hard to create that new structure, but if I do, there are so many benefits to doing so.” And then, that new structure is what, once again, gets us off the vicious cycle, which is self-reinforcing by definition, and gets us onto the virtuous cycle. I think that’s what we need to use technology for, because that’s what technology is actually good at.
C.H.: And just curious about any programs or uses of technology that have been directly aimed at combating the structural racism problem.
I asked a very limiting question. I said “uses of technology,” but in general, I’m sure technology has only supported some of these efforts; you might also be addressing other aspects of people’s lives that are correlated with poor health outcomes. So, let’s leave the door open for that, too. I don’t want to limit it to just technology.
A.T.: Let’s start with this. It’s an example I love to give because it’s a little counterintuitive, then maybe becomes a little more intuitive. So, I think the COVID pandemic, while obviously very challenging and caused a lot of suffering, did open up creativity. Outside of healthcare, distributed work. Remote work. We all know it’s possible now. If someone says remote work’s not possible, we’re like, “Well, of course it is.” I mean, come on. But we were forced into that, right? People didn’t say, “Now let’s go to remote work.” We were forced into that.
So, one interesting aspect that came up is when we looked at telehealth, we had to go to heavy, heavy telehealth during the pandemic. We all know that. Right? Video and audio. Because doctors’ offices were closed. The brick-and-mortar offices were often closed, or not a good idea for a senior to go into those offices. So, we went to heavy telehealth.
And let’s talk about video telehealth. We offered it to all our seniors. We moved to it very quickly. Clover Assistant had a video technology we built out really quickly. And then, what we saw was really interesting in the data. What we saw was that there was a different uptake in willingness to do video calls and video telehealth depending on the background of the senior. And it’s not a technology background directly. It’s not an age background. What we saw, for example, this is what caught us onto it, is that people from a Hispanic background were far and away more likely to take a telehealth visit. They were very comfortable taking a telehealth visit versus people from other ethnic backgrounds. And we were like, “Huh. Well, that’s weird.” We wouldn’t have anticipated that.
But what we actually found when we dug in was that seniors from a Hispanic background are, not always of course, but actually generally much more likely to be comfortable with video-calling in general because they have a much higher chance of having WhatsApp installed. And they’ve done many calls on WhatsApp, video calls on WhatsApp, because they often have family members who live overseas. And so, they’re just used to having WhatsApp video calls being part of their life even prior to the pandemic. What that meant was when we shifted to telehealth video and telehealth, it was a very natural motion for them to be able to move towards televideo, because they were like, “Yeah, sure. Why not? I have WhatsApp. Will you do a call over WhatsApp?” And so, what we realized was, “Okay, forget all these portal-based technologies. The answer is let them do their video call on WhatsApp because that is what they’re comfortable with, and they will just do that doctor call if you can do that.”
Whereas if you forced them to use video through, say, a member app, if I was like, “No, you have to use the Clover app to do it,” no. That was very confusing to them. But if you just said, “We can do it over WhatsApp,” if you just said those words, boom, no problem. “Call me on WhatsApp. Here’s my number.” Right? And the doctor calls. Does the video visit. Works great. So, that’s a very interesting aspect of something which feels like relatively basic technology, probably, to you and me, but was completely transformative in the delivery of care during the pandemic, and we’re now rolling it forward into other areas. There was an exception from CMS during the pandemic for how you could use these tools, and they’re continuing to look at these things, because I think it makes a huge difference if you say, “You can use WhatsApp,” versus, “No, you have to use your health system’s member app,” the one that you download and you can’t remember the password and all these things. Huge difference in that uptake. So, a very basic example to kick off.
C.H.: I love that example. And I want to ask a leading question. The purpose of this question is so that you can have an example of how to discover something like that within your population. So, transparently, how did your team discover that Hispanic patients, Hispanic members, had a higher likelihood of using WhatsApp?
A.T.: Yeah, I’m sure it’s only because you know a little bit of what we’re going to say. So, first of all, we start with some quant [quantitative analysis], right? So, in the quant, we’re like, “Huh. This is higher uptake than we thought we were going to have in general.” Right? And then, I think a lot of people stop there. They stop at the top-level roll-up, and you’re like, “Huh, interesting,” and then you can’t find anything from there. But if you break that down to the next level of granularity, then you start slicing the cohorts, and then you find out, “Okay, it’s actually people from this background.” We didn’t have that hypothesis. I told you right now, it surprised us. But after you slice it a few ways, the data science team’s like, “Okay, no, it’s this population that’s disproportionately…” It’s not a generally high increase. This population is disproportionately high.
Then we switch to qual, like qualitative analysis. That’s when we reach out to the nurses who care for these people, and say, “Hey, wait, what do you think is going on with these folks?” And this is where it’s great to have a diverse working population. A lot of our nurses are also Hispanic themselves, and they’re the ones who say, “No, honestly, I just offered them WhatsApp. I know they’re going to have WhatsApp, so I just say, ‘You know what? I’ll call you on WhatsApp,’ and they’re like, ‘Yeah, I’ll do it.'” And it’s so intuitive to them, they would never think to bring it up, unless we ask them that question. “Why is this population higher?” They’re like, “I don’t know. It’s because when I know it’s them, I call them, I say, ‘Hey, just do WhatsApp.'” And the nurses, they get right on. That’s how it happened.
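The quant step Andrew describes, breaking a single roll-up number into cohorts to surface the disproportionate one, can be sketched in plain Python. The records below are entirely fabricated for illustration, not Clover data:

```python
from collections import defaultdict

# Toy records: (self-reported cohort label, completed a video visit?).
# Fabricated to show why slicing matters; real analysis would use
# actual member attributes and far larger samples.
visits = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def uptake_by_cohort(records):
    """Per-cohort completion rate; the overall rate hides this split."""
    totals = defaultdict(lambda: [0, 0])     # cohort -> [completed, seen]
    for cohort, completed in records:
        totals[cohort][0] += int(completed)
        totals[cohort][1] += 1
    return {c: done / seen for c, (done, seen) in totals.items()}

overall = sum(done for _, done in visits) / len(visits)
print(overall)                   # 0.5: unremarkable at the top-level roll-up
print(uptake_by_cohort(visits))  # {'A': 0.75, 'B': 0.25}: the real story
```

The roll-up says nothing; the cohort breakdown flags group A as disproportionately high, which is exactly the signal that then sends the team to the qualitative step of asking the nurses why.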
C.H.: Well, I’m sad that I have to wrap this up pretty soon, but I have to follow that up: having the right people on the team, having the right people on your data science teams, is what gets people to ask the right questions.
The technology is there to ask the question of: “Well, we see the overall performance, but how does it break down by cohort?” But it’s not going to happen unless people are asking the right questions. So, get those right people in place. This is for technologists: if you’re in the data, or if you’re a leader whose decisions have broad effects, get the right people in place and ask questions. Don’t stop at the performance of some model, the overall performance, before you disaggregate and get more granular; the overall numbers could be hiding some very murky truths.
So, I do have to wrap up. And I’ve already spoiled my final question. I already told you. I told the whole audience, and everybody’s seen that I ask, “What’s the one thing you would change about healthcare?” So: What’s the one thing you would change about healthcare?
A.T.: So, I talked a lot about the financial side, so let me pick something different: the decentralization of healthcare. It’s decentralization of healthcare. Right now, we are so concentrated in brick-and-mortar settings, and especially hospital-based brick-and-mortar settings, that we need to decentralize care out of those settings and use technology to get the right amount of care, from exactly the right license of clinician, to wherever is most convenient for the actual patient. Our members, right? That decentralization then, once again, changes the structure and nature of care.
Because if you walk into a hospital, you’re going to get the person you get when you walk in. If I can look at you in your home, people are like, “Oh, that’s more convenient.” It is more convenient, but I can also now match your clinician with you: somebody you’ll trust more, who maybe resonates with your background or offers culturally appropriate care, and I’m going to get a better outcome from that. So, it’s actually not just because I want to… It’s not a real estate play. It’s because decentralization lets us curate how we’re going to give care and make it much more flexible. And I truly believe that’s the future.
C.H.: Love that response. I love it because it is also controversial. I was in a presentation earlier where one panel was talking about how we need solutions with extremely focused use cases, while the other panel was saying, “Hey, we’re not interested in anything unless it’s big and hits a bunch of different things at the same time.” Subject for another conversation, but I love that you brought it up.
Andrew, for the people that want to follow you and just understand your perspectives and just keep the conversation going, what’s the best way to do that?
A.T.: I write pretty frequently on LinkedIn. I think we got together there on LinkedIn earlier, so you can find me on LinkedIn. Just look for “Andrew Toy” from Clover and go ahead and follow me there.
C.H.: Well, thank you again, Andrew, for being frank, being vulnerable, and giving such detailed answers. And honestly, my leading question, I didn’t know where I was leading. I didn’t know what your process was, but just hearing about-
A.T.: We didn’t prepare ahead of time. Yeah, yeah.
C.H.: Yeah. So, I understand that, and I just thought it was a brilliant answer. I hope that people take a lot away from it. For folks who want to follow up on these kinds of conversations and just keep on viewing from different healthcare leaders with different perspectives, follow Meeting of the Minds. If you hop on our landing page, we’ll make sure that you get an email whenever these types of conversations are coming out or when we’re doing one live.
Until next time, thank you.