Founder & President Alison Darcy, PhD, sat down with Eric Topol, MD, a leading expert in A.I. and medicine to discuss large language models (LLMs) and their opportunities for medicine.
Topol is the author of Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, among other books. He is also a cardiologist and the Founder & Director of the Scripps Research Translational Institute.
This is the first of a two-part interview: “A.I. in Healthcare: The Hope and the Hype.” See part two here.
Why now: Health data has become more abundant and accurate (genome, microbiome, and EHR). At the same time, A.I. has moved from unimodal, meaning it could only handle one type of data, to multimodal, meaning it can process text and images together. This allows doctors to process all those layers of data and opens up a range of possibilities: more time for doctors to spend with patients, more personalized care, and a more empowered patient.
On how A.I. helps clinicians: “Keyboard liberation,” a concept Topol coined, is the hope that doctors will soon be able to use LLMs for time-consuming administrative tasks, such as note-taking and data entry. This will free them up to spend more time with patients and ultimately lead to better care and deeper trust.
On how A.I. empowers patients: Topol notes that once LLMs are properly trained in medicine, including on our personal health data (and privacy issues are addressed), patients will be able to ask them many of the same questions they would normally ask a doctor. This empowers patients with more personalized and accurate information.
Dangers: Amplifying bias and privacy concerns are two of the potential dangers discussed. Both Darcy and Topol see the potential and power of these tools, but acknowledge there is much work to be done to make them part of standard care.
(edited for clarity)
Hi, everybody. It’s an absolute pleasure for me to be here today to interview Dr. Eric Topol, who’s a cardiologist, one of the world’s leading experts on AI in medicine, a researcher, and just a prolific author as well. Dr. Topol, thank you for your time and for joining us today.
Oh, thank you, Alison. It’s great to be with you.
I’m sure you are extraordinarily busy given what a moment this is for AI right now, and all of its potential benefits in medicine. But before we jump into that, I realize after having read a couple of your books and loads of your papers, it occurred to me that I don’t really know how you ended up being…well, you’re a cardiologist, right? But how you also ended up being one of the world’s foremost thought leaders in the role of AI in medicine. What was the journey for you? Were there key moments that you can recall that fed into that perspective? Because it’s a unique perspective, and we’re so lucky to have it in the field.
I think it was a natural progression for me. I’ve always tried to think of where the puck’s going or where medicine’s headed.
So back in the late Nineties, it was clear that we were starting to accumulate a lot of data. That’s when genomics was really taking off. Then we started the sequencing, and then the sensors really got going, and the smartphone connected to the Internet in ‘07 with the iPhone. And it was becoming clear that the problem with data was analytics, and the solution was going to be A.I.
So that’s where I had to kind of go into autodidactic mode because I’m not a computer scientist, but I had to hang out with a lot of great ones, like some people you work with and many others. So that’s kind of how I got into it.
It’s so exciting. I have to say, as an old doc in medicine, there’s no time I can remember that’s as exciting as what we’re seeing right now with these foundation and large language models (LLMs) and how they are inevitably going to transform medicine.
Let’s jump into the large language models. I know you’ve written a lot recently on the opportunities for medicine. Could you speak to some of that? What do you see as additional opportunities now based specifically on the LLMs?
I think what’s happening now with these large language or foundation models, generative A.I., whatever you want to call it, is we’ve gone from this narrow A.I., unimodal, where the charge was, help me with one type of data. And now we are into multimodal, particularly with GPT-4, which has really been the first one to cross that line.
Now it’s text and images, and soon we will even go beyond that as these evolve. So no one in medicine has been able to integrate and process all these layers of data in a given individual, no less in a population. So to be able to have models that will do that and do it accurately and meaningfully, that’s what has been holding us up.
So there are so many things that come out of this multimodal A.I., like a virtual health coach that looks holistically, not just at one thing. For example, in mental health, what about the interactions with your sleep and your physical activity? You have to take data from multiple sources and, in real time, be able to extract the meaning of that data and reflect it back.
So this is exciting because we didn’t really have a way to do that, and now–we haven’t done it yet in medicine–but we’re on the cusp of being able to.
I think some of the focus has also been on the quality of the data. Ten years ago, we were all very excited about the potential of big data and then realized it was a lot of big noise as well. I know that recently, in a piece in Nature Medicine, you spoke about the need for really well-annotated datasets according to phenotype and so on. Could you speak to that? In terms of data quality and the potential that data itself has to unlock some of these problems that we have in medicine, where are you on the spectrum from optimism to disillusionment?
This is a great question you’re asking. I think we have to agree that each of us is defined by many layers of data, or at least partly defined. And so this multimodal or multi-layered data that defines our uniqueness, it’s not only what is captured in various medical encounters with electronic health records (EHR), which is only a small part.
It’s also things like our genome and our microbiome and all these biological layers. And then our physiome through sensors, our anatome through scans, our environment, our immunome, our socioeconomic level. So some of those are really high-fidelity data: the genome now has gotten complete and highly accurate. And even the gut microbiome sensors have gotten better and better.
But some are still somewhat shaky. Our electronic records–there are many sources. People move, and the records are cut-and-pasted largely–80% of what’s in them–lots of errors. So it depends on which layer you’re talking about. But overall, the trend is toward higher fidelity data.
And the charge here is about inputs. If the inputs are compromised, if they’re incomplete, then the outputs are going to follow suit. And that’s the problem.
The garbage in, garbage out principle, basically. Yeah, I love that. I mean, you’re speaking to so much that is meaningful for us as well. We tend to think about data captured in the context of everyday living and the lived experience, which is so important for behavioral health and mental health because, of course, it’s in that everyday lived experience in which the problems arise.
I’m really glad you mentioned that. And I think of the mental health story as part of our physical health. That is, we have sensors that can help objectively capture metrics. We’re not using them nearly as much as I hope we will in the future to help with subjective things. And I think that will be another part of the accuracy of data, or the completeness of data, as we move forward.
I love the phrase that you coined, ‘keyboard liberation.’ If we can deliver true value to the physician and the clinical relationship, that’s what I’m interested in as well. Do you see that? Do you see the potential for that in terms of clinician tooling, apart from raw insight delivery or information delivery? What other kinds of use cases have been knocking around in your head?
The one that I think clinicians will embrace is keyboard liberation. Over the years, since the introduction of electronic records, we have been transformed into data clerks. And this is really tied into the burnout, disenchantment, and depression that’s pretty widespread among clinicians. So the hope is to remedy that, so that their keyboard use would be de minimis.
This is now being tested with 2,000 doctors at the University of Kansas, and it’s all done through voice. Whether that’s the actual clinic note, whether that’s discharge summaries, whether that’s the operative procedure note, and on and on. This is a pluripotent story, but the immediate kind of fantasy is that clinicians would no longer be data clerks. We’re, of course, just at the starting gate, but it looks like this is going to be a winner.
Now, it still means humans in the loop. You have to review these notes. But the point is then you can say, I want that note to be in the language and the literacy level of my patient, and I want it to lead to nudges to the patient, you know, every X amount of days about this or that.
And you basically can use this format to do things that we couldn’t do before. So it goes even beyond just this kind of magical transcription level. It gets rid of having to say: I want to make the appointment. The scheduling, the prescriptions, the coding, and all these functions are just taken away from the clinician.
The pre-auth. The idea that the way you write a letter to the insurance company can affect whether your patient has to pay out of pocket for care or not–I mean, there are so many potential use cases.
How does this fit with The Patient Will See You Now, one of my favorite books? I know it’s very early to say–we’re brainstorming at this point–but do you see immediate use cases or ways in which this maps onto the concept of the empowered patient?
Sure, because we tend to just think about large language models helping clinicians. But in this democratization that we’re moving further and further along with, patients are going to benefit equally or more. In fact, in this new book by Peter Lee and colleagues about GPT-4, there’s a whole chapter about the empowered patient with large language models, and I really thought that was important, because of the idea that you could do so much as a patient when you have this type of ability to prompt.
Just today there was a paper published about patients asking for breast cancer screening advice, which is obviously a very complicated topic. They asked 25 questions–this is the University of Maryland–and the answers were 88% correct, and only one was inappropriate. So instead of having to turn to the doctor to be able to get some preliminary advice, you can put in your own data–not just a Google search, which tells you about the general population, but ‘here’s my data, you know, GPT-X, now what do I do?’
And so there’s going to be a lot of this. Of course, this is why we need regulatory oversight because it has to work well. If it gives bad recommendations, that could be dangerous. (See more on the role of regulation in part 2.)
I’ve written about this, too. I feel like one of those dangers is undermining public confidence because of one or two ill-informed design decisions that a couple of players have made.
Yeah, we have to acknowledge there are liabilities, but let’s not miss what’s really happening here, something that is quite extraordinary and unique in our time. And it’s just the beginning. I think most people…I recently wrote a piece about GPT-X just because we went from ChatGPT on November 30th to GPT-4, released on March 14th. This is extraordinary. It’s so hyper-accelerated.
Does Moore’s Law even apply anymore? I think we’ve gone way beyond Moore’s Law at this point.
This is a new law. Moore’s Law was every 18 months; this is something happening every few weeks. This is crazy stuff, but it just means it’s going to get better, and we’re going to start to weed out that hallucinatory behavior to some extent. It will never get perfect. There’s still the amplification of bias, the lack of fairness, and privacy issues–these models take them to another level.
This is the challenge that we are confronting: a remarkable ability to analyze and process data, while acknowledging that we don’t ever want to rely on machines 100%.
What I’m hearing is this is a tool. It’s going to have limitations. It’s going to have benefits. But actually, it’s a very powerful tool, and we owe it to ourselves, the world, and the potential benefit of industry to really lean into the benefit and maximize those.
So as a tech optimist, do you see any places where you think that we should not tread with LLM technology, or with A.I. more broadly?
Yes. There are concerns that are very legitimate about the idea that we could make misinformation and disinformation, in general, much worse. Whatever we thought about deepfakes and GANs (generative adversarial networks), we could take to a whole new level. I think this is a big problem. We’ve already seen, particularly in the last few years with the pandemic, the blurring of truth, fact-free claims, and a movement towards mis- and disinformation.
My biggest worry is that we could see that on a whole new scale. We’ve already seen the beginnings of LLMs being misused. And I’m worried about that and how it could certainly make things considerably worse. We’re not in a good state right now. And that’s, to me, perhaps the number one concern about how this could go, the downside.
Coming to the end. We always ask everybody that we interview this one question: if you could change anything in the way that healthcare is delivered today, what would it be? And I’m going to suggest that you would probably say everything, or a lot of it, but if you had to choose the number one?
It depends on what time frame you’re asking. Right now, I would say, just liberate the damn keyboards and get the gift of time for patients and the patient-doctor relationship to be restored.
But if you say, what do I want in five years? I want to get rid of hospital rooms. Hospitals become the place where you have the ICU, emergency rooms, and operating rooms. But the regular hospital room will become obsolete in the next 5 to 10 years. We have the ability to do that, and we should be doing it.
Right now there’s that immediate thing I’d want to see, which is let’s get the gift of time. Let’s get patient autonomy going: the ability for doctor-less screening of all the common conditions that are not life-threatening–skin lesions and cancers, urinary tract infections, ear infections in children. The list goes on and on. Obviously, that extends to mental health as well, to support people. But I do think it comes down to the power, and the willingness to validate these things, as we get more and more bold and capable over time. So the answer to the question, for me, is that it’s just a matter of time.