Thomas Insel, MD, a psychiatrist, neuroscientist, entrepreneur and author, is a national leader in mental health research, policy, and technology. For 13 years, Dr. Insel served as Director of the National Institute of Mental Health (NIMH). He has also co-founded numerous startups, including Mindstrong Health and Humanest Care, and sits on the boards of the Steinberg Institute, Fountain House, and the Foundation for the NIH, among others.
This conversation stems from deep expertise in the mental health field, from both Dr. Insel and our founder and president, Alison Darcy, PhD. Join us for an in-depth discussion about the future of mental health care and a vision for necessary change in 2024 and beyond.
- Standards of fidelity are key to improving the quality of care patients receive
- Technology can serve to enhance human capabilities, offering more nuanced insights and support for patients and providers
- Science, curiosity and passion all intersect as driving forces for change in mental healthcare
(edited for clarity)
Alison Darcy
Thank you so much for your time. I can imagine a lot of people want to talk to you. And I was so thrilled because, as I mentioned in the email, I cannot think of another person that has the breadth and depth of experience that you have with mental health and mental illness. And from personal experience and family experience, which many of us have. Being a psychiatrist and neuroscientist, researcher, head of National Institutes of Mental Health, startup entrepreneur, founder. And then of course, at government level as the California czar for mental health. How do those perspectives fit together? I’m so intrigued, as you went through your career, and you just have such a beautiful explanation in your book, but how do they work with each other? Is there any one perspective that you keep coming back to that sort of anchors the rest? Or informs the rest? Or are they just sort of mutually beneficial?
Tom Insel
It’s a question that partly just reflects my age; I’ve been around a long time, and I’ve been doing this in one form or another for five decades. I think most of it comes out of a passion for individuals that I’ve known, either in my family or close friends, who struggle with some aspect of mental illness. And then, just for me, a curiosity of trying to understand: what is this about? So I think the grounding really comes out of that lived experience. The part that continues to drive me still is the mystery of it all, and how difficult it is to really understand what these disorders are about. I suppose the part of my career that I still keep coming back to is the science. At the end of the day, I think I’m fundamentally a scientist more than anything else. And so the idea that there is a truth there, something that we need to keep exploring, to understand better, is always a driving force. And curiosity, especially as you age, is a really good thing to hold onto. When I was about your age, I had a friend who was in her late 80s and still very active, and we went out to dinner one night. And I said, How do you do it? I mean, how do you keep going? You’re still involved! She said, it’s just two things: I’m curious and I’m furious.
Alison Darcy
That is fantastic. I love that. When people ask me why I founded the company, I don’t know how to express this more eloquently, but it came from a discontent, an annoyance that there’s something not quite right here. And then that coupled with the creativity; you know, the best way to complain is to make something, which really is an excellent reason for founding a company. But I couldn’t agree with you more. Curiosity is really interesting, because you could get burnt out. This field, in lots of ways, is really hard. It is very complex. And I think that’s very clear in your book ‘Healing’. So, we’re obviously talking about technology here today. There was a beautiful chapter on your vision around a tech-enabled future for mental health care delivery and the role it plays. And you’re saying it’s not going to be the panacea, obviously, but it can enable things. I wonder if I could read you an excerpt you mentioned from Carl Sagan, from his Natural History article written in 1975. He says, “No such computer program is adequate for psychiatric use today, but the same can be remarked about some human psychotherapists. In a period when more and more people in our society seem to be in need of psychiatric counseling, and when time-sharing of computers is widespread, I can imagine the development of a network of computer psychotherapeutic terminals, something like arrays of large telephone booths, in which, for a few dollars a session, we would be able to talk with an attentive, tested, and largely non-directive psychotherapist.” So that was written in 1975, nearly 50 years ago. How close do you think we are today? How on the money was Carl Sagan in terms of a vision there?
Tom Insel
Well, a lot has happened recently. And here we are, at the end of 2023. If you look back 10 years, we were just at the very beginning of what I think history will see as a turning point for the way we think about interventions in mental health. I’m really interested in what machine learning and AI can do. But I don’t know that the Carl Sagan vision, the idea that we will have a vast network of intelligent bots that can serve as psychotherapists, is where I would start.
What excites me about the technology we have now is the way it’s able to improve what humans do. I’ve had several kinds of entries into this, but at the company that I’m in now, Vanna Health, we’re taking on very complicated patients, people who have been ill for many, many years and have had many hospitalizations. Just the ability to look at a vast medical record and reduce it to a 300- or 400-word summary saves hours of work; it’s just remarkable. And to be able to do that pretty accurately within seconds, it’s a commodity, it’s not complicated. That’s great. The idea that when you interview someone, whether you’re a caseworker or a psychiatrist, you can have a digital scribe that captures that interview with fidelity, and then has a bunch of tools that allow you to assess the sentiment, the coherence, a whole range of different aspects of that interview, including the quality of the rapport and the therapeutic alliance, that’s really a commodity as well. I mean, it’s pretty amazing to realize where we’re at. So my excitement about a lot of this stuff is actually a little bit prosaic, but it’s really important: take away the things that are getting in the way right now, and allow us to take the current workforce and make them much, much better. That’s not pie-in-the-sky fantasy; that’s very doable with the tools we have right now.
Alison Darcy
That’s so exciting. There’s no clinician that went to school to fill out the EMR or to write letters to the insurance companies to convince them to cover a patient. You know, that’s not why you train clinically, right? It’s to be present with the patient. I love that.
Tom Insel
I have to say, Alison, we have to be really careful here, because we had these same conversations 20 and 30 years ago, with the EHR coming into primary care and into medicine. And it’s been the worst thing; all of the digital transformation of healthcare has destroyed the experience of providers in healthcare.
It’s created a barrier not a facilitator. It’s actually created this thing that is literally between you and your patient.
So it’s kind of a wonderful idea that you take technology to fix the problem that technology created. And that’s what I’m hoping we can do here. We’re seeing the first glimpses of that already, in the kinds of detailed digital scribes that are already being adopted. They’re not perfect, they need some work, but that’s the kind of promise I think we can begin to look at. I was just thinking about this, because my wife went in for an MRI last week, one of these high-strength, dye-enhanced MRIs, because she had some vascular problems. We have this wonderful system of health care through Stanford where we get access to all the medical records, and they provided a report of the MRI, but it was completely indecipherable. I mean, the report was written for a provider. It was nice that she had access to it, and I could read it and interpret it for her, but why should that be necessary? ChatGPT can do this in a nanosecond. Why not write that report in multiple versions: one for the payer, one for the provider, one for the patient, one for the patient’s family? We could do that in multiple languages as well. All of that capacity is there; we’re just not using it. And those are the kinds of things I think we should be doing immediately, where AI really is very, very good.
Alison Darcy
I was just hearing on the radio this morning, people talking about when they’re getting a difficult diagnosis, and your brain goes blank. And so people are trying to record or they bring someone with them to be in the appointment with them or try and write things down. But it’s a real problem. We know that when people are engaged in their own care, they can tend to do better. But it’s very hard to engage if there’s an awful lot of complicated feelings that are coming up, life changes that may need to happen. Do you think tech and AI, but tech broadly might be able to help there in meaningful ways?
Tom Insel
Absolutely. So we’ve covered sort of the first part of this in my own head, which is that some of it is just reporting: translations, documentation, capturing interviews. I think the next stage is actually providing more decision support for both patients and providers. So what could that look like? Well, it could take the interview and provide those insights that right now are still being developed. Whether you call those vocal biomarkers, or whether you think about this as just understanding the therapeutic alliance, those are really kind of exciting and useful things. It’s different when we get into the mental health space, because so much of what we’re talking about is just communication, and understanding, and being able to be empathic and to listen. So you can go much, much further with AI in our space than you would in gastroenterology or even in neurology. It gets really exciting. And this is kind of going back to the original Carl Sagan idea: could we actually get to a point where we can develop a tech-enabled psychotherapy system? And there, I think, we’re already doing this in lots of ways. You’ll see the beginnings of it in training and helping new therapists, by having very smart avatars that can give them the experience of seeing patients in a way that gives you all the complications and lots of experiences and challenges. So that’s kind of an easy one that I think will be fun to develop and see. I guess the real question is, and it’s kind of the Woebot question: is it a tech-enabled provider treatment, or a human-enabled tech treatment, where the tech is taking on most of the heavy lifting? I think that’s what we’re learning and trying to figure out: how is that going to work? Where will it work, and where will it not work? I just don’t know yet that we’ve got that story. I think it’s still being written. But it’s super interesting to think about.
Alison Darcy
Are there any innovations on the horizon that you’re particularly excited about? Or conversely, are there any innovations on the horizon that you think, oh, no, that’s never going to work? That we should never even delve into?
Tom Insel
So I tend to be an optimist. I tend to be in that first camp, excited about things. I do think the idea of vocal biomarkers is pretty exciting. You know, looping back to this conversation we’ve been having already about how to feel about all of this. And you know, are we stepping into something we’ll regret? I’ve been to several of these panels and conversations and salons and meetings around this topic. And it’s just been fascinating to me, because they always start in the same way, which is, people who are well meaning but not in health care, saying I’m so concerned about the ways in which AI will hallucinate and destroy the fidelity of good care. Those are people who don’t really know much about what health care looks like. So to me, in every one of these meetings when this happens, the people who actually are in the trenches are always saying, compared to what? Look at what happens right now, today, right? And if you think this is the ideal, if you think this is as good as it can get, that is really a pathetic perspective. So, I do think that we have to keep that in mind that we’re trying, you know, as Tom Friedman likes to say, perfect is not on the menu here. That’s not what we’re talking about. It’s how do we improve over the really dire quality of care that exists right now for most people? And can we do better than that? And I do think we can.
Alison Darcy
That’s right. I think one of the bigger risks is that sort of narrative; the problem is that it starts to undermine public confidence in the ability of these tools to help. Everything is tarred with the same brush. And actually, a lot of it comes down to the nuance of design and thoughtful teams and leadership. What is the problem that any tech or any service is trying to solve? How well does it do that? What’s the risk-benefit ratio, rather than, how good is a technology in and of itself? I believe that you have to look at what you’re trying to do: what is the intended use, how good is it at delivering on that, and what’s the risk-benefit ratio of that?
Tom Insel
Have you done that? So, like, at Woebot? I mean, this is sort of right in your wheelhouse; you’ve had to be very clear about what it’s for, and what it’s not for.
Alison Darcy
Exactly, and it’s constant. It’s constantly communicating that, and over-communicating that. And then, obviously, we have a full product research program, and we publish literature to communicate those things, to show both an openness to understanding where the limitations may be and where the benefits are, but also to actively explore those things such that you can feed that insight back into the product development process. And so I agree with you; I feel like, fundamentally, science is the only process that enables us to push the needle forward in a responsible, predictable way. Just in the last year, we have seen this tremendous advancement in the technology tools that are available to us. The question is, how are we going to apply them in care? Where are the pitfalls? For us, we have a specific way that we look at that, which is: what is the benefit or risk to the person? How are they experiencing delivery of care in this way? What kind of care can you deliver? Even if it’s just a nugget here and there, what does that look like? And where should we not be veering? So I think it’s shifting sands for us, and it’s trying to be really good at communication.
Tom Insel
I really like that. And I think it is clear that that’s the way this field has to develop, step by step. It’s not going to be the answer for all problems, but there are definitely issues where it can help. And we should be really clear that the current state of mental healthcare is not great. There are just really important issues that we haven’t addressed with the current tools that we’ve got, perhaps the most obvious being that most people who could and should be in care don’t get it.
The other thing I would say, and this is kind of where you were going, is that when people do get care, we know the quality of that care is not great. For many, many people, it’s not only that care is delayed, but what they get is not in any way a reflection of where the science is. So to have something that at least is based on the evidence we have about what works is really important. I wouldn’t say this so much in other areas of medicine; I think in most of medicine, we have a more scientifically based set of interventions. But when you go out into the world and you ask what people are getting when they reach out for a therapist, it’s usually what the therapist wants to do, not necessarily what the patient needs. And I think we have to get honest about that at some point, and make it clear that this is a field where we need to raise the bar for quality. You’ll hear a lot in the conversations now about technology and about startups that it’s all about access, that we’re improving access. But if it’s access to crap, that’s not progress. You want to make sure that you’re not only democratizing care but also improving the quality of what people are receiving. That’s a place where having standards and having the technology can help: having measurement and beginning to look at outcomes. Ultimately, even reimbursing for outcomes becomes really the way that progress can happen.
Alison Darcy
I was just talking about this this morning, about the variation-in-quality issue. You would never have the case where an oncologist can just do what they feel is right with the patient. You would obviously want them to do the latest evidence-based treatment, the one most likely to help you given your specific presentation. And we can’t seem to get there with mental health care. I think less than 10% of clinical psychologists practice measurement-based care, and about 20% of psychiatrists do, even though it can improve outcomes by about 50%.
Tom Insel
The current state of play ain’t great. It’s not really where we want to be. And it does frustrate me a little bit, going back to curious and furious that so much of our field is resistant to change. They really are so hostile to the idea that it could be better. And I think in some ways, they don’t want to know, they don’t really want to know what’s going on. But it’s going to be important for policy, for parity, for reimbursement, for the growth of the field and for the survival of the field, that ultimately it becomes accountable, and there’s not been accountability in this field whatsoever. I’m always kind of struggling with this because, you know, in medicine, we do have accountability. People are held to account for results and it does matter. You get thrown out of a hospital if you’re not following surgical procedures, you get cited if you’re a physician who writes a prescription for Oxycontin when you shouldn’t. There are a whole bunch of things that we have built into general medical practice, there’s a certain standard of care that we just don’t have in the mental health space. I don’t think that we want to necessarily adopt a medical model here but we are going to have to figure out a way to improve the quality of care and to have some kind of standards of fidelity.
Alison Darcy
Do you think that the failure to consistently measure outcomes has actually hampered innovation in the field?
Tom Insel
I just think there’s a polarity here, and people become absolutist about this. We have to hold two things in our heads at the same time. One is that there’s a scientific foundation that says certain things work better than other things, and we ought to be trying to deliver the things that are evidence-based. I’m not big on that term, but that is the term of art. The second thing we have to hold in our heads is that the evidence is pretty good that the therapist is more important than the therapy, and that the relationship really does matter. It’s the relationship that allows change in many respects. So these are not necessarily opposite conclusions; ideally, you want both to go together. And for our conversation, I think the kind of weird and fascinating question is: can that relationship be tech-enabled in some way? We just published a paper called ‘Can Digital Make Therapy More Human?’ And that’s the question: is there some way in which, going forward, particularly for digital natives, having a bot in the loop is going to be helpful and allow the relationship between two humans to become deeper, more honest, and more helpful? At the same time, I guess what I worry about is perpetuating a world in which people go to therapy to have a paid friend, and it’s a treatment for loneliness that never ends. That gets us into the whole bind of, so why should insurance pay for that? Why would we cover three years of handholding for somebody who doesn’t want to actually move on in their lives? And obviously, the incentives are not there for the therapists; there’s not a lot of incentive to end it. For the patient, there’s not a lot of incentive to end it either. So those are the kinds of places where maybe having a bot in the loop instead of a human in the loop helps to provide some guardrails and some standards.
Alison Darcy
I totally agree. I don’t know if you saw a paper we published a few years ago, but it showed that 36,000+ of our Woebot users filled out the working alliance inventory and had scores on bond with Woebot that were in the human range. And we’re scoring that after just three to five days of an initial conversation; that’s when we first administered the measure. I think people often mistakenly assume we’re trying to replace humans. Again, it’s not about that; it’s about creating the necessary condition for change, which, of course, is trust, and a non-judgmental stance, and relationship and rapport. And being able to do that quickly will hopefully enact change quickly.
Tom Insel
One of the things I thought about, I was playing with Woebot last night actually, and thinking about something we talk about a lot, particularly in the public mental health space, in the world of serious mental illness and marginalized populations: for so many people, the world of mental health care is just not their world. As an example, here in California, there are 10 million children; 60% of them are on Medi-Cal, which is our Medicaid program in California, and 81% of those kids are kids of color. Very, very few of those families, and very few of those kids, actually ever get any kind of mental health care, although we know a great many of them need it. And I’m just beginning to think, could we create versions of Woebot that are really tuned for different populations? That have the language, the smarts, the kind of cultural awareness that, frankly, doesn’t exist in our current provider population, which is not, you know, 80-81% people of color; it’s like 20% people of color. So is that something you’ve thought about?
Alison Darcy
Under Dr. Athena Robinson’s leadership, we have recruited incredibly diverse samples into our studies, and we consistently show that people of color actually have the highest bond with Woebot and are among the most efficient users. Which is really amazing, and it makes sense when you consider how Woebot shows up as a peer, respectfully, and these are often disenfranchised groups of people, right? People who haven’t had great experiences in the healthcare system. But it’s also wonderful to see because these groups have not traditionally been recruited into studies. So we’re very excited about Woebot’s potential to engage populations traditionally thought of as difficult to engage. There’s certainly something interesting there.
Tom, in your book ‘Healing’, you spoke about a vision for recovery, including people, place and purpose. I wonder if you could speak a little bit about that.
Tom Insel
Oh, yeah, thanks. This really tracks back to a concern I had about the field. Again, it’s sort of adopted a medical model, but not in a smart way. In medicine, our model is basically infectious disease: you look for a simple bug, you find a simple drug, you write a prescription, and the problem is over. That just doesn’t work for us here. These are far more complicated problems, and they require more than just a prescription or, you know, a brilliant interpretation, something Hollywood loves but that doesn’t actually work in real life. It’s all about a process. And I used to think that that process, which many people call recovery, was too vague, and I wasn’t really a believer that it was something achievable for a lot of people. But when I was working on the book, I went all around the world talking to smart people.
And I was with this street psychiatrist in Los Angeles on Skid Row, and I was asking him about this. He said, it’s not that complicated; recovery is just the three P’s. And I thought, you’ve got Paxil, Prozac. What’s the third P? Could it be psychotherapy? He said, no, no, no, no, it’s People, Place, and Purpose. It’s social support. It’s having a safe environment. And it’s having a reason, a mission, something that you recover for, something you deeply care about. And I realized, gosh, we don’t talk about any of that. There’s no CPT code, no reimbursable way of delivering the three P’s. And yet, that’s exactly what people with mental illness need. It’s what we all need, right? Social support, a good environment, and a mission. We’re really running healthcare as a business, and we’ve dropped that out of the business model.
Alison Darcy
Well, that is fantastic, and I think a beautiful conclusion. I was going to ask you what we as a field should be talking about that we’re not, but I think you just answered it: this holistic approach to enabling people to get on a path toward recovery, and the changes the field would really benefit from making. I’m delighted to hear that a lot of that is fairly straightforward and well understood. Anything else that you’d like to conclude with?
Tom Insel
Well, I wrote about this in a recent paper. We have to understand that, in terms of technology, we’re at the very beginning of something really interesting. I know people wring their hands and think there’s too much hype, overstating what it can do, and of course that’s true. But as I like to say, this is the first act of a five-act play, and we are just finishing that first act. At the end of the first act, we know who some of the characters are and what the plot is going to look like. But we have so much to do. We have to develop the regulatory framework for this field, the guidelines for quality assurance. We have to figure out how to serve people who are currently not being served at all, the people who are in the deep end of the pool because they have really serious illnesses. And we have to actually demonstrate that all of this works in the way we want it to. We’ve got four more acts to go. I just hope people don’t give up, because this isn’t perfect this early in the game.
Alison Darcy
Me too. And well, thank you for leading the way. And thank you so much for your time today. This was such a wonderful conversation. I’m so honored that you’ve shared your time with us.
Tom Insel
Thanks, Ali. Really a pleasure.
Thomas lnsel, MD, a psychiatrist, neuroscientist, entrepreneur and author, is a national leader in mental health research, policy, and technology. For 13 years, Dr. Insel served as Director of the National Institute of Mental Health (NIMH). He has also co-founded numerous startups, including Mindstrong Health and Humanest Care, and sits on the board of Steinberg Institute, Fountain House, Foundation for NIH, among others.
This conversation stems from deep expertise in the mental health field, from both Dr. Insel and our founder and president, Alison Darcy, PhD. Join us for an in-depth discussion about the future of mental health care and a vision for necessary change in 2024 and beyond.
- Standards of fidelity are key to improving the quality of care patients receive
- Technology can serve to enhance human capabilities, offering more nuanced insights and support for patients and providers
- Science, curiosity and passion all intersect as driving forces for change in mental healthcare
(edited for clarity)
Alison Darcy
Thank you so much for your time. I can imagine a lot of people want to talk to you. And I was so thrilled because, as I mentioned in the email, I cannot think of another person that has the breadth and depth of experience that you have with mental health and mental illness. And from personal experience and family experience, which many of us have. Being a psychiatrist and neuroscientist, researcher, head of National Institutes of Mental Health, startup entrepreneur, founder. And then of course, at government level as the California czar for mental health. How do those perspectives fit together? I’m so intrigued, as you went through your career, and you just have such a beautiful explanation in your book, but how do they work with each other? Is there any one perspective that you keep coming back to that sort of anchors the rest? Or informs the rest? Or are they just sort of mutually beneficial?
Tom Insel
It’s a question that partly just reflects my age, I’ve been around a long time. And I’ve been doing this in one form or another for five decades. And I think most of it comes out of just the passion for individuals that I’ve known, either in my family or close friends that struggle with some aspect of mental illness. And then just for me, just a curiosity of trying to understand what is this about? So I think the grounding really comes out of that, out of that lived experience, the part that continues to drive me still is the mystery of it all, and how difficult it is to really understand what these disorders are about. I suppose the part of my career that I still keep coming back to is the science. At the end of the day, I think I’m fundamentally a scientist more than anything else. And so the idea that there is a truth there, something that we need to keep exploring, to understand better, is always a driving force. And I keep wondering, it’s a curiosity, especially as you age is a really good thing to hold onto. When I was about your age, I had a friend who was in her late 80s, and still very active, and we went out to dinner one night. And I said, How do you do it? I mean, how do you keep going? You’re still involved! She said, it’s just two things. She said I’m curious and I’m furious.—
Alison Darcy
That is fantastic. I love that. When I think about what people asked me, why I founded the company, I don’t know how to express this more eloquently. But it came from this discontent, like an annoyance that there’s something not quite right here. And then that coupled with the creativity, you know, the best way to complain is to make something which really is excellent for finding a company, but I couldn’t agree with you more. A curiosity is really interesting, because you could get burnt out. In this field, in lots of ways it is really hard. It is very complex. And I think that’s very clear in your book healing. So, we’re obviously talking about technology here today. There was a beautiful chapter on your vision around a tech-enabled future for mental health care delivery and the role it plays. And you’re saying it’s not going to be the panacea, obviously, but it can enable things. And I wonder if I could read you an excerpt you mentioned from Carl Sagan, his natural history written in 1975. He says, “no such computer program is adequate for psychiatric use today, but the same can be remarked about some human psychotherapists in a period when more and more people in our society seem to be in need of psychiatric counseling. And when timeshare in computers is widespread, I can imagine the development of a network of computer psychotherapeutic terminals, something like arrays of large telephone booths, in which for a few dollars a session, we would be able to talk with an attentive, tested and largely non directive, psychotherapist.” So that was written in 1975, 50 years ago. How close do you think we are today? Like how on the money was Carl Sagan in terms of a vision there?
Tom Insel
Well, a lot has happened recently. And here we are, at the end of 2023. If you look back 10 years, we were just at the very beginning of what I think history will see as a turning point in the way we think about interventions in mental health. I’m really interested in what machine learning and AI can do. I don’t know that the Carl Sagan vision, the idea that we will have a vast network of intelligent bots that can serve as psychotherapists, is where I would start.
What excites me about the technology we have now is the way it’s able to improve what humans do. I’ve had several kinds of entries into this, but my first glimpse of it came at the company I’m at now, Vanna Health. We’re taking on very complicated patients, people who have been ill for many, many years and have had many hospitalizations. Just the ability to look at a vast medical record and reduce it to a 300- or 400-word summary saves hours of work; it’s just remarkable. And to be able to do that pretty accurately within seconds: it’s a commodity, it’s not complicated. That’s great. Then there’s the idea that when you interview someone, whether you’re a caseworker or a psychiatrist, you can have a digital scribe that captures that interview with fidelity, and then has a bunch of tools that allow you to assess the sentiment, the coherence, a whole range of different aspects of that interview, including the quality of the rapport and the therapeutic alliance. That’s really a commodity as well. It’s pretty amazing to realize where we’re at. So my excitement about a lot of this stuff is actually a little bit prosaic, but it’s really important: take away the things that are getting in the way right now, and allow us to take the current workforce and make them much, much better. That’s not pie-in-the-sky fantasy; that’s very doable with the tools we have right now.
Alison Darcy
That’s so exciting. There’s no clinician that went to school to fill out the EMR or to write letters to the insurance companies to convince them to cover a patient. You know, that’s not why you train clinically, right? It’s to be present with the patient. I love that.
Tom Insel
I have to say, Alison, we have to be really careful here, because we were having the same conversations 20 and 30 years ago, when the EHR was coming into primary care and into medicine. And it’s been the worst thing; all of the digital transformation of healthcare has degraded the experience of providers in healthcare.
It’s created a barrier, not a facilitator. It’s actually created this thing that sits literally between you and your patient.
So it’s kind of a wonderful idea to take technology to fix the problem that technology created. And that’s what I’m hoping we can do here. We’re seeing the first glimpses of that already, in the kinds of detailed digital scribes that are already being adopted. They’re not perfect, they need some work, but that’s the kind of promise I think we can begin to look at. I was just thinking about this because my wife went in for an MRI last week, one of these high-strength, dye-enhanced MRIs, because she had some vascular problems. We have this wonderful system of health care through Stanford where we get access to all the medical records, and they provided a report of the MRI, but it was completely indecipherable. The report was written for a provider. It was nice that she had access to it, and I could read and interpret it for her, but why should that be necessary? ChatGPT can do this in a nanosecond. Why not write that report in multiple versions: one for the payer, one for the provider, one for the patient, one for the patient’s family? We could do it in multiple languages as well. So all of that capacity is there; we’re just not using it. And those are the kinds of things I think we should be doing immediately, where AI really is very, very good.
Alison Darcy
I was just hearing people on the radio this morning talking about getting a difficult diagnosis, and how your brain goes blank. People try to record the appointment, or bring someone with them, or try to write things down. But it’s a real problem. We know that when people are engaged in their own care, they tend to do better. But it’s very hard to engage when there’s an awful lot of complicated feelings coming up, and life changes that may need to happen. Do you think AI, and tech more broadly, might be able to help there in meaningful ways?
Tom Insel
Absolutely. So we’ve covered the first part of this in my own head, which is reporting, translation, documentation, capturing interviews. I think the next stage is actually providing more decision support for both patients and providers. What could that look like? Well, it could take the interview and provide those insights that right now are still being developed, whether you call those vocal biomarkers, or whether you think about it as just understanding the therapeutic alliance. Those are really exciting and useful things. It’s different when we get into the mental health space, because so much of what we’re talking about is just communication, and understanding, and being able to be empathic and to listen. So you can go much, much further with AI in our space than you would in gastroenterology or even in neurology. It gets really exciting. And this goes back to the original Carl Sagan idea: could we actually get to a point where we can develop a tech-enabled psychotherapy system? And there, I think, we’re already doing this in lots of ways. You’ll see the beginnings of it in training and helping new therapists, by having very smart avatars that can give them the experience of seeing patients in a way that presents all the complications, lots of experiences and challenges. So that’s kind of an easy one that I think will be fun to develop and see. I guess the real question, and it’s kind of the Woebot question, is: is it tech-enabled human treatment? Or is it human-enabled tech treatment, where the tech is taking on most of the heavy lifting? I think that’s what we’re learning and trying to figure out: how is that going to work? Where will it work, and where will it not? I just don’t know yet that we’ve got that story. I think it’s still being written. But it’s super interesting to think about.
Alison Darcy
Are there any innovations on the horizon that you’re particularly excited about? Or conversely, are there any innovations on the horizon that you think, oh no, that’s never going to work, that we should never even delve into?
Tom Insel
So I tend to be an optimist; I tend to be in that first camp, excited about things. I do think the idea of vocal biomarkers is pretty exciting. And looping back to the conversation we’ve been having about how to feel about all of this, and whether we’re stepping into something we’ll regret: I’ve been to several panels and conversations and salons and meetings around this topic, and it’s been fascinating to me, because they always start the same way, with people who are well-meaning but not in health care saying, I’m so concerned about the ways in which AI will hallucinate and destroy the fidelity of good care. Those are people who don’t really know much about what health care looks like. In every one of these meetings when this happens, the people who actually are in the trenches are always saying, compared to what? Look at what happens right now, today. If you think this is the ideal, if you think this is as good as it can get, that is really a pathetic perspective. So I do think we have to keep in mind that, as Tom Friedman likes to say, perfect is not on the menu here. That’s not what we’re talking about. It’s: how do we improve on the really dire quality of care that exists right now for most people? Can we do better than that? And I do think we can.
Alison Darcy
That’s right. I think one of the bigger risks is that that sort of narrative starts to undermine public confidence in the ability of these tools to help; everything gets tarred with the same brush. Actually, a lot of it comes down to the nuance of design and thoughtful teams and leadership. What is the problem that any tech or any service is trying to solve? How well does it do that? What’s the risk-benefit ratio? Rather than: how good is the technology in and of itself? I believe you have to look at what you’re trying to do. What is the intended use? How good is it at delivering on that? What’s the risk-benefit ratio of that?
Tom Insel
Have you done that with Woebot? I mean, this is sort of right in your wheelhouse; you’ve had to be very clear that it’s for this, and not for that.
Alison Darcy
Exactly, and it’s constant. It’s constantly communicating that, and over-communicating it. And then obviously we have a full product research program, and we publish literature to communicate those things, to show both an openness to understanding where the limitations and the benefits may be, and also to actively explore those things so that you can feed that insight back into the product development process. So I agree with you. I feel that, fundamentally, science is the only process that enables us to push the needle forward in a responsible, predictable way. Just in the last year, we have seen tremendous advancement in the technology tools available to us. The question is, how are we going to apply them in care? Where are the pitfalls? For us, we have a specific way of looking at that: what is the benefit or risk to the person? How are they experiencing delivery of care in this way? What kind of care can you deliver, even if it’s just a nugget here and there? What does that look like? And where should we not be veering? So it’s shifting sands for us, and it’s about trying to be really good at communication.
Tom Insel
I really like that. And I think it’s clear that that’s the way this field has to develop, step by step. It’s not going to be the answer for all problems, but there are definitely issues where it can help. And we should be really clear that the current state of mental healthcare is not great, and that there are really important issues we haven’t addressed with the current tools we’ve got, perhaps the most obvious being that most people who could and should be in care don’t get it.
The other thing I would say, and this is where you were going, is that when people do get care, we know the quality is not great. For many, many people, it’s not only delayed; what they get is not in any way a reflection of where the science is. So to have something that is at least based on the evidence we have about what works is really important. I wouldn’t say this so much in other areas of medicine; in most of medicine, we have a more scientifically based set of interventions. But when you go out into the world and ask what people are getting when they reach out for a therapist, it’s usually what the therapist wants to do, not necessarily what the patient needs. And I think we have to get honest about that at some point, and make it clear that this is a field where we need to raise the bar for quality. You’ll hear a lot in conversations now about technology and startups that it’s all about access, that we’re improving access. But if it’s access to crap, that’s not progress. You want to make sure that you’re not only democratizing care but also improving the quality of what people are receiving. That’s a place where having standards and having the technology can help: having measurement, beginning to look at outcomes, and ultimately even reimbursing for outcomes becomes the way that progress can happen.
Alison Darcy
I was just talking about this this morning, about the variation-in-quality issue. You would never have the case where an oncologist can just do what they feel is right with a patient; you would obviously want them to deliver the latest evidence-based treatment, the one most likely to help you given your specific presentation. And we can’t seem to get there with mental health care. I think less than 10% of clinical psychologists practice measurement-based care, and about 20% of psychiatrists do, even though it can improve outcomes by about 50%.
Tom Insel
The current state of play ain’t great. It’s not really where we want to be. And it does frustrate me a little bit, going back to curious and furious, that so much of our field is resistant to change. They really are hostile to the idea that it could be better. And I think in some ways they don’t want to know; they don’t really want to know what’s going on. But it’s going to be important for policy, for parity, for reimbursement, for the growth of the field, and for the survival of the field, that it ultimately becomes accountable. There’s been no accountability in this field whatsoever. I’m always struggling with this because, in medicine, we do have accountability. People are held to account for results, and it does matter. You get thrown out of a hospital if you’re not following surgical procedures; you get cited if you’re a physician who writes a prescription for OxyContin when you shouldn’t. There are a whole bunch of things we have built into general medical practice, a certain standard of care, that we just don’t have in the mental health space. I don’t think we necessarily want to adopt a medical model here, but we are going to have to figure out a way to improve the quality of care and to have some kind of standards of fidelity.
Alison Darcy
Do you think that the failure to consistently measure outcomes has actually hampered innovation in the field?
Tom Insel
I just think there’s a polarity here, and people become absolutist about it. We have to hold two things in our heads at the same time. One is that there’s a scientific foundation that says certain things work better than other things, and we ought to be trying to deliver the things that are evidence-based. I’m not big on that term, but that is the term of art. The second thing we have to hold in our heads is that the evidence is pretty good that the therapist is more important than the therapy, and that the relationship really does matter; it’s the relationship that allows change in many respects. These are not necessarily opposite conclusions; ideally, you want both to go together. And for our conversation, I think the kind of weird and fascinating question is: can that relationship be tech-enabled in some way? We just published a paper called ‘Can Digital Make Therapy More Human?’ And that’s the question: is there some way in which, going forward, particularly for digital natives, having a bot in the loop is going to be helpful and allow the relationship between two humans to become deeper, more honest, and more helpful? At the same time, I guess what I worry about is perpetuating a world in which people go to therapy to have a paid friend, and it’s a treatment for loneliness that never ends. That gets us into the whole bind of: why should insurance pay for that? Why would we cover three years of handholding for somebody who doesn’t actually want to move on in their lives? And obviously the incentives aren’t there: there’s not a lot of incentive for the therapist to end it, and for the patient, there’s not a lot of incentive to end it either. So those are the kinds of places where maybe having a bot in the loop, instead of a human in the loop, helps to provide some guardrails and some standards.
Alison Darcy
I totally agree. I don’t know if you saw a paper we published a few years ago, but it showed that 36,000+ of our Woebot users filled out the Working Alliance Inventory and had scores on bond with Woebot that were in the human range. And we were scoring that after just three to five days from the initial conversation; that’s when we first administered the measure. I think people often mistake this as us trying to replace humans. Again, it’s not about that; it’s about creating the necessary condition for change, which, of course, is trust. And a nonjudgmental stance, and relationship, and rapport. Being able to do that quickly will hopefully enable change quickly.
Tom Insel
One of the things I thought about, I was playing with Woebot last night actually, is something we talk about a lot, particularly in the public mental health space, in the world of serious mental illness and marginalized populations: for so many people, the world of mental health care is just not their world. As an example, here in California there are 10 million children, and 60% of them are on Medi-Cal, which is our Medicaid program. 81% of those kids are kids of color. And very, very few of those families, and very few of those kids, ever actually get any kind of mental health care, although we know a great many of them need it. I’m just beginning to think: could we create the kinds of Woebot variants that are really tuned for different populations? That have the language, the smarts, the kind of cultural awareness that, frankly, doesn’t exist in our current provider population, which is not 80-81% people of color; it’s more like 20%. Is that something you’ve thought about?
Alison Darcy
Under Dr. Athena Robinson’s leadership, we have recruited incredibly diverse samples into our studies, and we consistently show that people of color actually have the highest bond with Woebot and are among the most efficient users. Which is really amazing, but it makes sense when you consider how Woebot shows up: as a peer, respectful. And these are often disenfranchised groups of people, right? Groups that haven’t had great experiences in the healthcare system. It’s also wonderful to see because these groups have not traditionally been recruited into studies. So we’re very excited about Woebot’s potential to engage populations traditionally thought of as difficult to engage. There’s certainly something interesting there.
Tom, in your book ‘Healing’, you spoke about a vision for recovery, including people, place and purpose. I wonder if you could speak a little bit about that.
Tom Insel
Oh, yeah, thanks. This really tracks back to a concern I had about the field. Again, it’s sort of adopted a medical model, but not in a smart way. In medicine, our model is basically infectious disease: you look for a simple bug, you find a simple drug, you write a prescription, and the problem is over. That just doesn’t work here. These are far more complicated problems, and they require more than just a prescription or, you know, a brilliant interpretation, something Hollywood loves but that doesn’t actually work in real life. It’s all about a process. And I used to think that that process, which many people call recovery, was too vague, and I wasn’t really a believer that it was achievable for a lot of people. But when I was working on the book, I went all around the world talking to smart people.
And I was with this street psychiatrist in Los Angeles on Skid Row, and I was asking him about this. He said, it’s not that complicated. Recovery is just the three P’s. And I thought, you’ve got Paxil, Prozac. What’s the third P? Could it be psychotherapy? He said, no, no, no, no, it’s People, Place, and Purpose. It’s social support. It’s having a safe environment. And it’s having a reason, a mission, something that you recover for, something you deeply care about. And I realized, gosh, we don’t talk about any of that. There’s no CPT code, no reimbursable way of delivering the three P’s. And yet that’s exactly what people with mental illness need. It’s what we all need, right? Social support, a good environment, and a mission. We’re really running healthcare as a business, and we’ve dropped that out of the business model.
Alison Darcy
Well, that is fantastic, and I think a beautiful conclusion. I was going to ask you what we as a field should be talking about that we’re not, but I think you just answered it: this holistic approach to enabling people to get on a path toward recovery, and some changes the field would really, meaningfully benefit from. I’m delighted to hear that a lot of that is fairly straightforward and well understood. Anything else you’d like to conclude with?
Tom Insel
Well, I wrote about this in a recent paper. We have to understand that, in terms of technology, we’re at the very beginning of something really interesting. I know people wring their hands and think there’s too much hype, too much overstating of what it can do. And of course that’s true. But as I like to say, this is the first act of a five-act play, and we are just finishing that first act. At the end of the first act, we know who some of the characters are and what the plot is going to look like. But we have so much to do. We have to develop the regulatory framework for this field, the guidelines for quality assurance. We have to figure out how to serve people who are currently not being served at all, the people in the deep end of the pool, because they have really serious illnesses. And we have to actually demonstrate that all of this works the way we want it to. We’ve got four more acts to go. I just hope people don’t give up, because it isn’t perfect this early in the game.
Alison Darcy
Me too. And well, thank you for leading the way. And thank you so much for your time today. This was such a wonderful conversation. I’m so honored that you’ve shared your time with us.
Tom Insel
Thanks, Ali. Really a pleasure.