Quyen Ngo, PhD leads Hazelden Betty Ford's Butler Center for Research. She joins host Chris Hemphill to explain how to interrogate the research behind digital health solutions to determine if they are effective, equitable, and ethical.



Sign up to learn about future episodes of Meeting of the Minds

Key Points

  • Does it do what it says it does? Any research or piloting presented by the manufacturer needs to be focused on the platform or intervention you’re evaluating. A summary of the literature is not enough.
  • Examine who was included in the research. Does it include every subgroup of your population? If not, this may be an opportunity to work with the manufacturer on a pilot.
  • Research should be evaluated by those with clinical knowledge as well as experts in data analysis.
  • Consider how the tool will be used and adopted. Is it something your patients need and want? Will your staff use it, and will it fit into their workflow?
  • Potential partners should know and understand what regulations are required of providers. 

Watch the Full Episode

Watch a Highlight



Read the Transcript

C.H.: Welcome to Meeting of the Minds. I’m Chris Hemphill from Woebot Health. We’re going to be focusing on breaking past the hype that we see in digital health. So I’m joined today by Dr. Quyen Ngo from Hazelden Betty Ford.

There are over 55,000 digital health applications. You can look it up on Statista, 55,000. That's a real number. And they all purport to be the solution for things like remote monitoring, mental health, consumer experience, and a myriad of other subjects. So there's a great deal of hype out there, but there are also a great many solutions that can help with improving experience and outcomes. So the big question is: how do we know what's going to work and what's going to become the next Watson or Theranos?

I know you're all thinking, Chris, you work for Woebot, you work for a digital health company. So why should we even be here listening to you? Well, you don't have to take my word for it. Allow me to introduce Dr. Quyen Ngo. Dr. Ngo leads digital health research initiatives to suss out which applications are a good fit for the Hazelden Betty Ford Foundation.

Hazelden Betty Ford focuses on treatment and research for substance use and mental health. So this is a perfect walking of the line between mental health and digital health. She's the executive director of their Butler Center for Research and has a deep research background in clinical psychology, substance use disorder, intimate partner violence, and digital health interventions.

So straddling that line, digital health and mental health, we’re delving into how to know which solutions are effective, equitable, and ethical. So kicking us off, Dr. Ngo, what are you excited for our viewers to learn or change?

The conversation today will be about how we do the best for our patients and for the people we serve. I think it’s important to walk away with this sense that research doesn’t mean that everything’s going to move so slowly that you can’t be competitive or you can’t get things done, but you can responsibly do things, in a scientifically sound way, that both helps business and health care move forward.

But how do you choose? There are so many things out there as you said, so many things to choose from. And people come in with their slides, and it looks fancy, and it looks great. How do you know it’s the right thing for your organization and your customers or your patients?

C.H.: So that's a fantastic framework, Quyen, about research not being the blocker there, that it doesn't have to slow everything down to where business can't be done. There is the term 'pracademic,' which means taking the important research foundations of academia and the practical need to execute in the business world, and marrying the two.

Going over your background, what’s your day-to-day? What are you doing today in digital health and what’s motivated you down the path that you’ve taken?

I lead a team at the Butler Center for Research. We are the research arm of the Hazelden Betty Ford Foundation, an addiction treatment health system. We run clinical trials within our health system that can include everything from how our cells respond to certain things to how people behave.

We also do outcomes data collection for our organization. I work closely with the clinical team to make sure we collect good clinical data and use it well, so that we can keep refining what we do and keep doing the best for our patients.

I really love what I do. I came from academia, so I like to say that I’m a recovering academic, but I’ve just really loved the practical side of this and also being able to engage in lots of different kinds of research. I’m a clinician by training. And as much as I loved academia and there’s an incredibly important role for academia, I really wanted to have a more direct impact on patient care. I wanted to see more of that translational aspect of research to practice. This position has allowed me to do that, to really engage and build a team that is taking those principles of research and science into patient care.

So often we think of research as this ivory tower, and a lot of research isn't seen by people outside of academia. I really wanted my career to have an impact in taking that science, taking that research, and applying it in real-world settings.

We have to think about keeping the doors open. We must consider all the regulations that come along, especially in addiction treatment, with caring for our patients. That’s a different setting. And not that there aren’t regulations in academia–I want to be clear that human subjects research has a lot of regulations–but, here, it’s an intersection of that research and also patient care. My team and I have really filled this niche that’s been missing.

Two of us are clinicians as well as researchers. That matters because when you’re talking behavioral health and when you’re talking about clinical data, there’s an important aspect to understanding the implications of clinical data that you might not understand if you haven’t been a clinician who has been in the room and taken assessment data and translated that into some kind of treatment plan. If you haven’t had that experience, you don’t always know what the implications of the data you’re collecting might be.

And then, on the other side, if you’ve never dealt with data, statistical analysis, or research, you may be measuring things that don’t answer the questions you want to answer. You might be choosing variables that don’t get at what you’re trying to do because you haven’t had the training in that. So having a clinical hat, but also a research hat is something my team does really well. I love my team. I’m really proud of them.

Many folks do not have a clinical or research background, but they may be in charge of decisions that permeate through their companies. What are the key principles that they should be looking for and thinking about before engaging these digital health vendors?

You want to know that whatever you're getting is working and doing what it says it's going to do. To be able to determine that, there need to be some good studies, or at least some piloting that's happened, to give you that data.

Some vendors will come in with a slick slide deck, and then they might hand you a 100-page PDF or Word document that's filled with research. The difference is that it's not research on their platform or their intervention or what they're doing. It's a summary of the research literature. So you look at it and you're like, “I'm not reading this. But it looks good. There's research there. And so it must work.”

Look specifically and see if there have been actual studies done. It doesn’t have to be a randomized controlled trial. You just need something there to show that they’ve been thinking about how to gather this data to show that it works. I think that one of the most critical pieces is: have they been thinking about how you measure this? Have they collected the correct data to answer the question “Does my platform work?” 

A lot of times people come in with “People really like my platform,” which isn't an answer to whether your platform works. It needs to be more than just someone's happy with it. Someone would recommend it–that's great. It does give you information on whether people will use the platform. But you need to take a step beyond that and say, “Okay, they'll use it, but is it going to help them? Is it going to make things worse?” Some things have made things worse. It's really asking some of those questions and knowing what is really behind there to show that this platform is actually going to work.

And it's really not enough for a platform to be based on good research. You can have a platform that uses cognitive behavioral therapy (CBT) or contingency management, and because those are well-researched in the literature, there's good evidence that they're effective. But that doesn't mean that the way you're doing contingency management or CBT is effective.

You actually have to have the data on what you are doing in order to show that efficacy, because it doesn't matter if Susie Q out there at the University of Whatever showed that her contingency management and CBT program worked. You're not doing what she did.

Vendors will come in and say this is evidence-based. That implies that they did the research and they have the data to back that up. But a lot of times what I’ve seen is, no, they don’t actually have the data on their intervention. They can tell you about studies on someone else’s intervention that uses the same principles. It’s different. And it matters because you don’t know if they’re doing the same things the same way as this other intervention that was shown to be effective.

It has to be based on whether they have the data to back it up and whether they have run their own trials or tested it in their own scenarios.
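As a purely illustrative example of what "data on their own intervention" might look like, here is a minimal Python sketch, with a hypothetical file and hypothetical column names (phq9_baseline, phq9_week8, satisfaction), of the kind of pre/post pilot analysis you'd want to see reported alongside satisfaction scores rather than instead of them.

```python
# Minimal sketch of vetting a pilot dataset: did the outcome actually change,
# or do we only know that people liked the tool? File and column names are hypothetical.
import pandas as pd
from scipy import stats

pilot = pd.read_csv("vendor_pilot.csv")  # one row per participant

# Satisfaction tells you whether people will use it...
print(f"Mean satisfaction (1-5): {pilot['satisfaction'].mean():.2f}")

# ...but the efficacy question needs a pre/post clinical measure.
complete = pilot.dropna(subset=["phq9_baseline", "phq9_week8"])
t, p = stats.ttest_rel(complete["phq9_baseline"], complete["phq9_week8"])
change = (complete["phq9_week8"] - complete["phq9_baseline"]).mean()

print(f"n analyzed: {len(complete)} of {len(pilot)} enrolled")
print(f"Mean symptom change: {change:.1f} points (paired t = {t:.2f}, p = {p:.3f})")
```

The point isn't this particular test; it's that the vendor should be able to hand you something like this for their own platform, not a literature summary.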

It also matters who was included in the research to begin with. Has the approach only been researched in one population? This happens a lot with marginalized communities. They’ll take one intervention and just apply it to another community without doing the work to adapt it or show that it’s effective in that community. So you have no idea whether it’s going to work from adults to adolescents or from this community to that community. You really need to understand within your patient population or within your customers, does it work for them?

If they can't answer basic questions, like how it did with this segment of the population or how it performed for women versus men, that tells you something.

Now, those are great opportunities to partner with vendors. Say, “Hey, I don’t know that it’s effective with my people, but, maybe we can collect some data and see, one, if people like it. And then two, is it working for them? Is it helping them?”
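To make the subgroup question concrete, a rough sketch like the following (again with hypothetical data and column names) shows both how the outcome looks within each group and, just as importantly, how many people from each group were actually in the study.

```python
# Sketch: did the pilot include your subgroups, and did it work for them?
# Data file and column names are hypothetical.
import pandas as pd

pilot = pd.read_csv("vendor_pilot.csv")
pilot["improvement"] = pilot["phq9_baseline"] - pilot["phq9_week8"]

# Outcome and sample size broken out by subgroup.
by_group = (
    pilot.groupby(["gender", "age_band"])
    .agg(n=("improvement", "size"), mean_improvement=("improvement", "mean"))
)
print(by_group)

# Flag subgroups the study barely touched -- candidates for a joint pilot.
print("Thin or missing subgroups:")
print(by_group[by_group["n"] < 20])
```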

What if we delve into some examples where you’ve found something that worked? Could you talk about what that intervention was and what the vetting process was?

I can share one of the clinical trials that we have going on right now at Hazelden Betty Ford in partnership with Spark Biomedical. It’s a device, kind of like a TENS machine, those muscle stimulation devices. It’s called a Transcutaneous Auricular Neurostimulation Solution. 

It’s basically a sticker that sits behind your ear and has a piece that comes right on the outside of your ear. It stimulates specific nerves. This device is FDA-approved, so it’s already been through that evaluation. And it has some preliminary evidence already showing that it is effective in reducing withdrawal symptoms for opioid use disorders.

In addition to that, when we met with Spark Biomedical, they also provided the theory behind this device. The theory is important. Is it theory driven? As a researcher, you want to know:  is it theoretically and scientifically sound?

Then after all of that evaluation, is this something that our patients need? We have patients who, when they go through withdrawal or when they're detoxing, don't want to be on medication. But that's currently the best practice we have. It's effective. There's a lot of good research behind it, but not all patients want it. And so if we can offer someone a nonmedication alternative, will it help them in their recovery? Absolutely. So it's a need, and it's something that our patients have communicated to us that they want.

And then, our staff. They also have to feel like this is an important product that could help within our system. And with this, the staff were on board as well. 

So it’s the scientific setting, the efficacy or preliminary efficacy vetting, the safety vetting of that, and then also confirming with our patients and with our staff that it’s something that’s needed and wanted. 

The third piece of this is safety around the device and the study. So when we do human subjects research, I want to make sure that what we’re doing is safe for our patients. We’re not doing super experimental things on our patients.  We’re doing things that are grounded and have good evidence to them. And we need to make sure that our partners are following all of the rules, laws, and regulations, both for clinical research as well as for patient care.

There are lots of regulations from the Department of Health and Human Services in the U.S. for addiction treatment. And that’s on the treatment side, that’s the clinical side. So we want to make sure that our partners respect that. But also when you’re doing research, there are lots of regulations around human subjects and making sure that they’re safe, that we are protecting privacy, that we have anonymity there.

We want to make sure our partners are following that as well. So when we brought in this device, we had to go through all of those levels of evaluation and vetting and talking across our various departments to make sure that this made sense for us as an organization.

So you’re bringing in not just the research on the product itself, but also the understanding of how your culture might receive the intervention and how teams might be impacted and work with it.

That’s basic change management. You need buy-in from people. They need to believe in what you’re doing, and they need to want to do it as well. And if the organization is very much against it, then there is work to be done there, decisions to be made. But absolutely, it’s working across teams. 

Just because something is fancy doesn’t mean that you should be bringing it in just to look fancy. If it’s not actually going to be helpful or you can’t actually implement it as an organization, then that’s a waste of resources. And we need to be responsible stewards of the resources that we have. Everybody’s talking about how there are not enough people, there’s not enough time, and there’s not enough money. So if you’re throwing resources into something that doesn’t make sense for your organization, it doesn’t matter how fancy it is and it doesn’t matter if it’s effective.

What level would you trust the digital health tech company’s own analysis about effectiveness and outcomes versus an independent analysis of their digital tool?

Independent analysis is usually preferable. If it’s an internal analysis from the organization, I need to know what they did. I want to see what kind of data they collected, when they collected it, who collected it, and from whom it was collected.

Who did they ask? Is data missing? If there’s data missing, there are all sorts of skews and biases that can be introduced into the analysis. So that’s where it’s really important to have a researcher, preferably who’s also at least familiar with clinical work, who can look at the data and say, well, did they just filter out this program because it didn’t have good results? Did they just leave this variable out because it didn’t say what they wanted it to say? Really dig into that research and see what exactly they did.
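One way to "dig in and see what exactly they did" is to audit the missingness yourself and re-run the headline number without the convenient exclusions. A hypothetical sketch of that check:

```python
# Sketch: how much does the headline result depend on who was left out?
# File and column names are hypothetical; the comparison is the point, not the numbers.
import pandas as pd

data = pd.read_csv("vendor_analysis_export.csv")

# 1. Where is data missing, and for whom?
print(data.isna().mean().sort_values(ascending=False))  # share missing, per variable
print(data.groupby("site")["phq9_week8"].apply(lambda s: s.isna().mean()))  # per site

# 2. Complete-case estimate vs. a conservative "everyone enrolled" view,
#    where participants lost to follow-up are assumed to show no improvement.
data["improvement"] = data["phq9_baseline"] - data["phq9_week8"]
complete_case = data["improvement"].dropna().mean()
assume_no_change = data["improvement"].fillna(0).mean()

print(f"Improvement, completers only: {complete_case:.1f}")
print(f"Improvement, assuming dropouts did not improve: {assume_no_change:.1f}")
```

If those two numbers are far apart, you want a good explanation for what happened to the people who went missing.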

Let’s flip the focus from evaluation or purchasing to the product designers and digital health vendors out there. What considerations should inform their design decisions and what should they focus on to position themselves well for evaluation? 

Two pitfalls that I see vendors falling into: One is they haven’t spoken to enough providers. It’s not good enough to reach out to your friend who is in this industry. You need to talk to a lot of different providers and a lot of different types of organizations, different sized organizations.

Hazelden Betty Ford is a multi-site, large treatment system. There are some smaller treatment systems that function operationally really differently. There are also different kinds of providers. We have specific therapeutic approaches. Other organizations may use different kinds of therapeutic approaches. So you need a broad sample when you're talking about building a tool for providers.

But a lot of times I see one of two things happen. One is someone who is really passionate and excited builds what's in their mind, thinking everyone's going to love it. The other is going in having spoken to just one person. That person might be really smart and have really great insights, but they work in a really specific system. They work in a really specific way, and it doesn't work for everybody. Then you have to do all of this customization to integrate.

Also, when you’re talking health and behavioral health, do not ignore compliance and security, because the other pitfall I’ve seen is vendors coming in who have a really great product, but it doesn’t meet the data security and compliance needs that organizations are required to have. 

Then there's frustration there because the vendor is like, “What do you mean you need this or that?” And the providers are like, “We are legally obligated to this. We have to meet these regulations.”

So really understand and know what regulations are required, and remember that it's different by state: you have the federal regulations, and then you have state regulations on top of those. Understanding and knowing what is required of providers in that industry is critical.

When something is not up to snuff from a regulatory perspective or the research finds that the effectiveness or the claims don’t add up–that leads to organizational tension. But there are people who may see this project as their baby and they want it to go through. How do you have that conversation? How do you navigate having to say no to things that people are excited about?

We use an implementation science approach. Implementation science is a whole field of study for that sort of translation. How do you take something that is theoretical and may have evidence of efficacy behind it and take it out of the ideal research environment and put it into the real world where you have to deal with staffing and budgets and compliance and regulations?

The whole point of implementation science is how you translate these things into the real world. So it’s agile, it’s adaptive. What do we need to modify so that it makes sense in this setting? So implementation science is something that I think is really critical to understand, and it’s a really wonderful tool to apply to industry. 

The other piece of this is, just because a tool isn't meant for the treatment of our patients doesn't mean that it's not usable in our system. Maybe there's a platform that has a piece of it that our organization needs, like a payment system that would smooth things out for us or make things easier. That's something that we could use. It has nothing to do with efficacy in the treatment of our patients, but there's a tool there that could be applicable. Maybe it has a wealth of information that our clinicians could use and apply therapeutically. You have to think creatively and strategically within your organization.

This especially applies if someone who's higher up in the hierarchy of the organization really loves a tool. Those can be tricky conversations to have. It's really about identifying what other piece would be helpful. Maybe it's not specifically for our residential program; maybe we can use it for outpatient. Maybe we can use it for intensive outpatient, or maybe we can use it for long-term recovery when we're talking about addiction treatment.

There are ways to think about these tools. And this is where vendors and providers can work closely together to say, okay, maybe it doesn’t work in this original intent, but what other ways can we work together? Both organizations can benefit and both organizations can grow. You’ve got that mutually beneficial interaction.

So when you’re evaluating tools it can be helpful to talk to a lot of different teams, talk to a lot of different departments. Say, “Hey, this company came in, they showed me this thing. And it’s interesting. I don’t think it would work for this, but what are your thoughts on it? Could you see it playing a role anywhere in your space or in a space that’s adjacent to you?” You can socialize it informally, and you come up with some really great ideas that way.

Let’s say that you have an intervention in place and you’re trying to understand how it affects the patient experience. Are we focused on satisfaction surveys or are there other types of analysis that you’re doing, after the fact, after the intervention has been put in place to understand how it’s impacted that patient experience?

I think that is a question for a lot of organizations that provide care. And it’s never too late to collect that data and take a look at what you’re doing right. Part of what we do at Butler Center is support organizations to do that. We’ll work with an organization and look at what data they collect from tip to tail. So from the first point of contact all the way through to the end, where are you collecting data? What data are you collecting? And then we look at that and we say, okay, but what kind of questions are you trying to answer?  Are you trying to negotiate a higher reimbursement rate with your payer? Are you trying to convince family members that they want to be involved? What is the question there? And so then we see if there’s a match. Are you collecting what you need to answer those questions? And if you aren’t, then we make recommendations.

It's really important to make sure that you're aligning the information you're collecting with the question that you're trying to answer. People are inundated with surveys. Get $5 for answering these ten questions, or the texted questions you get after you call in for a customer service issue. People are inundated with requests for their information. So if you're going to ask, make sure that what you're asking counts. Can you use that data for a couple of different purposes? Are you asking for data that will answer questions for a couple of different stakeholders, so you can use that data to respond to your payers and to families and to patients? You want to make sure that that's streamlined and that it's short. It doesn't have to be a three-hour assessment. You want to get in, get out, and make sure that you're being really respectful of people's time. But it's never too late to collect that data and to see what aspects of your program are working.

To do that, you have to understand the theory behind the mechanism of change for your program. The mechanism of change is the thing that you think your program provides that motivates people to change. So what is it about your program that you think will encourage people to change? You have to understand that before you start collecting your data, because if you don't know why you think your program works, you're not going to be able to answer whether or not your program works.
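One lightweight way to do the tip-to-tail check described above is to write the stakeholder questions down next to the measures that would answer them, then compare that list against what is actually being collected. A toy sketch, with made-up question and measure names:

```python
# Toy sketch: do the measures we actually collect answer the questions we care about?
# Questions, measure names, and the collected list are all made up for illustration.
questions_to_measures = {
    "Can we justify a higher reimbursement rate?": {"phq9_week8", "readmission_90d"},
    "Should families be involved in treatment?":   {"family_session_count", "phq9_week8"},
    "Is the program retaining patients?":          {"days_in_program", "discharge_type"},
}

collected = {"phq9_week8", "satisfaction", "days_in_program", "discharge_type"}

for question, needed in questions_to_measures.items():
    missing = needed - collected
    status = "OK" if not missing else f"missing: {', '.join(sorted(missing))}"
    print(f"{question} -> {status}")
```

Anything that shows up as missing is either a gap in data collection or a question you can't honestly answer yet.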

Sometimes a digital health intervention might have surveys or questions that it’s asking within the system. Was this helpful? Was this satisfactory? Do you feel better? What’s been your experience with that data? And are there some hard questions that you ask when you see reporting based on what people are answering in the app experience itself?

I think that’s important data to have. Billions of dollars go into app development and I think at one point someone quoted some statistic or study to me that people will use an app on average for two weeks maybe if you’re lucky, sometimes just like three or four days. So you’ve got a few days to convince somebody that they should integrate your app or your tool into their lives. And if we think about our lives, they’re complicated. Any new thing can throw off an entire workflow or an entire system that someone’s established. So when you’re thinking about that data, it is important to know if someone’s going to use it.   

But you can’t stop there. The point is, don’t just stop there. You also want to know if it’s actually going to help them, if it’s actually doing what it says it’s going to do. So it’s not that that data is not good or not useful, it’s just that that shouldn’t be the only data that you’re evaluating. 

If the data that you’re getting is coming from the app, then keep in mind things like survivorship bias. If it requires reporting within the app, that requires them to still be using the app. So there might be a difference in the population that stayed past that two-week, three-week period versus the people that had the initial response.

Those are situations where you want to ask why people stopped. Even if people don't maintain their engagement, they're still an important source of information. If you can get them to answer some questions about why they stopped, what they didn't like, why they didn't come to Hazelden Betty Ford for treatment, that's valuable. It's really important to ask those questions because that's how you get better. That's how you improve, that's how you continue to refine. Look at the things that didn't work.
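A rough sketch of that survivorship check, again with hypothetical fields: compare the people still answering in-app surveys at week three with everyone who ever enrolled, and look at who the in-app data silently drops.

```python
# Sketch: in-app survey results only describe the people still using the app.
# File and field names are hypothetical.
import pandas as pd

users = pd.read_csv("app_users.csv")          # one row per enrolled user
users["retained_wk3"] = users["days_active"] >= 21

print(f"Share of enrollees still active at week 3: {users['retained_wk3'].mean():.0%}")

# In-app ratings come only from retained users; compare their baseline profile
# with everyone who enrolled to see how different the 'survivors' are.
cols = ["baseline_severity", "age", "prior_treatment"]
print(users.groupby("retained_wk3")[cols].mean())

# If you only report the retained group's ratings, say so explicitly.
print(f"Mean in-app helpfulness rating (retained users only): "
      f"{users.loc[users['retained_wk3'], 'helpfulness'].mean():.2f}")
```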

We ask this next question of every guest. If you could change one thing about the way health care is delivered in this country, what would that be?

My pie in the sky would be that access wouldn't be limited by finances, that everybody who needed care would get it. More practically speaking, I'd want health care to be more integrated: behavioral health care with medical health care and addiction treatment, with really connected continuity of care from one area to another, so that individuals could be treated as whole people, because all those different things affect people's overall well-being.

Addiction affects how well people comply with treatments for chronic diseases like diabetes. So it's impactful across different health conditions.

Well, hopefully, conversations like this lead to action. A big thank you. I definitely learned a lot from this conversation, and I appreciate you for joining us on Meeting of the Minds. For anybody who wants to reach out, what's the best way they can get in touch with you?

You can email me at butlerresearch@hazeldenbettyford.org, or you can call us at the Butler Center for Research.

As far as this conversation, we went in-depth on how to be more certain whether digital health solutions might make sense for your population. But the big question is, as a healthcare leader, what are the steps you take to put this into action? We actually had a conversation recently with Dr. Tarun Kapoor, who is the Senior Vice President and Chief Digital Transformation Officer at Virtua Health on his three-pillar system for implementing digital health innovation. He provides great advice on taking your organization’s culture into account when readying various tech interventions.  


