Engagement with digital mental health interventions remains poorly understood


With recognition of increasing demands on services, health care providers might point us towards a ‘Digital Mental Health Intervention’ (DMHI) on our phones or online through a computer. Many of us already turn to our phones and apps for support and advice for our mental health, including from generic chatbots and informal discussion groups, as well as more traditionally from friends and family. So a DMHI sounds like it should offer something more: perhaps more reliable, more effective, and safer (as suggested in this 2022 blog for Mental Elf).

But what can we expect from digital mental health interventions?

Research suggests that while a DMHI might demonstrate positive results in a research trial, we can't assume the same in real-world situations. A key factor is engagement, as mentioned in this Mental Elf blog about the barriers and facilitators. For example, one article suggests that people are more than four times more likely to use a DMHI during research than in real-world settings (Baumel et al, 2019). That makes sense anecdotally. Participants in research trials have different motivations from everyday users. They may persevere where we'd give up. But if we don't continue to use the DMHI, it's not going to have the effect shown in the trials.

Four recent reviews have focussed on this one factor of engagement, and it seems useful to consider them here together.

Digital mental health interventions are widely touted as a treatment solution, but real-world usage may vary considerably from usage in research.

Methods

The four reviews used different methods, as summarised below.

Each of the studies included a review of the literature, although with slightly different questions. Liu et al (2026) updated an existing review, and the others all did a systematic literature review. Liu et al (2026) and Zainal et al (2025) both completed meta-analyses. Eisner et al (2025) used a best-fit framework synthesis based on the Consolidated Framework for Implementation Research (CFIR) (Damschroder et al, 2022). Smith et al (2025) brought together a consensus group of 10 people and an expert group of 10 people.

Eisner (2025). Barriers and Facilitators of User Engagement With Digital Mental Health Interventions for People With Psychosis or Bipolar Disorder
  • Method: Systematic review and best-fit framework synthesis (using CFIR)
  • Dates of the literature search: January 2010 – October 2021
  • Population: People with psychosis or bipolar disorder

Liu (2026). Uptake, Adherence, and Attrition in Clinical Trials of Depression and Anxiety Apps: A Systematic Review and Meta-Analysis
  • Method: Update of an existing review, then meta-analyses
  • Dates of the literature search: existing 2024 review of RCTs (to January 2024), with an updated search January 2024 – May 2025
  • Population: People with depression and/or anxiety

Zainal (2025). What factors are related to engagement with digital mental health interventions (DMHIs)? A meta-analysis of 117 trials
  • Method: Systematic literature review, meta-analysis and meta-regression analysis
  • Dates of the literature search: January 1989 – December 2024
  • Population: People with at least one symptom of a common or serious mental disorder

Smith (2025). Engagement and attrition in digital mental health: Current challenges and potential solutions
  • Method: Consensus group (n=10) and expert group (n=10), plus systematic literature review on PubMed from inception
  • Dates of the literature search: performed in August 2024
  • Population: MeSH terms 'Mental health' or 'psychiatry and psychology category'

Results

The four papers each acknowledge the importance of considering engagement in DMHI. Each approaches it differently, although all were based on a review of relevant literature as described above.

Eisner et al (2025)

  • Included 175 papers related to 150 qualitative and quantitative studies with 11,446 participants.
  • Studies were of various methods including qualitative interviews as well as RCTs.
  • Related to usage by people with schizophrenia spectrum psychosis (in 65.3% of studies) and bipolar disorder (41.3%).

Using the CFIR framework, they found that factors that facilitated engagement were:

  • A strong recognition of the relative advantages
  • A clear link between the intervention and patient needs
  • A low-effort digital interface
  • Human-supported delivery
  • Provision of devices.

The barriers were:

  • Complex interventions
  • Perceived risks
  • User motivation
  • Discomfort with self-reflection
  • Digital poverty
  • Symptoms of psychosis
  • Poor compatibility with existing clinical workflows
  • Staff and patient fears about loss of traditional care
  • Limited infrastructure and financial support.

Liu et al (2026)

  • Considered 79 clinical trials addressing uptake, adherence and attrition.
  • Uptake was defined as initial use of the app. They acknowledged that uptake was high (92.4%) because the studies were trials rather than real-world usage.
  • Adherence was reported in only 20 trials, but with varied definitions. Thirteen studies used a definition of completion of all modules, which gave a pooled rate of 58.7% for adherence.
  • Adherence was higher where there were clear instructions and the apps had personalisation and symptom monitoring.
  • Attrition was defined as the failure to complete outcome assessments and was 18.6% at post-test and 28.4% at follow-up.
  • Attrition was lower in trials where there were reminders, human contact, and no gamification.

Zainal et al (2025)

  • Acknowledge and highlight inconsistent definitions of engagement, which can include uptake (the initial enrolment or treatment initiation), usage (extent of DMHI use regardless of completion), and completion (finishing all modules).
  • In 117 reports of 117 trials, only two of the papers reported on all three of these elements. Sixty-nine studies only reported usage, two only reported uptake, and twenty-nine only reported completion.
  • Their analysis suggests that positive correlates of engagement were being a woman (women engaged more than men), past mental health problems, guided delivery, therapeutic relationship, and positive expectancy.

Smith et al (2025)

Used a consensus development panel approach, with discussions taking place over two days. They identified three broad areas of challenge in understanding engagement:

  1. lack of universally agreed definitions of metrics related to engagement,
  2. lack of evidence of how or whether improved engagement improves outcomes, and
  3. user involvement in developing and delivering digital health interventions.
Each of the studies looked at engagement in different ways, but all identified factors that facilitate engagement.

Conclusions

Across the studies, the conclusions suggest that while we might have some ideas about how to enhance engagement with DMHI, researchers continue to be unsure how to measure it.

Liu et al (2026) suggest that uptake, attrition and adherence are needed together to provide a benchmark for engagement in clinical trials. But Zainal et al (2025) noted the inconsistent definitions across studies, with varied measures of uptake, usage and completion. Their paper concluded that people with psychosocial resources and structured daily routines were more likely to engage with DMHI. They suggest clinicians might also consider guided DMHIs as particularly useful for people who require more accountability, structure and support.

Eisner et al (2025), using the CFIR framework, concluded that:

  • DMHI should meet specific needs and not be a replacement for human care.
  • Human support can overcome engagement barriers.
  • DMHIs need to be simple and low effort.
  • Financial support is required for development, maintenance and implementation.

Usefully, Smith et al (2025), with their consensus meeting, offer potential solutions to the challenges:

  • Definitions and terminology (standardisation of reporting of engagement in studies, and assessment of the appropriate ‘dose’ of an intervention).
  • Demonstrating efficacy and cost-effectiveness of effective engagement (research studies should be theory driven, and actively report engagement and outcomes).
  • User involvement and user-centred design (improve standards of involvement including more precise reporting; investigate mechanisms of engagement; measure and report the harms of engagement; include clinicians and the wider workforce as users).
Taken together, we can conclude from these studies that we are a long way from having clarity on how to assess digital engagement.

Strengths and limitations

While every paper had a consistent interest in engagement, none of them were able to provide a definitive answer about how to define or measure it. Eisner et al (2025) jump straight into looking at the barriers and facilitators to engagement without providing any definition. Liu et al (2026) and Zainal et al (2025) both consider uptake and adherence, but Liu et al (2026) suggest that attrition is commonly used as a proxy for engagement, whereas Zainal et al (2025) refer to usage. Smith et al (2025) extend the concerns about engagement definitions to include the assessment of an appropriate 'dose', recognising that ultra-short interventions may be appropriate and introducing the need to design for disengagement once people have achieved their goals.

Only one of the papers considered the potential harms of DMHI

Eisner et al (2025) reported that it was difficult to draw conclusions regarding the harms because of poor or missing information in the studies. The increasing awareness of the harms of social media should perhaps encourage researchers to ensure that any focus on engagement is accompanied by evidence of current and potential harms.

These four papers fundamentally reveal the problem of a lack of a standard definition or metric for engagement

Taken together, they build the case for an urgent focus on this topic to maximise the benefits of future interventions. Without agreement on what engagement is, we cannot compare different studies, or compare research trials with real-world use, or, I would argue, compare digital interventions with interventions that rely solely on humans.

It is this human element that perhaps needs closer attention when developers and researchers are pushing a digital model. Every one of these four papers includes a suggestion that human contact with participants supports engagement. But the researchers look to understand the digital element without questioning what is required from the human to make this difference. We might suspect that the digital element, with its background of available data, makes for an easier, replicable, transferable, fundable research focus.

Each study spoke about the importance of human support for engagement.

Lived experience involvement

Only one of these studies mentioned lived experience as a form of knowledge that might add to their understanding. None of the studies included an independent lived experience commentary. None of them included a reflective paragraph about their own perspectives and biases. Smith et al (2025) included a 'reflexivity statement', but it was limited to recognition that the participants had a range of backgrounds and that the meeting was supported by a pharma company. There was no reflection in any of the four papers about how different knowledges might impact on the results.

Smith et al (2025) did describe the inclusion of a person with lived experience and, perhaps coincidentally, theirs was the only paper to make recommendations about user involvement and user-centred design. However, these recommendations frustratingly included clinicians and the wider workforce as users, indicating that their understanding of user involvement was perhaps not as focussed on people with lived experience as we might assume.

Implications for practice

While a DMHI might be recommended to us, the digital equivalent of the patient information leaflet, telling us how much to take and for how long, seems to be missing. How do we know how to use any DMHI effectively so that it works for us? Researchers and developers agree that engagement is an important factor, but cannot tell us the impact of engagement (Elkes et al, 2024). If they have access to the data behind the systems to see the engagement metrics, why is it taking so long to agree what these mean? It makes me suspicious that the numbers aren't adding up somehow.

It feels worrying that engagement remains ill-defined across multiple fields (Bijkerk et al, 2023; Nahum-Shani et al, 2022), yet the pace of development of DMHI, which argues for the importance of engagement, continues to accelerate without this fundamental understanding. Digital products rapidly become outdated, and research results might not be transferable across them. Researchers are considering engagement in one product, without reporting on harms, while others are rushing to innovate.

The speed and pressure of digital innovations contrasts with the slow pace required for true co-production and participation. It would also be revealing to look at resources allocated for digital innovation and witness how little is invested in hearing from and working with the people who might look to these products for support. I have commented previously, in relation to digital peer support, that ‘perhaps we need to “ask what kind of future we want to create, together” (Bender & Hanna, 2025, p. 196)’. How are people with lived/living experience contributing to the development of DMHI? How do we encourage studies to report on involvement as well as harms?

The common factor in these four reports was the recognition that support from a human increased engagement. If someone we trust encourages us to see the value of using something, we are more likely to use it. That makes sense: it’s the basics of advertising and marketing. If a trusted person tells me they’ve found an app useful, I’m more likely to give it a try. And if they stay in touch, ask how it’s going, and suggest I try a specific feature, of course I’m likely to stay more engaged. Is this what creates the higher engagement in trials, with the researcher taking on that role?

I have yet to see studies that report a direct comparison between use of a digital intervention, including a DMHI supported by humans, and the exact same 'dose' of human support on its own. Or what if, instead of encouraging me to use the app, the person was allowed to just ask how I'm doing? We might have a different conversation and find I'd prefer something else, maybe a walk in the park or a conversation with a human therapist. And in the longer term that might improve my mental health.

We started by asking what we can expect from a digital mental health intervention. These four reviews suggest that researchers are still working that one out too. Until there is agreement on what engagement means, how to measure it, and whether improving it actually enhances outcomes, we are being pointed towards something whose benefits remain genuinely unclear. That is not an argument against digital mental health, but it is an argument for taking the hard questions seriously before the next wave of tech arrives.

We seem to be missing the digital equivalent of the patient information leaflet that tells us how much to take and for how long.

Statement of interests

Karen Machin is a co-director of both With-you Consultancy Ltd, which provides training and consultancy related to peer support, and the Survivor Researcher Network CIC, which provides support for researchers working from a lived experience perspective. She also works freelance for various universities and organisations. Her PhD thesis is titled 'Navigating the digital world: a grounded theory study of the use of digital technologies by peer supporters'. The views expressed in this blog are personal and do not represent the views of any organisation she is linked with.

Karen did not use AI in the blog writing process.

Edited by

Simon Bradstreet.

Links

Primary papers

Eisner, E., Faulkner, S., Allan, S., Ball, H., Di Basilio, D., Nicholas, J., Priyam, A., Wilson, P., Zhang, X., & Bucci, S. (2025). Barriers and Facilitators of User Engagement With Digital Mental Health Interventions for People With Psychosis or Bipolar Disorder: Systematic Review and Best-Fit Framework Synthesis. JMIR Mental Health, 12, e65246.

Liu, C., Torous, J., Fuller-Tyszkiewicz, M., Messer, M., Anderson, C., Soliman, O. M., & Linardon, J. (2026). Uptake, Adherence, and Attrition in Clinical Trials of Depression and Anxiety Apps: A Systematic Review and Meta-Analysis. JAMA Psychiatry, 83(1), 43.

Smith, K. A., Ward, T., Lambe, S., Ostinelli, E. G., Blease, C., Gant, T., Gold, S. M., Holmes, E. A., Paccoud, I., Vinnikova, A., Klucken, J., Uhlhaas, P. J., Sanchez, C. G., Haining, K., Böge, K., Lahutina, S., Tomelleri, L., Ryan, S., Torous, J., & Cipriani, A. (2025). Engagement and attrition in digital mental health: Current challenges and potential solutions. npj Digital Medicine, 8(1), 398.

Zainal, N. H., Wang, V., Garthwaite, B., & Curtiss, J. E. (2025). What factors are related to engagement with digital mental health interventions (DMHIs)? A meta-analysis of 117 trials. Health Psychology Review, 1–21.

Other references

Baumel, A., Muench, F., Edan, S., & Kane, J. M. (2019). Objective User Engagement With Mental Health Apps: Systematic Search and Panel-Based Usage Analysis. Journal of Medical Internet Research, 21(9), e14567.

Bender, E.M. & Hanna, A. (2025) The AI Con. Penguin Random House

Bijkerk, L. E., Oenema, A., Geschwind, N., & Spigt, M. (2023). Measuring Engagement with Mental Health and Behavior Change Interventions: An Integrative Review of Methods and Instruments. International Journal of Behavioral Medicine, 30(2), 155–166.

Damschroder, L. J., Reardon, C. M., Widerquist, M. A. O., & Lowery, J. (2022). The updated Consolidated Framework for Implementation Research based on user feedback. Implementation Science, 17(1), 75.

Elkes, J., Cro, S., Batchelor, R., O’Connor, S., Yu, L.-M., Bell, L., Harris, V., Sin, J., & Cornelius, V. (2024). User engagement in clinical trials of digital mental health interventions: A systematic review. BMC Medical Research Methodology, 24(1), 184.

Nahum-Shani, I., Shaw, S. D., Carpenter, S. M., Murphy, S. A., & Yoon, C. (2022). Engagement in Digital Interventions. The American Psychologist, 77(7), 836–852.
