The Role of AI in Mental Health: Between Hype and Evidence
AI is transforming mental health care, but not all of it works. What the research shows about digital tools, ethical risks, and the hybrid model.
Artificial intelligence in mental health is simultaneously one of the most promising and most misunderstood areas of current technology. With a market that reached $1.95 billion in 2024 and projections pointing to nearly $13 billion by 2033, the investment is real. But the fundamental question remains: is this technology actually helping people?
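For context on the pace this implies: growing from $1.95 billion to roughly $13 billion over nine years works out to a compound annual growth rate of about 23%. A quick back-of-the-envelope check, using only the figures cited above (exact projections vary by report):

```python
# Implied compound annual growth rate (CAGR) from the market figures above.
# Inputs are the article's own numbers; the projection itself varies by report.
start, end = 1.95, 13.0   # market size in billions USD (2024 -> 2033)
years = 2033 - 2024       # nine-year horizon
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 23.5% per year
```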
What the Research Shows
The latest scientific evidence, published in journals such as JMIR Mental Health and indexed in PubMed Central, points to a clear conclusion, albeit with important nuances.
Digital mental health tools, when they include some form of human support, demonstrate outcomes comparable to face-to-face therapy for conditions such as depression and anxiety. Multiple meta-analyses of digitally delivered cognitive-behavioural therapy interventions have confirmed this equivalence.
However, the detail many companies prefer to overlook is that purely digital self-help tools, with no human interaction whatsoever, show significantly lower effectiveness. The model that works is the hybrid one: technology for between-session support, human professionals for the therapy itself.
The Ethical Risks Nobody Wants to Discuss
In 2026, researchers at Brown University tested several large language models, including systems from OpenAI, Anthropic, and Meta, configured to act as CBT therapists. The findings are troubling: the team identified 15 distinct categories of ethical violations.
Among the most serious problems are the mishandling of crisis situations, the reinforcement of harmful beliefs, and something the researchers called "deceptive empathy": the AI sounds as though it cares, but it does not actually understand the user's experience.
A parallel study from Stanford University found that AI chatbots display increased stigma towards conditions such as alcohol dependence and schizophrenia, compared to conditions like depression. This bias can have real consequences for people who already face significant barriers to treatment.
The Statistic That Changes Everything: 85%
Despite the risks, there is a number that deserves attention: 85% of mental health chatbot users had never previously spoken to a professional. This statistic reveals that these tools are reaching a population that would otherwise receive no support at all.
Combine this with the fact that between 28% and 75% of young people drop out of therapy prematurely, a problem documented in multiple meta-analyses, and it becomes clear that neither the traditional model alone nor technology in isolation solves the problem. The answer lies in combining both.
The Hybrid Model: What Actually Works
The most recent research converges on a consensus: the future of digital mental health lies in hybrid models. Countries such as Australia, Denmark, Sweden, and Canada have already implemented integrated digital mental health services with promising results.
In practice, this means:
AI handles the administrative and monitoring layer: mood tracking, reminders for therapeutic exercises, and the collection of clinical data between sessions. The human professional retains the central role in the therapeutic relationship, diagnosis, and clinical intervention.
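As a rough illustration of that division of labour, here is a minimal sketch in Python. All names are hypothetical and this is not any particular platform's API; the point is the boundary it draws: the software aggregates and flags, and every flag ends in a human decision.

```python
# Minimal sketch of the hybrid division of labour described above.
# All names are hypothetical; this is illustrative, not a real product's API.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class MoodEntry:
    day: date
    score: int  # patient self-report, e.g. on a 1-10 scale

@dataclass
class PatientRecord:
    name: str
    entries: list[MoodEntry] = field(default_factory=list)

def between_session_summary(record: PatientRecord, window: int = 7) -> dict:
    """The AI layer: summarise self-reports logged between sessions.

    It aggregates and flags; it never diagnoses or intervenes.
    Anything flagged here is routed to the human clinician.
    """
    recent = [e.score for e in record.entries[-window:]]
    avg = sum(recent) / len(recent) if recent else None
    return {
        "patient": record.name,
        "entries_logged": len(recent),
        "weekly_average": avg,
        # A low average prompts clinician review; it is not an AI decision.
        "flag_for_clinician": avg is not None and avg < 4,
    }
```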
At Mena.ai, this is precisely the model we follow. Our platform does not aim to replace the therapist; rather, it gives them tools to be more effective. From AI-assisted session analysis for clinical decision support to continuous patient monitoring through mood tracking and therapeutic tasks, the focus is on enhancing the therapeutic relationship, not replacing it.
What This Means for Patients
For those seeking help with mental health, the message is straightforward: technology is a valuable complement, but not a substitute for a qualified professional.
Mood tracking tools can help identify patterns that might otherwise go unnoticed. Digital CBT exercises can reinforce what is learned in session. Communication apps with the therapist can reduce the sense of isolation between appointments.
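To make the "patterns" point concrete, here is a deliberately simple sketch of the kind of signal a mood tracker can surface. The method (comparing two rolling averages) and the threshold are illustrative assumptions, not how any particular product works:

```python
# Illustrative only: how a mood tracker might surface a pattern a patient
# could miss day to day. Real products use more robust methods; the
# window and threshold here are arbitrary examples.
def declining_trend(scores: list[int], window: int = 7, drop: float = 1.5) -> bool:
    """Flag when the recent rolling average falls well below the prior one."""
    if len(scores) < 2 * window:
        return False  # not enough data to compare two windows
    recent = sum(scores[-window:]) / window
    previous = sum(scores[-2 * window:-window]) / window
    return previous - recent >= drop

# Two weeks of daily self-reports (1-10): a gradual dip, easy to miss
# from one day to the next, but visible in aggregate.
scores = [7, 7, 6, 7, 6, 7, 6, 5, 5, 4, 5, 4, 4, 3]
print(declining_trend(scores))  # True
```

The dip in the example is gradual enough to go unnoticed in the moment, yet obvious in aggregate, which is exactly the kind of pattern these tools exist to surface.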
But when it comes to genuine understanding, navigating complex trauma, or simply having someone who truly listens, nothing replaces a human being on the other side.
What Comes Next
The market will continue to grow. New tools will emerge. The hype will not diminish. But the distinction between responsible and irresponsible companies in this space will become increasingly clear: those who position AI as support for the professional versus those who sell it as a replacement.
With the Portuguese Psychologists' Association, Hospital da Luz Learning Health, and the University of Manchester as partners, Mena.ai's approach is grounded in clinical validation and scientific evidence, because in mental health, rigour is not optional.
Want to see what evidence-based AI in mental health looks like? Mena.ai is a clinical platform built with psychologists, designed to support therapy, not replace it. Explore how it works →
Frequently Asked Questions
Can AI replace a human therapist?
No. Current evidence shows AI works best as a complement to therapy, not a replacement. Multiple meta-analyses confirm that hybrid models — digital tools alongside human professionals — produce outcomes comparable to face-to-face therapy. AI alone, without human support, shows significantly lower effectiveness. The therapeutic relationship, diagnosis, and clinical decision-making remain firmly in the domain of qualified human professionals.
What are the main ethical risks of AI mental health chatbots?
A 2026 Brown University study identified 15 distinct categories of ethical violations in large language models configured as CBT therapists. The most serious include mishandling of crisis situations, reinforcement of harmful beliefs, and "deceptive empathy": sounding caring without genuine understanding. A parallel Stanford study found AI chatbots show increased stigma toward conditions like schizophrenia and alcohol dependence compared to depression, with real consequences for already vulnerable populations.
Is it safe to use AI tools as a complement to my therapy?
Purpose-built clinical AI tools — those designed with therapeutic guardrails and used alongside a human professional — can be useful for mood tracking, between-session exercises, and psychoeducation. General-purpose chatbots prompted to act as therapists are not equivalent and should be avoided for clinical use. Always consult your therapist before introducing any digital tool into your care routine, and choose platforms built specifically for clinical settings.
What does Mena.ai do differently?
Mena.ai is built around a hybrid model: AI handles administrative and monitoring tasks — mood tracking, session analysis, therapeutic reminders — while the human professional retains full clinical responsibility. The platform was co-designed with the Portuguese Psychologists' Association, Hospital da Luz Learning Health, and the University of Manchester, ensuring every feature is grounded in clinical evidence and ethical standards.
References:
- DataM Intelligence (2024). Global AI in Mental Health Market Report.
- Brown University (2026). Ethical risks in AI-powered therapy chatbots.
- Stanford HAI (2026). Bias in AI mental health tools.
- JMIR Mental Health (2025). Effectiveness of digital mental health interventions.
- PMC (2025). Digital interventions in mental health: overview and future perspectives.
- Frontiers in Psychology. Investigation into therapy dropout in adolescents with depression.
Note: This article is for informational purposes only and is not a substitute for professional advice. If you need urgent support, contact the 988 Suicide & Crisis Lifeline (US), Samaritans on 116 123 (UK), or your local emergency services.