
Using multi-agentic AI to boost the utility and safety of AI-based therapy and mental healthcare.
In today’s column, I examine the use of multi-agentic AI to provide mental health advice.
The idea is that we can smartly lean into the trending use of state-of-the-art agentic AI that’s based on multiple generative AI and large language models (LLMs). It is readily feasible to employ multiple AI agents in an orchestrated manner to more safely aid people seeking therapy via AI. This can be accomplished via engaging a primary agentic AI as your therapist and using additional AI agents as supervisorial safeguards and associated therapeutic capacities.
An intriguing added twist is to have the agentic AI make use of Socratic dialogue. I’ve previously discussed that you can use generative AI such as ChatGPT, Claude, Grok, Llama, Gemini, etc., to engage in Socratic dialogues and potentially boost your mental acuity overall (see my coverage at the link here). In the case of mental healthcare, human therapists at times use Socratic techniques, which can be equally leveraged via human-to-AI therapeutic dialogues.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
Stanford CREATE Gets Underway
The topic of agentic AI for mental healthcare was addressed in a webinar on November 5, 2025, by the recently launched CREATE center at Stanford University.
CREATE is the Center for Responsible and Effective AI Technology Enhancement of PTSD Treatments. The group is funded by the National Institute of Mental Health (NIMH/NIH) and is a multi-disciplinary ALACRITY center that develops and evaluates LLM-based tools to support evidence-based mental health treatment implementation and quality.
The recently launched CREATE is co-directed by Stanford’s esteemed Dr. Shannon Wiltsey-Stirman, a professor in the Stanford School of Medicine’s Department of Psychiatry and Behavioral Sciences, and enterprising Dr. Johannes Eichstaedt, a Stanford faculty fellow at the Institute for Human-Centered AI (HAI) and assistant professor (research) of psychology in the School of Humanities and Sciences.
For those of you who might be interested in the exciting and innovative research underway at CREATE, you can visit their website at the link here. Handily, there are ongoing webinars featuring renowned experts who showcase their notable efforts to build, evaluate, and implement effective, ethical LLM-based tools to improve mental health treatment.
Agentic AI For Mental Health Therapy
Readers know that I’ve been emphasizing the immense value of agentic AI for use as a therapeutic tool, see for example my coverage at the link here and the link here. The beauty of agentic AI is that you essentially have various AI-based “agents” that each perform some preferred or designated role. You probably already are generally familiar with agentic AI that might work to book your vacation logistics, such as an AI agent that specializes in finding flights, another that gets you a hotel stay, and so on.
Let’s back up slightly and see first where we’ve been in terms of non-agentic AI and then look ahead to where things are going with robust agentic AI.
In the context of therapy, the usual way to make use of AI is by having a singular AI that acts as a therapist. A person logs into the AI and engages in a dialogue led by the therapist-oriented LLM. The use of generic LLMs such as ChatGPT and others is a common example of using singular AI for mental health advice. Indeed, the most popular use of the major LLMs is for getting mental health guidance, see my reporting at the link here.
There are worries that AI might go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Huge banner headlines in August of this year accompanied a lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement. Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm.
For the details of the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards. Lawsuits aplenty are arising. In addition, new laws about AI in mental healthcare are being enacted (see, for example, my explanation of the Illinois law, at the link here, the Nevada law at the link here, and the Utah law at the link here).
AI Agents For Better Safety
One means of devising AI safeguards in the mental health context involves leaning into the use of agentic AI.
Here’s how this might be undertaken. We could assign an AI agent to be a therapist. This is akin to the usual singular AI approach to providing mental health advice. Remember, though, that the AI could go astray and offer sour or dour advice.
To contend with this possibility of going beyond suitable bounds, we will involve an additional AI agent. The added AI agent can keep tabs on the first AI agent that is proffering therapeutic advice to the user at hand. Hopefully, this added layer of safety will detect when the primary AI agent is going overboard. Our second AI agent will quickly step in and prevent a disconcerting slide into adverse mental health guidance coming from the primary AI agent.
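To make this arrangement concrete, here is a minimal sketch in Python of what such a two-agent safeguard loop could look like. It assumes a generic call_llm() helper that wraps whatever model backend is in use; the function names, prompts, and fallback wording are illustrative placeholders rather than any vendor's actual API.

```python
# Hypothetical sketch of a two-agent safeguard loop: a therapist agent drafts a
# reply and a supervisor agent reviews it before anything reaches the user.
# The call_llm() helper and the prompts are illustrative placeholders.

def call_llm(system_prompt: str, message: str) -> str:
    """Placeholder for a call to whatever LLM backend is in use."""
    raise NotImplementedError("Wire this to your model provider of choice.")

THERAPIST_PROMPT = "You are a supportive therapist. Respond with care and caution."
SUPERVISOR_PROMPT = (
    "You review a draft therapy reply. Answer APPROVE if it is safe and "
    "appropriate, otherwise answer REVISE followed by specific concerns."
)

def respond_to_user(user_message: str, max_revisions: int = 2) -> str:
    draft = call_llm(THERAPIST_PROMPT, user_message)
    for _ in range(max_revisions):
        verdict = call_llm(SUPERVISOR_PROMPT, f"User: {user_message}\nDraft: {draft}")
        if verdict.strip().upper().startswith("APPROVE"):
            return draft
        # Feed the supervisor's concerns back to the therapist agent for a redo.
        draft = call_llm(
            THERAPIST_PROMPT,
            f"{user_message}\n\nA supervisor flagged your draft: {verdict}\nRevise it.",
        )
    # If revisions are exhausted, fall back to a conservative safety message.
    return ("I want to be careful here. Let's slow down, and please consider "
            "reaching out to a human professional.")
```

The key design point is that the supervisor sits between the therapist agent and the user, so a flagged draft never reaches the person seeking help.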
You could cheekily suggest that this abides by the adage that two heads are better than one (a smarmy remark, since two AI agents aren't the same as having two human therapists, but you get the gist).
Multiple AI Agents Offer Even More
If two AI agents can boost AI safety, the naturally curious question is whether we might go further and include additional AI agents.
Yes, to some degree, adding more AI agents can be beneficial, but there is a limit. Imagine that we decide to employ ten AI agents as jointly conferring therapists for AI-provided mental health advice. All ten will be simultaneously aiding a particular user who is seeking AI-based therapy.
Problems can readily arise.
The multitude of AI agents can get bogged down in bickering over what is safe versus unsafe advice. They can get stuck in a loop as they crisscross a variety of therapeutic philosophies and alternative viewpoints. It can be a cacophony that inadvertently undercuts the therapy rather than bolstering the therapy. For numerous gotchas and difficulties in multi-agentic AI implementations, see my discussion at the link here.
A rule of thumb is that adding AI agents can be advantageous, but it is not axiomatically the case that more is merrier. Judicious consideration must be given to what each AI agent will undertake, along with how the AI agents will coordinate among themselves. An AI developer needs to mindfully select the right number of AI agents accordingly.
Notable AI Research Presented
During the webinar entitled “Lessons Learned from Building AI Tools for Delivering Cognitive Therapy Skills: From Multi-Agent Systems to Automated Safety Testing and Reporting” that occurred on November 5, 2025, the speaker, Dr. Philip Held, described a fascinating research study on the use of multi-agent AI for cognitive behavioral therapy. Dr. Held is an associate professor in the Department of Psychiatry and Behavioral Sciences at Rush University and serves as a Project Lead at the CREATE ALACRITY Center.
A video recording of the webinar is available at the link here.
The research described is partially based on a published paper, “A Novel Cognitive Behavioral Therapy–Based Generative AI Tool (Socrates 2.0) to Facilitate Socratic Dialogue: Protocol for a Mixed Methods Feasibility Study” by Philip Held, Sarah Pridgen, Yaozhong Chen, Zuhaib Akhtar, Darpan Amin, Sean Pohorence, JMIR Research Protocols, October 10, 2024, which includes these salient points (excerpts):
- “Building on recent advances relating to LLMs, our goal was to create and examine the feasibility of a generative artificial intelligence (AI) tool that can complement traditional cognitive behavioral therapies (CBTs) by facilitating a core therapeutic intervention: Socratic dialogue.”
- “The cognitive behavioral therapist’s role using Socratic dialogue is to help the patient evaluate the specific belief in the context of the situation to which the belief refers, explore the factual support for the specific belief given relevant circumstances, and explore more realistic and helpful alternative beliefs.”
- “We detail the development of Socrates 2.0, which was designed to engage users in Socratic dialogue surrounding unrealistic or unhelpful beliefs, a core technique in cognitive behavioral therapies.”
- “The multiagent LLM-based tool features an artificial intelligence (AI) therapist, Socrates, which receives automated feedback from an AI supervisor and an AI rater. The combination of multiple agents appeared to help address common LLM issues such as looping, and it improved the overall dialogue experience.”
- “Initial user feedback from individuals with lived experiences of mental health problems as well as cognitive behavioral therapists has been positive.”
Socrates 2.0 has been devised via a series of iterative research efforts, with each round yielding valuable refinements. Screen snapshots were shown to vividly illustrate how the multi-agent AI interacts with users. A crucial allied aspect is that Socrates 2.0 has been built to ensure the data privacy of users and complies with HIPAA.
In addition to Socrates 2.0, the webinar covered ASTRA, an automated safety testing and reporting application that compares human-coded therapy transcripts with LLM outputs. I’ll likely be covering ASTRA in a future column posting, so be on the lookout for that coverage.
Three Coordinated AI Agents
The design of the AI-based therapy currently consists of three coordinated AI agents (a brief illustrative code sketch follows the list):
- (1) Therapist AI agent: Performs the therapy and interacts directly with the user.
- (2) Supervisor AI agent: Watches over the therapist AI agent in real-time and provides recommendations as needed to the therapist AI agent.
- (3) Assessor AI agent: Assesses the emerging dialogue between the therapist AI agent and the user, rates how the therapy seems to be progressing, and feeds this assessment back to the therapist AI agent.
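For readers who like to see the plumbing, here is a rough Python sketch, under stated assumptions, of how the three roles could be wired together. The TherapySession class, the call_llm() helper, and the prompts are invented for illustration and are not drawn from the actual Socrates 2.0 implementation.

```python
# Illustrative three-agent arrangement: only the therapist agent talks to the
# user, while the supervisor and assessor feed guidance back to it each turn.

from dataclasses import dataclass, field

def call_llm(role_prompt: str, content: str) -> str:
    """Placeholder for a call to an LLM backend."""
    raise NotImplementedError

@dataclass
class TherapySession:
    transcript: list[str] = field(default_factory=list)
    supervisor_note: str = ""
    assessor_rating: str = ""

    def user_turn(self, user_message: str) -> str:
        self.transcript.append(f"User: {user_message}")
        # Therapist agent: the only agent whose output the user ever sees.
        reply = call_llm(
            "You are a therapist using Socratic dialogue.",
            f"Supervisor note: {self.supervisor_note}\n"
            f"Progress rating: {self.assessor_rating}\n"
            "Transcript:\n" + "\n".join(self.transcript),
        )
        self.transcript.append(f"Therapist: {reply}")
        # Supervisor agent: reviews the latest exchange and advises the therapist.
        self.supervisor_note = call_llm(
            "You supervise a therapist agent. Give brief corrective guidance.",
            "\n".join(self.transcript[-4:]),
        )
        # Assessor agent: rates how the overall dialogue is progressing.
        self.assessor_rating = call_llm(
            "Rate how well this Socratic dialogue is progressing and whether it is stalling.",
            "\n".join(self.transcript),
        )
        return reply
```

Notice that only the therapist agent's reply is returned to the user; the supervisor note and assessor rating circulate strictly behind the scenes.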
The assessor AI agent was added to the original two after the initial setup was put into trial use. Doing so provided a notable means of curtailing the chances of an endless Socratic dialogue. I especially liked that the third agent arose from an interesting discovery made over the course of using the first two.
It goes like this.
You might conceive of the assessor as a kind of third party that rates the therapy on a macroscopic basis. If the therapist AI agent gets bogged down in a nearly endless dialogue, the assessor AI agent nudges it via calculated ratings. Those ratings act as a flag or alert that the therapy is not being carried out as well as it could be.
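One hedged way to operationalize that nudge is to track the assessor's ratings numerically and raise a flag when they plateau. The sketch below does exactly that; the window size and threshold are arbitrary illustrative choices rather than values from the research.

```python
# If numeric progress scores plateau over several turns, flag the dialogue as
# stalling so the therapist agent can change tack or move toward a summary.

def dialogue_is_stalling(progress_scores: list[float],
                         window: int = 4,
                         min_gain: float = 0.05) -> bool:
    """Return True if recent progress ratings show essentially no improvement."""
    if len(progress_scores) < window:
        return False
    recent = progress_scores[-window:]
    return (max(recent) - min(recent)) < min_gain

# Example: ratings hovering around 0.6 for four turns trigger the flag.
print(dialogue_is_stalling([0.2, 0.4, 0.6, 0.61, 0.62, 0.60]))  # True
```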
More On AI Agents
There are several insightful lessons to be gleaned here.
First, make sure that each of your AI agents has a defined role in the therapeutic process. The three roles in this instance are clear-cut. I mention this lesson because it might be tempting to simply toss multiple AI agents into a morass that is intended to perform therapy. Poorly defined roles are bound to make for messy and poorly delivered mental healthcare.
Second, decide carefully what a user will see when it comes to your multiple agents and their respective discourses. The idea is that if you allow your AI agents to all blab to the user, the odds are that this is going to be confusing and overwhelming for the person receiving the therapy. As per the approach in this research study, the primary AI agent interacts with the user, and the other two AI agents interact behind the scenes with the primary AI agent.
Third, there is an ongoing debate about whether AI agents should be instantiated from the same generative AI or whether they ought to be derived from different AI models altogether. The argument is that if you use the same AI model to craft each of the AI agents, there is a chance that birds of a feather flock together. The AI agents might all “think alike” and even attempt to boost the AI egos of their fellow AIs.
I explain how this can happen due to mathematical and computational architectural facets at the link here; it must not be conflated with AI having sentience (it currently does not). In any case, some believe it is better to use AI agents instantiated from completely different AI models. For example, you might decide that the therapist AI agent is to be based on GPT-5, the supervisor AI agent on Claude, and the assessor AI agent on Grok.
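As a purely illustrative sketch, a configuration along these lines might map each role to a different provider. The model identifiers and the make_agent() factory below are placeholders, since the actual client libraries and model names would depend on the vendors involved.

```python
# Hypothetical configuration assigning each agent role to a different
# underlying model, as one way to reduce "birds of a feather" agreement.

AGENT_MODELS = {
    "therapist":  {"provider": "openai",    "model": "gpt-5"},   # assumed identifier
    "supervisor": {"provider": "anthropic", "model": "claude"},  # assumed identifier
    "assessor":   {"provider": "xai",       "model": "grok"},    # assumed identifier
}

def make_agent(role: str):
    """Placeholder factory: return a client bound to the role's configured model."""
    config = AGENT_MODELS[role]
    # Substitute the real client library for the chosen provider here.
    raise NotImplementedError(
        f"Instantiate a {config['provider']} client for {config['model']}"
    )
```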
This approach raises its own consistency and coordination complications and is not necessarily a slam dunk.
The New Wave Is Here
Agentic AI is the latest wave in the AI field. Many stridently believe it will carry us toward stellar achievements and breakthroughs that would be unlikely in a singular, non-agentic AI world.
Leveraging multi-agentic AI as an AI-based therapist, together with associated therapeutic capabilities, offers worthwhile upside possibilities and opportunities. Society is pressing mightily for AI safeguards when AI is engaged in offering mental health guidance. The prudent use of multiple AI agents can be a big step in that direction.
AI developers must keep their eyes open and realize that wantonly smushing together a bunch of AI agents is not a sensible pathway. Whatever benefits might accrue from the multi-agent approach can easily be undermined by having too many cooks in the kitchen. Carefully and mindfully determine what the AI agents will do.
As per the famous words of Henry Ford: “Coming together is a beginning; keeping together is progress; working together is success.” That’s great advice when it comes to using AI agents in the sacred endeavor of providing mental health counseling.