For more than forty years, Dayna Guido has sat across from clinicians in supervision, helping them navigate the gray areas of mental health practice: What do you do when a client discloses something outside the session? How do you manage the competing needs of confidentiality and safety? How do you know when your own reactions are clouding your judgment?
Now, she says, a new layer has complicated every one of those questions: artificial intelligence (AI).
“Supervision is where ethics becomes real,” Guido explains. “It’s the space where clinicians learn how to apply abstract codes to living situations. With AI, those situations have multiplied in ways we never anticipated.”
A New Kind of Ethical Dilemma
Guido is quick to point out that AI itself is not unethical. The dilemmas emerge when clinicians use it without awareness. A young practitioner might ask a chatbot for diagnostic clarity or rely on an app to summarize therapy notes. But what happens if the information generated is inaccurate, incomplete, or stored insecurely? What responsibility does the clinician (and by extension, the supervisor) have for correcting, contextualizing, or even forbidding that reliance?
“These aren’t just technical questions,” Guido says. “They’re ethical questions. If a clinician types client information into a program, they’ve already made a choice about privacy. If they accept a diagnosis without critical evaluation, they’ve already made a choice about clinical responsibility. My role as a supervisor is to make those choices visible.”
Training for Discernment, Not Dependence
Guido worries that the convenience of AI can short-circuit the learning process for early-career professionals. Supervision, at its best, cultivates discernment: the ability to sit with uncertainty, ask deeper questions, and arrive at ethical clarity through reflection. When AI provides immediate answers, that process risks being skipped.
“The more we lean on AI to decide for us, the less we develop our own ethical muscles,” she says. “Supervision must resist that drift. It’s not about banning the technology. It’s about ensuring that clinicians don’t outsource the very judgment they’re supposed to be cultivating.”
To that end, Guido often brings AI directly into supervision sessions. She invites supervisees to share what they asked, what responses they received, and what they might have overlooked. Together, they dissect the gaps and biases in the machine’s output. “It’s not about shaming,” she notes. “It’s about showing how tools can be useful but never sufficient.”
Consent and Transparency
Another area Guido emphasizes in supervision is informed consent. Just as clinicians had to update their policies during the pivot to telehealth, they now need to establish clear agreements with clients about AI use. “If you’re using AI to draft notes, to support interventions, or to manage records, your clients deserve to know,” she insists. “Consent is not a formality; it’s an ethical practice of transparency.”
In supervision, this translates to practical training. Guido coaches clinicians on how to draft policies that are HIPAA-compliant, how to explain AI use in plain language, and how to ensure clients truly understand what they’re agreeing to. “It’s not enough to bury it in a packet of intake forms,” she says. “Consent in therapy must be relational, not perfunctory.”
The Supervisor’s Expanding Role
The introduction of AI has expanded the scope of what supervision must cover. Supervisors can no longer limit themselves to traditional areas like countertransference, boundaries, or cultural humility. They must now also ask: Which digital tools are you using? Are they secure? Are they distorting your clinical judgment?
Guido describes this as both a challenge and an opportunity. “Supervision has always been about staying attuned to the realities of practice,” she says. “AI is simply the newest reality. But it forces supervisors to expand their own competence, to be willing to admit what they don’t know, and to learn alongside their supervisees.”
This humility is crucial, she adds, because many supervisors came of age in an era before digital tools were omnipresent. Younger clinicians may be more comfortable experimenting with technology, while senior supervisors may feel uncertain about it. Bridging that generational divide requires openness, dialogue, and a willingness to hold ethical responsibility above personal discomfort.
Guarding Against Complacency
Guido often frames AI as a test of professional vigilance. It is easy, she argues, for clinicians to assume that because AI provides quick answers, those answers are safe or objective. But that assumption can mask real risks: biased algorithms, privacy breaches, or the erosion of critical thinking.
“Supervision is where complacency gets interrupted,” she says. “It’s where someone asks: Did you double-check that? Did you consider what’s missing? Did you tell your client what you were doing? That accountability is what protects both the client and the profession. It’s important to remember that we are entrusted with the hearts of other human beings. AI can help us, but it cannot take that responsibility from us. Supervision is where we remember that.”
To learn more about Dayna Guido’s approach to ethics and supervision amid the rise of AI, visit the official website.