Everywhere you turn, everyone seems to be talking about artificial intelligence. Peers say they're saving hours on admin tasks. Clients say they're consulting ChatGPT between therapy sessions. And you're left wondering: Am I missing out?
The Age of AI is here, and along with all the attention it's getting, you may feel some pressure to adopt these tools or wonder if you're falling behind.
You're curious but cautious. Can AI actually help you shoulder some of the workload of running a small therapy practice? Does HIPAA allow its use, and what are the best practices? How can you responsibly and ethically use AI in your work? Most importantly: Do you want to use it?
Here, we lay out what you need to know about how AI works, what your options are, and how to deploy the technology responsibly, should you decide to use it.
TL;DR: HIPAA does not prohibit therapists from using AI tools. However, if an AI tool handles client information:
- The vendor must sign a Business Associate Agreement (BAA).
- You must include AI in your HIPAA risk analysis.
- You should limit the information you share with the tool.
- You must carefully review any AI-generated documentation.
AI can support your practice, but you remain responsible for protecting client privacy.
Before we get into how AI is used in small practices, let's clarify what we mean by AI.
"Artificial intelligence" refers to a wide variety of computer systems designed to imitate human behavior.
The AI you hear about most these days, and what you likely picture when you hear the term, is the Large Language Model, or LLM. ChatGPT, Claude, Gemini, and Microsoft Copilot are all built on LLMs. These models are trained on vast amounts of text to learn the patterns and rules of human language, and they generate output, such as writing, based on those patterns.
Despite "intelligence" being in the name, AI tools do not "think." And they don't "understand" nuance or risk exposure the same way we do. (Though they can be trained to mimic it.)
Even though AI-generated content is often convincing, the software can make errors, introducing inaccuracies or biased phrasing. Oversight and safeguards can mitigate these risks.
AI is already finding its way into therapy practices, from drafting session notes and other documentation to lightening the administrative workload.
Many therapists are wondering: Is ChatGPT HIPAA compliant? Can you use AI tools in your practice at all?
HIPAA doesn't address AI specifically or ban its use. However, several parts of the law affect how you can use these tools safely and responsibly.
If an AI tool handles client information, the vendor must sign a Business Associate Agreement (BAA). That's only one of several key requirements: you also need to include AI in your HIPAA risk analysis, limit the information you share with the tool, and carefully review anything it generates.
Even if AI is doing some of the work, practitioners are ultimately responsible for the care their clients receive. AI is notorious for "hallucinations," that is, producing false, inaccurate, or even harmful content. If that happens, you, not the technology or the company that made it, are the one on the hook. To protect your clients and yourself, it's important to oversee AI processes and carefully review any documents, data, or files it generates.
Beyond legal responsibilities and HIPAA compliance, there are also a few ethical concerns about using AI in private practice.
Bias. AI can reproduce and amplify bias from the data it's trained on. For example, AI could underestimate risk for groups that were missing from the training data.
Industry practices. You may have concerns about how AI companies operate and prefer to avoid the technology on principle. Those concerns can also influence whether a particular tool aligns with your professional values and your clients' expectations.
"When working with AI, be sparing and vague with anything about reproductive care, immigration status, and other highly sensitive info. Use the minimum level of detail, as there is no such thing as a fully deidentified transcript. Take the time to thoroughly review the generated notes and make sure they contain only what they should and eliminate what they shouldn't. AI cannot evaluate for and mitigate contextual risk the way you can and must."
Liath Dalton, Director, Person Centered Tech
Learn more about ethical AI use in clinical practice
Curious about putting these principles into action? PCT's course, Beyond Hype and Anxiety: A Practical Framework for Ethical AI Use in Clinical Practice, provides practical guidance for therapists on safely and ethically using AI.
Not all AI tools are suitable for use by therapists. Avoid free, general-purpose software, which generally does not provide the privacy and security required to safeguard protected health information (PHI). To help you decide whether a specific tool will work for your practice, follow this checklist:
| AI Tech Checklist |
|---|
| ☐ The software is designed specifically for use in a mental health setting |
| ☐ The vendor supports HIPAA compliance and can explain how |
| ☐ The vendor clearly explains how client data is used within the platform |
| ☐ The vendor clearly states whether your data is used to train AI models |
| ☐ Your client data isn't used to train the AI model |
| ☐ The AI company doesn't sell or share client data with third parties |
| ☐ The vendor signs a Business Associate Agreement |
"If you're going to use AI for clinical documentation, it should not just be HIPAA compliance compatible, but one that's specifically designed for mental health application."
Liath Dalton, Director, Person Centered Tech
Should you decide to use AI tools in your practice, here's how to do it responsibly without risking legal exposure, client privacy, or professional integrity.
Conducting and documenting a security risk assessment is an essential part of running a HIPAA-compliant practice. AI tools should be part of that analysis because, like other digital files and platforms, they can expose electronic PHI. Possible hazards include data breaches, insecure platforms, or the sale of client data.
Once you've done a risk analysis, you'll know where potential hazards lie and be able to put safeguards in place. For example, make it a policy to carefully review all AI-generated notes, documents, and data; clinical judgment remains essential. Also avoid over-documenting high-risk details, such as immigration status or reproductive health information, unless clinically necessary.
Even if you feel comfortable using AI in your therapy practice, your clients might not. Disclose your use of AI to them (even if they won't directly interact with it) and obtain their written consent via a secure form. It's a good idea to have alternative options when possible, in case they want to opt out.
"Therapists have both a responsibility of beneficence and to safeguard the well-being of their clients. If you're going to use AI tools, you must be selective, intentional, and transparent."
Liath Dalton, Director, Person Centered Tech
It looks like AI is here to stay, but when and how therapists use it is still evolving.
The emerging technology can complement clinical care, but it lacks human judgment and the wisdom you've gained from years of experience. As the American Psychiatric Association writes: "AI should function in an augmentative role to treatment and should not replace clinicians." Technical tools should not substitute for your own assessments and recommendations. You are ultimately responsible for the care you provide, even when you use AI.
If you decide to use AI in your therapy practice, follow the guidelines above to minimize risks and give yourself and your clients peace of mind.
Reviewed by: Liath Dalton, Director of Person Centered Tech, and Steven O. Youngman, VP of Legal and Compliance, Hushmail.