The rise of generative Artificial Intelligence (AI) in the last few years has been sudden, with many industries quickly adopting and integrating AI into their workflows. The field of clinical psychology is not immune to this. Many AI products and services are now being offered to psychologists, including programs that write treatment plans, programs that write assessment reports based on psychometric/psychodiagnostic testing data, and even programs that generate case notes by listening in on therapy sessions. While some therapists are enthusiastically welcoming these services as a way to ease the burden of paperwork, we at Maryland OCD & Anxiety Therapy (M.O.A.T.) have significant ethical and clinical concerns about using AI to write, or assist in writing, clinical documentation.

M.O.A.T. does not use AI at any stage of writing clinical documentation.
This is due in part to concerns about how AI may compromise client privacy and the accuracy of documentation. It is also due to an understanding that the act of writing documentation plays an important role in providing quality treatment – and that relying on AI to do this task instead would mean losing much of that value.
While the services marketing AI-generated case notes state that they are HIPAA-compliant, we still have concerns about any third party gaining access to the content of client sessions. HIPAA was written before the advent of large language models, and the law could not have foreseen the protections client data would need when run through such a system. Many AI models are trained on the data users submit to them, and we are not willing to allow the content of our clients' sessions to be used in this way, especially without a thorough explanation of exactly how it is done. Nor are we willing to add an unnecessary potential point of failure when it comes to data security and data leaks.
We are also concerned about the accuracy of documentation written by AI. Generative AI services have a well-documented tendency toward inaccuracy, including completely fabricating content (called “hallucinations” within the world of AI). While every AI documentation service that we are aware of states that it is the clinician's ultimate responsibility to check the output for accuracy, we are not willing to risk missing such an inaccuracy and adding it to a client's file. Even setting errors aside, allowing AI to generate case notes produces a document in which decisions about what to include and what to exclude were made by something other than the clinician. The result is a summary of events detached from the important context of the clinician's therapeutic intent behind each intervention.
Writing clinical documentation is not just busywork. While many clinicians dislike writing case notes (including the author of this blog post), doing so is an important part of the therapy process. When writing a case note, we reflect on the events of the session (thereby making it more memorable), we refine our conceptualization of the client's symptoms and how to help, and we adapt our plan for the course of treatment. Handing this task to AI means the clinician spends less time thinking about the client, the session, and their own treatment decisions. We are not willing to sacrifice this important service to our clients.
Artificial Intelligence may eventually prove to have some applications that provide clinical benefit, but we believe that its ability to provide a shortcut in clinical documentation is not one of them. When faced with a new technology that may improve the quality of treatment, it is our pledge to evaluate the evidence, discuss the ethics, and ultimately make a decision based on what is best for our clients – not for our convenience or profits.

M.O.A.T. is dedicated to providing a high standard of specialist therapy services, and part of that is ensuring that our clinical documentation is 100% human-generated.