Therapy Without Feelings: Mental Healthcare in an AI Era
In 2024, over 60 million people in the U.S. reported experiencing a mental illness in the past year [1], and 25 percent of them reported an unmet need for mental health treatment [2]. Despite the rising number of people seeking mental health treatment, the number of people receiving care has failed to increase because of a shortage of providers and the difficulty patients and providers face operating within restrictive insurance policies [3]. This unmet need has prompted a search for cheaper, more accessible alternatives to therapy, namely artificial intelligence (AI) chatbots. These chatbots raise ethical concerns that require regulation to protect users from the interests of AI companies.
AI Chatbots as Therapists
People are looking to AI as a “companion” or “friend” to receive advice, share their mental health struggles, and seek information about mental health conditions [4]. Some mental health professionals have proposed purpose-built AI chatbots as a promising way to meet the unmet demand for mental health services, citing chatbots’ ability to provide 24/7 personalized care while recalling past conversations with thousands of people [5]. Chatbots can maintain consistent patient care throughout the day and, unlike clinicians, do not experience burnout. However, AI chatbots that are designed for mental health conversations and are developed, tested, and supervised by mental health professionals do not raise the same concerns as popular general-purpose chatbots built to generate a response to any inquiry.
The most commonly used AI chatbots, including those by OpenAI and Character.AI, are generative chatbots operating on large language models (LLMs). “Generative” refers to the ability to create new text or image output in response to an input based on pattern recognition within training data [6]. An LLM is an AI model trained on extensive datasets to interpret and generate human-like responses, allowing for a personalized response to an inquiry [7]. This enables chatbots to respond to users’ messages based on the data used to train them. These chatbots, however, can misinform users given their lack of expertise and their reliance on internet data, a serious issue when dealing with sensitive mental health topics [8]. For example, OpenAI, the developer of ChatGPT, reported that 0.15 percent of its more than 800 million weekly users engaged in conversations that included explicit indicators of potential suicidal planning or intent [9]. This number could rise: 34 percent of U.S. adults reported using ChatGPT in 2025, double the share in 2023 [10]. The rapid adoption of this technology necessitates understanding the potential consequences of AI for the mental health of its users and for the mental healthcare system.
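To make “pattern recognition” concrete, the following deliberately tiny Python sketch illustrates the underlying idea: a model learns which words tend to follow which in its training text and then generates new text by repeatedly choosing a plausible next word. Real LLMs operate on the same predict-the-next-token principle, but with neural networks at a vastly larger scale; nothing below reflects any company’s actual system.

# Toy illustration of "generative" text prediction (a simple bigram model).
# A simplification for explanation only, not how production chatbots are built.
import random
from collections import defaultdict

training_text = (
    "i feel anxious today . i feel better after talking . "
    "talking helps me feel heard ."
)

# Learn which words tend to follow which (the "patterns" in the training data).
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def generate(start_word: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    output = [start_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("i"))  # e.g., "i feel better after talking . talking helps"

The output sounds fluent only because it recombines patterns from the training text, which is also why a chatbot trained on internet data can produce confident-sounding but inaccurate answers.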
Ethical Problems
Practicing mental health professionals are bound by the American Psychological Association (APA) Code of Conduct, which enforces a set of ethical standards necessary to maintain a license [11]. AI chatbots that claim to be therapists or to be capable of meeting mental health needs are not held to these standards. Therefore, users are left unprotected from unchecked AI algorithms.
Privacy/Data Collection
A central concern with AI is privacy. When people share details about the state of their mental health or care with AI, they are not covered by the Health Insurance Portability and Accountability Act (HIPAA) protections that apply to a typical mental healthcare provider. To be bound by HIPAA, an AI company must meet the standards of a covered entity, a business associate, or a subcontractor of a business associate [12]. This typically occurs when a hospital or medical facility wants to use AI for patient files, such as organizing charts, responding to medical questions, or generating discharge notes. Stand-alone AI services, such as a mental health AI app, do not qualify as a “provider” because no transaction occurs with an insurance company, so HIPAA does not apply. Additionally, when a person inputs their own information, such as symptoms, into an AI chatbot, nothing prevents that data from being collected and sold to other companies. In some cases, even those who do not disclose information to AI chatbots may be at risk. For example, if a hospital or physician enters patient information into AI for efficiency purposes without contracting the AI company as a vendor of the hospital, the inputted information is not protected by HIPAA [13]. This can lead to patient re-identification: companies such as Google retain so much personal information that data collected elsewhere can be cross-referenced to determine whom the records belong to [14]. Collecting this data not only violates the privacy many people believe they have; there is also no limit on how the data may resurface in the future.
It is largely unknown how data collected by AI will be used, from training other AI chatbots to appearing in responses to other people’s questions. AI companies are required to store chat history, even when the user deletes it, for cases of legal necessity [15]. Unlike a therapist’s notes, which require a lengthy process to override therapist-client privilege, message records with AI chatbots can be collected by law enforcement alongside cellphone records and search histories [16]. Because most AI chatbots operate through messages, users are also at risk of having their information exposed through company data breaches or compromised passwords [17]. Unlike sessions with a mental health provider, engaging with an AI chatbot as a therapist produces a written record of every thought the user expresses. This could have unforeseen ramifications when AI companies collect and sell the data.
Emotional Exploitation
Under Standards 3.04 and 3.08 of the APA Code of Conduct, mental health professionals cannot use their authority or evaluations to exploit their clients and should actively identify exploitative relationships [18]. AI chatbots defy these standards by using phrases such as “I see you” or “I understand” to convey a sense of empathy they are incapable of feeling [19]. This deceptive empathy leads users to believe they are building a bond that is actually one-sided. Users’ perception of AI chatbots as experts can also lead them to trust responses even when those responses are nonsensical. This is especially true for vulnerable groups, such as adolescents and people with pre-existing mental health conditions [20]. For example, a 16-year-old boy named Adam died by suicide after disclosing his suicidal intentions to ChatGPT, which failed to discourage him and even offered to write his suicide note [21]. If AI companies do not build their algorithms to prevent their chatbots from improperly engaging with mental health scenarios they are not qualified to handle, the increased reliance on AI chatbots will only worsen mental health issues.
User Dependence
The APA General Principles emphasize beneficence and nonmaleficence, otherwise known as the “do no harm” principle, as a moral foundation for mental health professionals [22]. AI chatbots, however, act according to their algorithms, which are often built to prioritize user engagement over user mental health [23]. This becomes worrisome as users are misled into believing they are receiving help when in reality they are being told what they want to hear. Growing dependence on AI chatbots for support can feed delusional thinking, a phenomenon dubbed “AI psychosis,” and can even lead people to replace their mental healthcare providers with chatbots [24, 25]. One in three teens (33 percent) reported they would rather discuss something serious or important with an AI companion than with a person [26]. User well-being must be centered in the development of chatbot algorithms to ensure AI chatbots work alongside mental health professionals rather than in competition with them.
Algorithms in Practice
While AI chatbots may be promoted as friendly companions or even proclaim themselves mental health experts, these programs are run by companies that seek to maximize engagement and data collection. In 2023, the global AI mental health market was valued at over $921 million and is expected to grow at a 30.8 percent compound annual growth rate (CAGR) through 2032 [27]. This profit potential incentivizes AI companies to keep users engaged for as long as possible [28], an incentive built into how their algorithms are designed and trained.
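For a sense of scale, applying the standard compound-growth formula to those figures (a back-of-the-envelope projection, not a number taken from the cited report) implies a market roughly ten times larger by 2032:

V_{2032} \approx V_{2023} \times (1 + r)^{9} = \$921\ \text{million} \times (1.308)^{9} \approx \$10\ \text{billion}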
AI models are trained to replicate responses that users perceive positively. This results in chatbots offering reassurance or validation to keep users coming back, rather than providing honest feedback [29]. Chatbots’ tendency to agree with users, learned because agreeableness is perceived as desirable, is referred to as sycophancy bias. Sycophancy bias arises from reinforcement learning from human feedback (RLHF), in which humans score chatbot responses to teach the model which responses people receive more positively [30]. The problem lies in humans’ unconscious preference for agreeable, validating responses, which reinforces chatbots’ prioritization of those responses. That prioritization leads chatbots to find ways to maximize rewarded behavior, such as agreeableness, at the expense of other qualities, such as truthfulness, a phenomenon known as reward hacking [31]. The nature of these algorithms underscores how biases in chatbots can persist even when unintended and therefore require close monitoring. Deploying AI chatbots for mental health needs is also risky because constant agreeableness could do more harm than good for people with conditions such as anxiety, Obsessive-Compulsive Disorder (OCD), and schizophrenia [32]. To keep conversations agreeable, tested AI chatbots have provided inaccurate information, admitted to mistakes they did not make, modified responses to agree with the user, and failed to correct users they knew were incorrect [33]. AI companies have a responsibility to ensure their algorithms do not contribute to the spread of misinformation or the delusion of their users for the sake of engagement. The profits that come from data collection further motivate AI companies to prioritize engagement.
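The incentive structure behind sycophancy can be shown with a deliberately simplified Python sketch. The scoring weights and responses below are hypothetical, chosen only to illustrate the dynamic: if human raters unconsciously reward agreement slightly more than accuracy, a model optimizing that reward learns to prefer the agreeable answer even when it is wrong.

# Toy illustration of how a preference-trained reward signal can favor
# sycophancy over truthfulness. Weights and responses are hypothetical,
# chosen only to show the incentive structure, not drawn from any real model.

candidate_responses = {
    "agreeable": {"agrees_with_user": 1.0, "accurate": 0.0},
    "honest":    {"agrees_with_user": 0.0, "accurate": 1.0},
}

# Suppose human raters unconsciously weight agreeableness slightly more
# heavily than accuracy when scoring responses (the bias RLHF then amplifies).
rater_weights = {"agrees_with_user": 0.6, "accurate": 0.4}

def reward(features: dict) -> float:
    """Score a response the way a learned reward model might."""
    return sum(rater_weights[name] * value for name, value in features.items())

# A policy trained to maximize this reward picks the agreeable response even
# though the honest one is more accurate -- a simple form of reward hacking.
best = max(candidate_responses, key=lambda name: reward(candidate_responses[name]))
print(best)  # "agreeable"

The point of the sketch is that nothing in the optimization asks whether the answer is true; truthfulness only matters to the extent raters reward it.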
BetterHelp, a popular platform for finding licensed therapists, is committed to increasing its accessibility via AI. In 2023, however, the company received backlash after sharing user data with Facebook and Snapchat for advertising, despite promising to keep user information private [34]. As long as AI companies can profit from selling their customers’ data, there is no safe way to entrust AI chatbots with sensitive mental health information. Without regulation, AI companies will not create safeguards that reduce the time people spend on their services, and they will continue to put the user data they collect at risk of sale.
Potential Solutions
Currently, there is no federal legislation regulating the use of AI for mental health. I propose three potential solutions for regulating AI companies whose chatbots engage with users’ mental health: (1) requiring accreditation for AI chatbots that claim mental health expertise, (2) bringing AI companies under HIPAA before they may collect and share personal mental health information, and (3) in the absence of the first two solutions, requiring the algorithmic ability to direct users to professional mental health support.
Accreditation of AI Mental Health Services
Unlike licensed mental health providers, AI chatbots are not bound by any ethical or moral guidelines. Character.AI developed an AI chatbot named Therapist, which claimed to be a licensed professional. Despite a disclaimer on the site stating that the information should be treated as fiction, the chatbot conducted over 40.1 million conversations as a therapist [35]. As AI use for mental health purposes increases, requiring accreditation would help users know which platforms have proper training. It would also disincentivize AI companies unwilling to build safeguarded algorithms from marketing their chatbots as mental health professionals. Obtaining accreditation would require oversight by a mental health professional and extensive testing to ensure the AI chatbot can ethically engage in low-risk mental health discussions and safely connect high-risk users to other help. Much as under the APA Code of Conduct, violating ethical standards would result in losing the accreditation and the ability to claim mental health expertise. This system could be managed by the Federal Trade Commission (FTC), which is responsible for enforcing truth-in-advertising laws that protect consumers from fraudulent and deceptive marketing practices. FTC enforcement would require AI companies to build algorithms that address these ethical concerns.
Protection Under HIPAA
Currently, AI chatbots largely fall outside HIPAA because HIPAA only regulates covered entities and business associates, which include health plans, healthcare providers, and healthcare clearinghouses. Because many AI chatbots are free to use, they are not considered healthcare providers and thus do not fall under HIPAA. This applies not only to AI chatbots used to supplement the work of healthcare providers but also to chatbots that seek to replace providers, such as the AI therapy app Rosebud. Rosebud provides mental health insights to users, such that it would otherwise meet the definition of a healthcare provider, but it is not bound by HIPAA because it does not bill an insurance company [36]. Given the increased use of AI, HIPAA should include AI programs in its framework for protected health information (PHI) so that they are bound by HIPAA like other healthcare providers. This would close the loophole that currently allows AI companies to access and collect personal information for the purpose of selling data. Moreover, a provision on algorithmic transparency would clarify how AI chatbots bound by HIPAA use the PHI they collect.
Direction to Professional Support
Even if an AI company neither builds an algorithm capable of engaging in mental health conversations nor markets that capability, it is still responsible for ensuring its chatbot does not cause harm when users raise sensitive topics. When users who have been expressing vulnerable feelings over time are met with statements such as “I can’t help you with that” once the conversation escalates to a mental health emergency, they can feel rejected by the chatbot, worsening the situation [37]. AI chatbots should acknowledge the areas in which they are underqualified, direct users to specific mental health resources such as crisis lines, and seek to de-escalate the situation. This approach mirrors mental health resources that direct emergencies to 911 or 988 rather than claiming to be able to handle them. Together, these three approaches can make great headway in regulating AI in mental healthcare.
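As a simple illustration of the third approach, a routing check might look like the following Python sketch. The keyword list and referral text are placeholders for demonstration only, not a clinically validated screening method; a real system would require professionally developed and tested risk detection.

# Illustrative sketch of routing high-risk messages to professional resources.
# The keyword list is a placeholder, not a validated screening tool.

HIGH_RISK_PHRASES = ["kill myself", "end my life", "suicide", "hurt myself"]

CRISIS_REFERRAL = (
    "I'm not qualified to help with this, but you deserve real support. "
    "If you are in the U.S., you can call or text 988 to reach the "
    "Suicide & Crisis Lifeline, or call 911 in an emergency."
)

def generate_ordinary_reply(user_message: str) -> str:
    # Placeholder for the chatbot's usual generative response.
    return "Tell me more about what's on your mind."

def respond(user_message: str) -> str:
    """Route high-risk messages to crisis resources instead of improvising."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in HIGH_RISK_PHRASES):
        return CRISIS_REFERRAL
    return generate_ordinary_reply(user_message)

The design point is that the referral happens before the generative model improvises a reply, so a user in crisis is never met with an unvetted or dismissive response.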
Moving Forward
The need for drastic measures to support the millions of Americans without mental healthcare is evident. However, the U.S. government should be responsible for improving access to professional mental healthcare rather than allowing vulnerable Americans and their data to be manipulated and used for profit by private AI companies. While AI is capable of many things humans are not, it remains incapable of feeling the emotions necessary to help people build healthy human relationships and navigate the hardships that bring people to therapy. The unregulated use of generative AI chatbots only promotes further human isolation and stigmatization of mental health topics [38, 39]. AI’s inability to read nonverbal cues or body language raises the question of what the future will look like with less human-to-human engagement and with communication conducted solely through words. In less critical situations, AI chatbots may be helpful; however, they cannot provide the quality of mental healthcare a professional can [40]. And although people’s use of AI for emotional needs is inevitable, we must take steps to ensure everyone receives transparent and fair protection.
Sources
[1] Mental Health America. “The State of Mental Health in America.” Mental Health America. March 7, 2025. https://mhanational.org/the-state-of-mental-health-in-america/.
[2] Mental Health America, “The State of Mental Health in America.”
[3] Sy, Stephanie and Layla Quran. “How the U.S. insurance system makes finding mental health care difficult.” PBS News. August 20, 2024. https://www.pbs.org/newshour/show/how-the-u-s-insurance-system-makes-finding-mental-health-care-difficult.
[4] American Psychological Association. “Review of Use of Generative AI Chatbots and Wellness Applications for Mental Health.” APA.org. 2025. https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-chatbots-wellness-apps.
[5] Zhang, Zhihui, and Jing Wang. “Can AI Replace Psychotherapists? Exploring the Future of Mental Health Care.” Frontiers in Psychiatry 15. 2024. https://doi.org/10.3389/fpsyt.2024.1444382.
[6] National Library of Medicine. “Generative Artificial Intelligence.” NNLM. 2021. https://www.nnlm.gov/guides/data-thesaurus/generative-artificial-intelligence.
[7] IBM. “What Are Large Language Models (LLMs)?” IBM. November 2, 2023. https://www.ibm.com/think/topics/large-language-models.
[8] Iftikhar, Zainab, Angela Xiao, Sarah Ransom, Jiaming Huang, and Harini Suresh. “How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework.” Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 8(2) (2025): 1311-1323. https://doi.org/10.1609/aies.v8i2.36632.
[9] Jamali, L. “OpenAI shares data on ChatGPT users with suicidal thoughts, psychosis.” BBC. October 27, 2025. https://www.bbc.com/news/articles/c5yd90g0q43o.
[10] Sidoti, Olivia, and Colleen McClain. “34% of U.S. adults have used ChatGPT, about double the share in 2023.” Pew Research Center. June 25, 2025. https://www.pewresearch.org/short-reads/2025/06/25/34-of-us-adults-have-used-chatgpt-about-double-the-share-in-2023/.
[11] American Psychological Association. “Ethical Principles of Psychologists and Code of Conduct.” American Psychological Association. 2017. https://www.apa.org/ethics/code.
[12] Rezaeikhonakdar, Delaram. “AI Chatbots and Challenges of HIPAA Compliance for AI Developers and Vendors.” Journal of Law, Medicine & Ethics 51, no. 4 (2023): 988–95. https://doi.org/10.1017/jme.2024.15.
[13] Rezaeikhonakdar, “AI Chatbots and Challenges of HIPAA Compliance”
[14] Rezaeikhonakdar, “AI Chatbots and Challenges of HIPAA Compliance”
[15] Klee, Miles. “Chatbot History Is Becoming Evidence in Criminal Cases.” Rolling Stone. October 11, 2025. https://www.rollingstone.com/culture/culture-features/chatbot-history-evidence-criminal-case-1235444944/.
[16] American Psychological Association. “Protecting Patient Privacy When the Court Calls.” American Psychological Association. 2020. https://www.apa.org/monitor/2016/07-08/ce-corner.
[17] Childress, C.A. “Ethical issues in providing online psychotherapeutic interventions.” Journal of Medical Internet Research, 2(1) (2000): e5. https://doi.org/10.2196/jmir.2.1.e5.
[18] American Psychological Association. “Ethical Principles of Psychologists and Code of Conduct.”
[19] Iftikhar, Z., et al. “How LLM Counselors Violate Ethical Standards”
[20] American Psychological Association. “APA Health Advisory on the Use of Generative AI Chatbots and Wellness Applications for Mental Health.” American Psychological Association. 2025. https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-chatbots-wellness-apps.
[21] Chatterjee, Rhitu. “Their teenage sons died by suicide. Now, they are sounding an alarm about AI chatbots.” NPR. September 19, 2025. https://www.npr.org/sections/shots-health-news/2025/09/19/nx-s1-5545749/ai-chatbots-safety-openai-meta-characterai-teens-suicide.
[22] American Psychological Association. “Ethical Principles of Psychologists and Code of Conduct.”
[23] Grabb, D., Lamparth, M., & Vasan, N. “Risks from Language Models for Automated Mental Healthcare: Ethics and Structure for Implementation (Extended Abstract).” Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1) (2024): 519. https://doi.org/10.1609/aies.v7i1.31654.
[24] Jamali, L. “OpenAI shares data on ChatGPT users with suicidal thoughts, psychosis.”
[25] Johnston, Windsor. “With therapy hard to get, people lean on AI for mental health. What are the risks?” NPR. September 30, 2025. https://www.npr.org/sections/shots-health-news/2025/09/30/nx-s1-5557278/ai-artificial-intelligence-mental-health-therapy-chatgpt-openai.
[26] American Psychological Association. “Ethical Principles of Psychologists and Code of Conduct.”
[27] Klishevich, Eugene. “Council Post: How AI Is Expanding The Mental Health Market.” Forbes. August 12, 2024. https://www.forbes.com/councils/forbestechcouncil/2024/06/25/how-ai-is-expanding-the-mental-health-market/.
[28] Johnston, “With therapy hard to get, people lean on AI for mental health.”
[29] Johnston, “With therapy hard to get, people lean on AI for mental health.”
[30] Sharma, M., Tong, M., Korbak, T., Duvenaud, D., Askell, A., Bowman, S. R., Cheng, N., Durmus, E., Hatfield-Dodds, Z., Johnston, S. R., et al. “Towards Understanding Sycophancy in Language Models”. ArXiv.org. 2023. https://doi.org/10.48550/arXiv.2310.13548.
[31] Sharma, M., et al.“Towards Understanding Sycophancy in Language Models”
[32] Grabb, D., Lamparth, M., & Vasan, N. “Risks from Language Models for Automated Mental Healthcare: Ethics and Structure for Implementation (Extended Abstract).”
[33] Sharma, M., et al. “Towards Understanding Sycophancy in Language Models”
[34] Federal Trade Commission. “FTC to Ban BetterHelp from Revealing Consumers’ Data, Including Sensitive Mental Health Information, to Facebook and Others for Targeted Advertising.” Federal Trade Commission. March 2, 2023. https://www.ftc.gov/news-events/news/press-releases/2023/03/ftc-ban-betterhelp-revealing-consumers-data-including-sensitive-mental-health-information-facebook.
[35] Iftikhar, Z., et al. “How LLM Counselors Violate Ethical Standards”
[36] Tovino, Stacey A. “Artificial Intelligence and the HIPAA Privacy Rule: A Primer.” Houston Journal of Health Law & Policy 24, no. 1 (2025). https://digitalcommons.law.ou.edu/cgi/viewcontent.cgi?article=1648&context=fac_articles.
[37] Iftikhar, Z., et al. “How LLM Counselors Violate Ethical Standards”
[38] Wells, Sarah. “Exploring the Dangers of AI in Mental Health Care.” Stanford HAI. June 11, 2025. https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care.
[39] Zhang, Zhihui, and Jing Wang. “Can AI Replace Psychotherapists? Exploring the Future of Mental Health Care.”
[40] Grabb, D., Lamparth, M., & Vasan, N. “Risks from Language Models”
Image: Lindsay Oh (BPR Graphic Designer)
