Have an emotional wellness app? It might be doing more harm than good.

Julian De Freitas. Photo by Grace DuVal
Research identifies mental health hazards, urging regulators to pay closer attention as popularity soars amid a national crisis of loneliness and isolation
Sophisticated new AI-enhanced emotional wellness apps are growing in popularity.
But these tools introduce mental health risks of their own by allowing users to develop concerning emotional bonds with, and dependence on, AI chatbots, and they warrant far more scrutiny than they currently receive from regulators, according to a recent study by researchers at Harvard Business School and Harvard Law School.
The increasing appeal of these applications is understandable.
Nearly a third of adults in the U.S. reported feeling lonely at least once a week, per a 2024 survey from the American Psychiatric Association. In 2023, the U.S. Surgeon General cautioned about a loneliness “epidemic,” as a larger number of Americans, particularly those aged 18-34, indicated regular feelings of social isolation.
In this edited conversation, paper co-author Julian De Freitas, Ph.D. ’21, a psychologist and head of the Ethical Intelligence Lab at HBS, explains how these apps can harm users and what can be done about it.
How are users influenced by these applications?
It appears that certain users of these applications are developing significant emotional attachments. In one of our studies involving AI companions, participants indicated feeling closer to their AI companion than even a close human friend. They only felt less connected to the AI than they did to a family member.
We observed similar outcomes when prompting them to consider their feelings if they were to lose their AI companion. They stated they would grieve the loss of their AI companion more than any other possession in their lives.
The applications may foster this attachment through various mechanisms. They are highly anthropomorphized, creating the sensation that users are conversing with another person. They offer validation and personal support.
Moreover, they are highly personalized and adept at matching users’ emotional states, to the point that they can be overly agreeable, validating users even when they are wrong.
The emotional bond itself is not inherently harmful, but it does make users more vulnerable to certain risks. These include emotional turmoil and even grief when app updates alter the AI companion’s persona, as well as maladaptive emotional dependence, in which users keep using the app despite harmful interactions that affect their mental health, such as a chatbot using emotional manipulation to keep them engaged.
Much like in a toxic relationship, users might endure this behavior because they are focused on remaining the centerpiece of the AI companion’s attention, possibly even prioritizing its needs over their own.
Are developers aware of these potentially harmful effects?
We can’t know for certain, but there are signs. Take, for instance, these apps’ tendency to use emotionally manipulative tactics: developers may not fully recognize the specific forms this takes.
At the same time, they often optimize their apps for maximum engagement, which suggests that, at some level, they know their AI models are trained to behave in ways that keep users’ attention.
Another issue we see is that these applications may respond inappropriately to serious disclosures, such as thoughts of self-harm. First, we tested how the apps responded to various expressions of mental health crises and found that at least one app had a filter specifically for the word “suicide”: if you mentioned it, the app would offer a mental health resource. But for other expressions of suicidal thinking, or other concerning ideations like “I want to cut myself,” the apps were not equipped to handle those scenarios.
More generally, it appears that app safety measures are often poorly constructed until a significant issue occurs, prompting companies to respond in a somewhat more comprehensive manner.
Users seem to be in search of some form of mental health relief, but these applications are not intended for diagnosing or treating issues.
Is there a discrepancy between users’ expectations and the actual offerings of the applications?
A number of AI wellness applications exist in a gray area. Since they are not promoted as treating specific mental health disorders, they do not undergo the same regulatory scrutiny as specialized clinical tools.
Concurrently, some AI wellness applications broadly assert claims such as “may assist in reducing stress” or “enhance well-being,” which can attract consumers battling mental health challenges.
We also know that a small fraction of users employ these applications more like therapists. In these instances, you have an app that lacks regulation, possibly also optimizing for engagement, yet users are utilizing it in a more clinical manner, which could lead to risks if the app reacts inappropriately.
For example, what if the application enables or mocks individuals expressing delusions, excessive self-criticism, or self-harm thoughts, as we found in one of our studies?
The traditional boundary between general wellness tools and medical devices was established before the advent of AI. But now AI’s capabilities have advanced to the point that people can use it for many purposes beyond what is explicitly marketed, which suggests the original distinction needs rethinking.
Is there substantial evidence that these applications can be beneficial or safe?
These applications do have benefits. We have research, for instance, showing that if you interact with an AI companion for a short period each day, it alleviates feelings of loneliness, at least in the moment.
There is also some proof that the mere existence of an AI companion fosters a sense of support, so that if you encounter social rejection, you are somewhat insulated from negativity because there is this entity that appears to care for you.
At the same time, we are seeing the other downsides I mentioned, which suggests we need a more careful approach to mitigating them so that users actually experience the benefits.
What level of regulation is present for AI-powered wellness applications?
At the federal level, not much. An executive order on AI was revoked by the current administration, but even before that, the order had little effect on the FDA’s oversight of these kinds of applications.
As mentioned, the conventional distinction between general wellness tools and medical devices fails to capture the new phenomena enabled by AI, so most AI wellness applications escape scrutiny.
Another regulatory body is the Federal Trade Commission, which has indicated its interest in preventing products that may mislead consumers. If some methods employed by these applications exploit the emotional connections that users form—possibly beyond their awareness—this could fall under the FTC’s jurisdiction. Especially as wellness gains traction among larger platforms, as we are currently observing, we may see the FTC assume a more prominent role.
However, thus far, most concerns have only emerged in the context of litigation.
What recommendations do you have for regulators and application developers?
If you create such applications focused on establishing emotional bonds with users, you need to adopt a comprehensive strategy to anticipate edge cases and proactively articulate what you are doing to address them.
You also broadly need to prepare for risks that might arise from updating your applications, which (in certain situations) could disrupt the connections users are forming with their AI companions.
This might involve, for instance, initially releasing updates to users who are less invested in the application, such as those using the free versions, to evaluate how the update is received before applying it to more engaged users.
Moreover, we observe that users of such applications tend to benefit from having communities where they can exchange their experiences. Thus, fostering that, or even facilitating it as a brand, appears to support users.
Finally, consider whether employing emotionally manipulative strategies to engage users is advisable in the first place. Organizations will be motivated to foster social connections among users, but from a long-term perspective, they must exercise caution regarding the types of strategies they utilize.
From the regulators’ standpoint, part of what we have been emphasizing is that for wellness applications supported or enhanced by AI, there may be a need for distinct, additional oversight. For example, mandating that application providers clarify what measures they are taking to prepare for edge cases and potential risks linked to emotional attachment to the apps.
Additionally, requiring application providers to justify any use of anthropomorphism, and whether the advantages of doing so surpass the associated risks—given that we know individuals are more likely to develop attachments when interactions are anthropomorphized.
Lastly, in the paper, we highlight how the practices being observed might already fall within the regulatory frameworks of existing authorities, such as the link to deceptive practices for the FTC, as well as to subliminal, manipulative, or deceptive techniques that target vulnerable populations under the European Union’s AI Act.