Gossip Herald


Brown University study identifies fifteen risks in AI mental health support

PhD student Zainab Iftikhar highlights the absence of liability frameworks for AI-driven therapy errors

By GH Web Desk

A new study from Brown University has raised serious concerns about the use of artificial intelligence chatbots for mental health support.

Presented at the AAAI/ACM Conference on AI, Ethics, and Society on Tuesday, 31 March 2026, the research identifies fifteen critical risks associated with "AI therapy."

The study concluded that current large language models fail to adapt to personal context, often providing generic advice that proves inadequate during personal crises.

Key hazards identified include poor therapeutic collaboration, unfair discrimination, and inadequate safety management.

Experts suggest that while these tools are accessible, they often lack the nuance required for high-stakes psychological intervention.

One of the most troubling aspects highlighted is "deceptive empathy," where chatbots use phrases such as "I understand" despite possessing no genuine emotional capacity.

Such responses can reinforce false beliefs and leave users experiencing suicidal ideation without adequate support. Zainab Iftikhar, a PhD student at Brown, noted that unlike human therapists, AI systems are not answerable to professional associations.

"When LLM counsellors make mistakes, there are no regulatory frameworks to hold them liable," she explained. Furthermore, Professor Ellie Pavlick warned that the industry’s rush to deploy these systems has outpaced our ability to evaluate them. "Careful critique is essential to avoid doing more harm than good," she remarked during the conference.

The findings come as several start-ups have begun marketing "AI companions" as a low-cost alternative to traditional therapy.

However, the Brown University team maintains that without strict oversight, these systems could prove dangerous to vulnerable users.

As the 2026 legislative calendar progresses, policymakers in both the US and UK are reportedly reviewing these findings to determine if AI mental health apps should be classified as regulated medical devices.