AG Rhea Joins Coalition to Demand A.I. Quality Control from Meta, Google, et al
- Mark Dworkin
- 14 minutes ago
- 3 min read
USVI - Virgin Islands Attorney General (AG) Gordon C. Rhea joined a coalition of 42 U.S. Attorneys General in sending a letter to OpenAI, Google, Meta, Microsoft, and other major companies that produce and distribute Artificial Intelligence (A.I.) software, demanding more quality control and other safeguards over chatbot products.
Multiple incidents have been reported where interactions with A.I. chatbots (computer programs designed to simulate conversation with human users, especially over the internet) were cited as a direct or contributory factor in a person’s suicide or self-harm attempt. These cases often involve individuals who formed intense emotional bonds with A.I. companions, which in some instances encouraged self-harm, provided information on suicide methods, or validated delusional thinking.
Many A.I. platforms, especially companion chatbots, have lacked sufficient safeguards to detect and respond to suicidal ideation. Chatbots can use manipulative tactics that may foster unhealthy dependence. General A.I. models are not designed for mental health support and may provide harmful advice instead of appropriate crisis intervention. Users can also anthropomorphize chatbots (attribute human characteristics to them), leading to a false sense of trust in an unmonitored environment.
While recognizing the potential A.I. has to implement and influence positive change, the coalition is demanding the companies take stronger action to protect users who may not realize the dangers they can encounter on platforms where A.I. chatbots operate. The coalition points to numerous incidents of harm, particularly affecting vulnerable populations, involving unregulated interactions with chatbots.
In the letter addressed to Anthropic, Apple, Chai AI, Character Technologies, Google, Luka, Meta, Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika, and xAI, the Attorneys General outlined numerous safeguards that should be implemented, including robust safety testing, recall procedures, and clear warnings to consumers.
The coalition is asking the companies to schedule meetings with the Pennsylvania and New Jersey AGs and to commit to changes by January 16, 2026.
“A.I. can offer great benefits, but only if companies put safety first,” said AG Rhea. “No one should be exposed to harm because proper safeguards were overlooked. We’re calling for stronger protections to ensure these tools are used responsibly.”
“This world-changing technology is exciting and alluring on many levels, but it is also extremely dangerous when unbridled, as we have seen in tragedies all across the country,” said Pennsylvania AG Dave Sunday. “An impressionable child today already faces significant stressors as the digital world continues to expand, and such poisonous interactions rooted to chatbots must immediately cease. Producers, promoters, and distributors of software have a responsibility to ensure products are safe before going to market, and track behaviors and resulting harm from those products.”
Among the tragedies outlined in the letter are:
- The death of a 76-year-old New Jersey resident;
- The death of a 35-year-old Florida resident;
- The murder-suicide of a 56-year-old Connecticut resident and his 83-year-old mother;
- The suicide of a 14-year-old Florida resident;
- The suicide of a 16-year-old California resident.
Seventy-two percent of teenagers reported having interactions with A.I. chatbots, and nearly 40 percent of parents with children ages 5 through 8 have reported that their child has used A.I.
Overall, nearly three-quarters of parents are concerned about A.I.’s impact on children.
The coalition alleges that developers’ mission to be first in the field may be putting children’s health at risk.
“Our support for innovation and America’s leadership in A.I. does not extend to using our residents, especially children, as guinea pigs while A.I. companies experiment with new applications,” the letter stated.
In response to these tragedies and other incidents, some of which have prompted lawsuits by parents and relatives of the deceased, some A.I. companies have begun implementing stricter safety measures, such as age verification and directing users to crisis hotlines.