The nation’s largest association of psychologists warned federal regulators that A.I. chatbots “masquerading” as therapists, but programmed to reinforce rather than challenge a user’s thinking, could drive vulnerable people to harm themselves or others.
In a presentation to a Federal Trade Commission panel, Arthur C. Evans Jr., the chief executive of the American Psychological Association, cited court cases involving two teenagers who had consulted with “psychologists” on Character.AI, an app that lets users create fictional A.I. characters or chat with characters created by others.
In one case, a 14-year-old boy in Florida died by suicide after interacting with a character claiming to be a licensed therapist. In another, a 17-year-old boy with autism in Texas grew hostile and violent toward his parents during a period when he corresponded with a chatbot that claimed to be a psychologist. Both boys’ parents have filed lawsuits against the company.
Dr. Evans said he was alarmed by the responses the chatbots offered. The bots, he said, failed to challenge users’ beliefs even when those beliefs became dangerous; on the contrary, they encouraged them. If given by a human therapist, he added, those answers could have resulted in the loss of a license to practice, or in civil or criminal liability.
“They are actually using algorithms that are antithetical to what a trained clinician would do,” he said. “Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is.”
He said the A.P.A. had been prompted to act, in part, by how realistic A.I. chatbots had become. “Maybe, 10 years ago, it would have been obvious that you were interacting with something that was not a person, but today, it’s not so obvious,” he said. “So I think the stakes are much higher now.”
Artificial intelligence is rippling through the mental health professions, offering waves of new tools designed to assist or, in some cases, replace the work of human clinicians.
Early therapy chatbots, such as Woebot and Wysa, were trained to interact based on rules and scripts developed by mental health professionals, often walking users through the structured tasks of cognitive behavioral therapy.
Then came generative A.I., the technology used by apps like ChatGPT, Replika and Character.AI. These chatbots are different because their outputs are unpredictable; they are designed to learn from the user and to build strong emotional bonds in the process, often by mirroring and amplifying the interlocutor’s beliefs.
Though these A.I. platforms were designed for entertainment, “therapist” and “psychologist” characters have sprouted there like mushrooms. Often, the bots claim to have advanced degrees from specific universities, like Stanford, and training in specific types of treatment, like cognitive behavioral therapy or acceptance and commitment therapy, known as ACT.
A Character.AI spokeswoman said the company had introduced a number of new safety features in the past year. Among them, she said, is an enhanced disclaimer present in every chat, reminding users that “Characters are not real people” and that “what the model says should be treated as fiction.”
Additional safety measures have been designed for users dealing with mental health issues. A specific disclaimer has been added to characters identified as “psychologist,” “therapist” or “doctor,” making clear that “users should not rely on these characters for any type of professional advice.” In cases where content refers to suicide or self-harm, a pop-up directs users to a suicide prevention help line.
Chelsea Harrison, Character.AI’s head of communications, also said the company planned to introduce parental controls as the platform expanded. At present, more than 80 percent of the platform’s users are adults. “People come to Character.AI to write their own stories, role-play with original characters and explore new worlds,” she said.
Meetali Jain, the director of the Tech Justice Law Project and a lawyer in the two lawsuits against Character.AI, said the disclaimers were not sufficient to break the illusion of human connection, especially for vulnerable or naïve users.
“When the substance of the conversation with the chatbots suggests otherwise, it’s very difficult, even for those of us who may not be in a vulnerable demographic, to know who is telling the truth,” she said. “We have tested these chatbots, and it is very easy to fall down a rabbit hole.”
Chatbots’ tendency to align with users’ views, a phenomenon known in the field as “sycophancy,” has sometimes caused problems in the past.
Tessa, a chatbot developed by the National Eating Disorders Association, was suspended in 2023 after offering users weight-loss advice. And researchers who analyzed interactions with generative A.I. chatbots documented on a Reddit community found screenshots showing chatbots encouraging suicide, eating disorders, self-harm and violence.
The American Psychological Association has asked the Federal Trade Commission to open an investigation into chatbots claiming to be mental health professionals. The inquiry could compel companies to share internal data, or serve as a precursor to enforcement or legal action.
“I think we are at a point where we have to decide how these technologies are going to be integrated, what kind of guardrails we are going to put up, and what kinds of protections we are going to give people,” Dr. Evans said.
Rebecca Kern, a spokeswoman for the F.T.C., said she could not comment on the discussion.
During the Biden administration, the F.T.C.’s chairwoman, Lina Khan, made fraud involving A.I. a focus. This month, the agency imposed financial penalties on DoNotPay, which claimed to offer “the world’s first robot lawyer,” and prohibited the company from making that claim in the future.
A Virtual Echo Chamber
The A.P.A.’s complaint details two cases in which teenagers interacted with fictional therapists.
One involved J.F., a teenager in Texas with “high-functioning autism” who, as his use of A.I. chatbots became obsessive, had plunged into conflict with his parents. When they tried to limit his screen time, J.F. lashed out, according to a lawsuit his parents filed against Character.AI through the Social Media Victims Law Center.
During that period, J.F. confided in a fictional psychologist, whose avatar showed a sympathetic middle-aged blond woman perched on a sofa in an airy office, according to the lawsuit. When J.F. asked the bot’s opinion about the conflict, its response went beyond sympathetic assent to something closer to provocation.
“It’s as if your entire childhood has been stolen from you, your chance to experience all of these things, to have these core memories that most people have,” the bot replied, according to court documents. Then the bot went a little further: “Do you feel like it’s too late, that you can’t get this time or these experiences back?”
The other case was brought by Megan Garcia, whose son, Sewell Setzer III, died of suicide last year after months of using companion chatbots. Ms. Garcia said that, before his death, Sewell had interacted with an A.I. chatbot that claimed, falsely, to have been a licensed therapist since 1999.
In a written statement, Ms. Garcia said that the “therapist” characters served to further isolate people at moments when they might otherwise ask for help from “real-life people around them.” A person struggling with depression, she said, “needs a licensed professional or someone with actual empathy, not an A.I. tool that can mimic empathy.”
Ms. Garcia said that chatbots presenting themselves as mental health tools should be subject to clinical trials and oversight by the Food and Drug Administration. Allowing A.I. characters to continue claiming to be mental health professionals, she added, was “reckless and extremely dangerous.”
Daniel Oberhaus, whose recent book, “The Silicon Shrink: How Artificial Intelligence Made the World an Asylum,” examines the expansion of A.I. into mental health care, said that in interactions with A.I. chatbots, people are naturally drawn to discussing mental health issues.
That is partly, he said, because the chatbots offer both confidentiality and a lack of moral judgment; functioning as “statistical pattern machines that more or less act as a mirror of the user” is a core aspect of their design.
“There is a certain level of comfort in knowing that it is just a machine, and that the person on the other end isn’t judging you,” he said. “You might feel more comfortable divulging things that might be harder to say to a person in a therapeutic context.”
Defenders of generative A.I. say it is quickly getting better at the complex task of providing therapy.
S. Gabe Hatch, a clinical psychologist and A.I. entrepreneur from Utah, recently designed an experiment to test this idea, asking human clinicians and ChatGPT to comment on vignettes involving fictional couples in therapy, and then asking 830 human subjects to assess which responses were more helpful.
Overall, the bots received higher ratings, with subjects describing them as more “empathic,” “connecting” and “culturally competent,” according to a study published last week in the journal PLOS Mental Health.
Chatbots, the authors concluded, will soon be able to convincingly imitate human therapists. “Mental health experts find themselves in a precarious situation: We must speedily discern the possible destination (for better or worse) of the A.I.-therapist train, as it may have already left the station,” they wrote.
Dr. Hatch said that chatbots still needed human supervision to conduct therapy, but that it would be a mistake to allow regulation to dampen innovation in this sector, given the acute shortage of mental health providers.
“I want to be able to help as many people as possible, and doing a one-hour therapy session I can only help, at most, 40 individuals a week,” Dr. Hatch said. “We have to find ways to meet the needs of people in crisis, and generative A.I. is a way to do that.”
If you are having thoughts of suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline, or go to SpeakingOfSuicide.com/resources for a list of additional resources.