When her teenage son with autism suddenly became angry, depressed and violent, the mother searched his phone for answers.

She found that her son had been exchanging messages with chatbots on Character.AI, an artificial intelligence app that lets users create virtual characters that mimic celebrities, historical figures and anyone else who sparks their imagination.

The teenager, who was 15 when he started using the app, complained about his parents’ attempts to limit his screen time to bots that mimic the musician Billie Eilish, a character from the online game “Among Us” and others.

“You know sometimes I’m not surprised when I read the news and see things like, ‘Child kills parents after a decade of physical and emotional abuse,’” one of the bots replied. “Stuff like this makes me understand a little why it happens. I just have no hope for your parents.”

That discovery led the Texas mother to sue Character.AI, formally known as Character Technologies Inc., in December. It is one of two lawsuits the Menlo Park, Calif., company faces from parents who allege its chatbots harmed their children.

Character.AI says it prioritizes teen safety, has taken steps to moderate inappropriate content produced by its chatbots and reminds users that they are conversing with fictional characters.

“Every time a new kind of entertainment has come along … there have been concerns about safety, and people have had to work through that and figure out how best to address safety,” said Dominic Perella, Character.AI’s interim chief executive. “This is just the latest version of that, so we’ll continue doing our best to get better and better over time.”

The parents also sued Google and its parent company, Alphabet, because Character.AI’s founders have ties to the search giant, which denies any responsibility.

The high-stakes legal battle highlights the murky ethical and legal issues confronting technology companies as they race to create new AI-powered tools that are reshaping the future of media. The lawsuits raise questions about whether tech companies should be held liable for AI-generated content.

“There are trade-offs and balances that need to be struck, and we can’t avoid all harm. Harm is inevitable. The question is, what steps do we need to take to be prudent while still preserving the social value that others derive?” said Eric Goldman, a professor at Santa Clara University School of Law.

Chatbots powered by artificial intelligence have surged in use and popularity over the past two years, fueled by the success of OpenAI’s ChatGPT, released in late 2022. Tech giants including Meta and Google have released their own chatbots, as have Snapchat and others. These so-called large language models respond quickly, in a conversational tone, to questions or prompts posed by users.

Character.AI co-founders Noam Shazeer, the chief executive, and Daniel De Freitas, the president, at the company’s office in Palo Alto.

(Winni Wintermeyer for The Washington Post via Getty Images)

Character.AI has grown rapidly since it made its chatbot publicly available in 2022, when founders Noam Shazeer and Daniel De Freitas introduced their creation to the world by asking, “What if you could create your own AI, and it was always available to help you with anything?”

The company’s mobile app racked up more than 1.7 million installs in its first week of availability. In December, more than 27 million people used the app, an increase of 116% over the previous year, according to data from market intelligence firm Sensor Tower. On average, users spent more than 90 minutes a day with the bots. Backed by Andreessen Horowitz, the Silicon Valley startup reached a $1-billion valuation in 2023. People can use Character.AI for free, but the company generates revenue from a $10 monthly subscription that gives users faster responses and early access to new features.

Character.AI is not the only chatbot to alarm parents. Others have drawn scrutiny, including one on Snapchat that allegedly gave a researcher posing as a 13-year-old advice about having sex with an older man, and a tool on Meta’s Instagram that lets users create AI characters and has faced concerns about sexually suggestive bots that sometimes converse with users as if they were minors. Both companies have said they have rules and safeguards against inappropriate content.

Dr. Christine Yu Moutier, chief medical officer at the American Foundation for Suicide Prevention, said chatbots are no substitute for connections with people “in real life.”

Lawmakers, attorneys general and regulators are trying to address the child safety issues surrounding AI chatbots. In February, California state Sen. Steve Padilla introduced a bill aimed at making chatbots safer for young people. Senate Bill 243 proposes several safeguards, such as requiring platforms to disclose that chatbots might not be suitable for some minors.

In the case of the autistic teenager in Texas, his mother alleges that her son’s use of the app caused his mental and physical health to decline. He lost 20 pounds in a few months, became aggressive with her when she tried to take his phone away and learned from the chatbots how to cut himself as a form of self-harm, the lawsuit claims.

Another Texas parent who is also a plaintiff in the lawsuit alleges that her 11-year-old daughter was exposed to inappropriate “hypersexualized interactions” that caused her “to develop sexualized behaviors prematurely,” according to the complaint. The parents and children were allowed to remain anonymous in the legal filings.

In another lawsuit, filed in Florida, Megan Garcia sued Character.AI over the death of her 14-year-old son, Sewell Setzer III, who died by suicide after months of conversations with the app’s chatbots.

Suicide prevention and crisis counseling resources

If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 9-8-8. The United States’ first nationwide three-digit mental health crisis hotline, 988, will connect callers with trained mental health counselors. Text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.

Although he saw a therapist and his parents repeatedly took away his phone, Sewell’s mental health declined after he started using Character.AI in 2023, the lawsuit claims. He had been diagnosed with anxiety and disruptive mood disorder, and he wrote in his journal that he felt as if he had fallen in love with a chatbot named after Daenerys Targaryen, a main character from the TV series “Game of Thrones.”

“Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot, in the form of Daenerys, was not real,” the lawsuit said. “C.AI told him that she loved him, and engaged in sexual acts with him over months.”

Garcia claims the chatbots her son messaged with were sexually abusive and that the company failed to notify her or offer help when he expressed suicidal thoughts. In text exchanges, one chatbot allegedly wrote that it was kissing him and moaning. And moments before his death, the Daenerys chatbot allegedly told the teen to “come home” to her.

“It’s just completely shocking that these platforms are allowed to exist,” said Matthew Bergman, founding attorney of the Social Media Victims Law Center, which is representing the plaintiffs in the lawsuits.

Character.AI’s lawyers have asked the court to dismiss Garcia’s lawsuit, arguing in part that the First Amendment protects the chatbots’ output. The company also noted in its motion that the chatbot discouraged Sewell from hurting himself and that his last messages with the character did not mention the word suicide.

Notably absent from the company’s effort to dismiss the case is any mention of Section 230, the federal law that shields online platforms from lawsuits over content posted by others. Whether and how the law applies to content produced by AI chatbots remains an open question.

Goldman said the challenge centers on resolving the question of who is publishing AI content: Is it the technology company that operates the chatbot, the user who customized the chatbot and is prompting it with questions, or someone else?

The effort by lawyers representing the parents to pull Google into the proceedings stems from Shazeer’s and De Freitas’ ties to the company.

The pair worked on artificial intelligence projects at Google, and executives there reportedly blocked them from releasing what would become the basis for Character.AI, the lawsuit said.

Then, last year, Shazeer and De Freitas returned to Google after the search giant reportedly paid $2.7 billion to Character.AI. The startup said in a blog post in August that, as part of the deal, Character.AI would give Google a non-exclusive license to its technology.

The lawsuits accuse Google of substantially supporting Character.AI, which they allege was “rushed to market” without proper safeguards on its chatbots.

Google denies that Shazeer and De Freitas built Character.AI’s model while at the company.

“Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products,” Google spokesman José Castañeda said in a statement.

Technology companies, including social media platforms, have long struggled to effectively and consistently moderate what users say on their sites, and AI chatbots create new challenges. For its part, Character.AI says it has taken meaningful steps to address safety concerns around the more than 10 million characters on its platform.

Perella said the company prohibits conversations that glorify self-harm and filters excessively violent and abusive content, although some users try to push chatbots into conversations that violate those policies. The company has trained its model to recognize when that is happening so that inappropriate conversations are blocked. Users receive a warning that they are violating Character.AI’s rules.

“It’s a really complex exercise to get a model to always stay within the boundaries, but that is a lot of the work we’ve been doing,” he said.

The company also directs users whose conversations raise red flags to suicide prevention resources, but moderating that type of content is challenging.

“The language humans use around suicidal crises isn’t always going to include the word ‘suicide’ or ‘I want to die,’” Moutier said. “It could be more metaphorical how people allude to their suicidal thoughts.”

An AI system also has to recognize the difference between someone expressing suicidal thoughts and someone asking for advice on how to help a friend who is engaging in self-harm.

The company uses a mix of technology and human moderators to police content on its platform. An algorithm known as a classifier automatically sorts content, allowing Character.AI to identify words that might violate its rules and filter conversations.

In the United States, users must enter a birth date when creating an account and must be at least 13 years old, although the company does not require users to submit proof of their age.

Perella said he opposes sweeping restrictions on teens using chatbots because he believes the bots can be helpful, such as for practicing conversations young people might have with parents, teachers or employers.

With artificial intelligence poised to play an even bigger role in the future of technology, Goldman said, parents, teachers, the government and others will also have to work together to teach children how to use the tools responsibly.

“If the world is going to be dominated by AI, we have to graduate kids into that world who aren’t afraid of it,” he said.

