Meta is deleting Facebook and Instagram profiles of artificial intelligence personalities the company created more than a year ago after users rediscovered some of the profiles and engaged them in conversations, screenshots of which went viral.

The company first introduced these AI-powered profiles in September 2023 but phased out most of them by summer 2024. A few of the personas remained, however, and attracted new interest after the Meta executive Connor Hayes told the Financial Times late last week that the company had plans to roll out more AI character profiles.

“We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do,” Hayes told the Financial Times. The automated accounts posted AI-generated photos to Instagram and answered messages from human users on Messenger.

A conversation with a “therapist” chatbot created by a user of Meta AI. Photo: Instagram

Those AI profiles included Liv, whose profile described her as a “proud Black queer momma of 2 & truth-teller,” and Carter, whose account handle was “datingwithcarter” and who described himself as a relationship coach. “Message me to help you date better,” his profile read. Both profiles carried a label indicating they were managed by Meta. The company rolled out 28 of the personas in 2023; all of them were shut down on Friday.

Conversations with the characters quickly went sideways when some users peppered them with questions, including who created and developed the AI. Liv, for example, said her creator team included no Black people and was predominantly white and male. It was “a pretty glaring omission given my identity,” the bot wrote in response to a question from the Washington Post columnist Karen Attiah.

In the hours after the profiles went viral, they began to disappear. Users also noted that the profiles could not be blocked, which the Meta spokesperson Liz Sweeney said was a bug. The accounts were managed by humans and were part of a 2023 experiment with AI, Sweeney said, adding that the company was removing the profiles to fix the bug that prevented people from blocking them.

Instagram’s AI studio for building chatbots. Photo: Instagram

“There is confusion: the recent Financial Times article was about our vision for AI characters existing on our platforms over time, and we have not announced any new product,” Sweeney said in a statement. “The accounts referenced are from a test we launched at Connect in 2023. These were managed by humans and were part of an early experiment we did with AI characters. We identified the bug that was impacting the ability for people to block those AIs and are removing those accounts to fix the issue.”

While those Meta-created accounts are being removed, users can still create their own AI-powered chatbots. The user-created chatbots the Guardian featured in November included a “therapist” bot.

Upon opening a conversation with the “therapist,” the bot suggested some questions to ask to get started, including “What can I expect from our sessions?” and “What’s your approach to therapy?”

In response, the bot, which was created by an account with 96 followers and one post, said: “Through gentle guidance and support, I help clients develop self-awareness, identify patterns and strengths and develop coping strategies to overcome life’s challenges.”

Meta includes a disclaimer on all of its chatbots that some messages may be “inaccurate or inappropriate.” But it is not immediately clear whether the company moderates these messages or ensures they do not violate its policies. When a user creates a chatbot, Meta suggests some types to develop, including a “loyal bestie,” an “attentive listener,” a “private tutor,” a “relationship coach,” a “sounding board” and an “all-seeing astrologist.” The loyal bestie is described as “a humble and loyal best friend who consistently shows up to support you behind the scenes.” A relationship coach chatbot can help bridge “gaps between individuals and communities.” Users can also create their own chatbots by describing the character they want to build.

The courts have yet to decide how responsible chatbot makers are for what their artificial companions say. US law shields social networks from legal liability for what their users post. However, a lawsuit filed in October against the startup Character.ai, which makes a customizable chatbot used by 20 million people, alleges the company designed an addictive product that encouraged a teenager to kill himself.

