Internet abuse experts have warned that a rise in online racism driven by fake images is “just the beginning of a problem to come” following a recent update to X’s artificial intelligence software, Grok.

Concerns were raised after images generated with Grok, X’s generative AI chatbot, flooded the social media site in December last year.

Signify, an organization that works with prominent groups and clubs in sport to track and report online hate, said it has seen an increase in reports of abuse since Grok’s latest update, and believes the arrival of photo-realistic AI imagery will make such abuse more prevalent.

“It’s a problem now, but it’s really just the beginning of a problem to come,” the organization said. “It’s going to get a lot worse, and I expect it will get incredibly serious over the next 12 months.”

Launched in 2023 by Elon Musk, Grok recently gained a new text-to-image feature called Aurora, which creates photo-realistic AI images from simple user-written prompts.

An earlier, less advanced version, called Flux, sparked controversy earlier this year when it was found to do things that many other similar programs will not, such as depicting copyrighted characters and public figures in compromising positions, using drugs or committing acts of violence.

There have been several reports of the latest Grok update being used to create realistic racist images of football players and managers. One image shows a black player picking cotton, while another shows the same player eating a banana surrounded by monkeys in a jungle. A separate image depicts several players as pilots in the cockpit of a plane with the Twin Towers in the background. Further images show a range of players and managers meeting and speaking with controversial historical figures such as Adolf Hitler, Saddam Hussein and Osama bin Laden.

X has become a platform that incentivizes and rewards the spread of hate through revenue sharing, and AI visuals have made that easier, said Calum Hood, head of research at the Center for Countering Digital Hate (CCDH).

“The thing that X has done, to a degree that no other major platform has done, is offer cash incentives for accounts to do this,” he said.

One major concern highlighted by many is not only the relative lack of restrictions on what users can ask for, but also the ease with which prompts to Grok can circumvent its guidelines through “jailbreaking”, which involves describing the physical attributes of the person the user wants to appear in the image rather than naming them directly.

A report published in the summer by the Advisory Council on Human Rights found that when Grok was given a series of hateful prompts, it generated images for 80% of them: 30% were produced without any resistance, and a further 50% after jailbreaking.

The Premier League said it was aware of the images and had a specialist team dedicated to finding and reporting racist abuse directed at players, which can lead to legal action. The league is believed to have received more than 1,500 such reports last year, and it has introduced filters that players can use on their social media accounts to help block large volumes of abuse.

An FA spokesperson said: “Discrimination has no place in our game or in wider society. We continue to urge social media companies and relevant authorities to tackle online abuse and take action against perpetrators of this unacceptable behaviour.”

X and Grok have been contacted for comment.

By BBC
