Experts have warned about the racist abuse images, generated with X's Grok AI, that flooded the platform in December. According to Signify, an organization that works with prominent sports groups and clubs to track and report online hate, the volume of such images on the platform and the wider internet is likely to escalate in the near future.

Abuse images flooded X after Grok was updated

According to The Guardian, several reports of racist images created by Grok AI have been made since its update, including photorealistic racist images of football players and managers. One image shows a Black player picking cotton, while another depicts a player eating bananas while surrounded by monkeys. Other images show players and managers meeting and chatting with controversial historical figures such as Osama bin Laden, Adolf Hitler, and Saddam Hussein.

Signify has noted with concern the sudden surge of Grok-generated images flooding X, and it believes more of them are likely to appear across social media as photorealistic AI makes such images easier to produce and more prevalent.

"It is a problem now, but it's really just the start of a coming problem. It is going to get so much worse and we're just at the start. I expect over the next 12 months it will become incredibly serious," Signify said.

X's generative AI tool, Grok, was launched by Elon Musk in 2023. It recently added a new text-to-image feature known as Aurora, which creates photorealistic images from simple user prompts. An earlier, less advanced version known as Flux also drew controversy this year, as it was found to do things that many other similar tools would not, according to The Guardian.
These included depicting copyrighted characters and public figures in compromising positions, taking drugs, or committing acts of violence.

X turned into a platform for hate

Callum Hood, head of research at the Center for Countering Digital Hate (CCDH), accused X of becoming a platform for hate. Hood said X incentivizes and rewards the spread of hate through revenue sharing, and that AI imagery has made producing it easier than ever.

"The thing that X has done, to a degree that no other mainstream platform has done, is to offer cash incentives to accounts to do this, so accounts on X are very deliberately posting the most naked hate and disinformation possible," Hood said.

Experts also expressed concern at the relative lack of restrictions on what users can ask the generative AI to produce, noting how easily Grok's guidelines can be circumvented through "jailbreaking." A CCDH report found that, when given a set of hateful prompts, the model generated images for 90% of them; 30% were produced without any pushback, and a further 50% after a jailbreak.

The Premier League said it was aware of the images of the football players and has assigned a dedicated team to find and report racist abuse directed at athletes, which it said could lead to legal action. The league also revealed it received more than 1,500 abuse reports in 2024 and has introduced filters that players can use on their social media accounts to block out large volumes of abuse.

"Discrimination has no place in our game or wider society. We continue to urge social media companies and the relevant authorities to tackle online abuse and for action to be taken against offenders of this unacceptable behavior," an FA spokesperson said.