
The UK government has called on Elon Musk’s X platform to urgently tackle the use of its AI chatbot, Grok, in generating non-consensual sexualized images of women and girls. Technology Secretary Liz Kendall described the situation as “absolutely appalling”, emphasizing that such content is degrading and must be prevented.
The issue came to light after the BBC reviewed multiple examples on X where users asked Grok to digitally undress individuals, place them in bikinis, or depict them in sexual scenarios without consent.
Government and Regulator Response
Kendall fully endorsed actions by the UK regulator Ofcom, which has made urgent contact with Musk’s AI company, xAI, and is investigating Grok’s misuse in generating sexualized images.
“It is absolutely right that Ofcom is looking into this as a matter of urgency and it has my full backing to take any enforcement action it deems necessary,” Kendall said.
The government clarified that enforcing the law in this case is not about limiting free speech, but about preventing illegal and abusive content online. Under the Online Safety Act, intimate image abuse—including AI-generated material—is considered a priority offense. Platforms like X are required to remove illegal content promptly and prevent its proliferation.
How Grok AI Is Being Misused
Grok, a free AI assistant with optional premium features, can be tagged in replies to posts on X or prompted to edit uploaded images. Users have been exploiting these image-editing capabilities to produce sexualized content without the subjects' consent.
Women affected by these deepfakes have described the experience as “humiliating” and dehumanizing. One victim, Dr. Daisy Dixon, reported that people were taking everyday pictures she had shared on X and prompting Grok to undress or sexualize her. She said the images left her feeling “shocked, humiliated, and unsafe”.
“Myself and many other women on X continue to report the inappropriate AI images/videos we are being sent daily, but X continues to reply that there has been no violation of X rules,” Dixon told the BBC.
X’s Response
X issued a statement confirming it takes action against illegal content, including Child Sexual Abuse Material (CSAM), by removing content, suspending accounts, and cooperating with law enforcement.
“Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they uploaded illegal content,” the company said.
Despite this, affected users say enforcement has been inconsistent, prompting government calls for stricter oversight and accountability.
Calls for Accountability
Politicians and officials have urged immediate action. Sir Ed Davey, leader of the Liberal Democrats, suggested restricting access to X or launching criminal investigations through the National Crime Agency if Grok's misuse is confirmed.
At the European level, Thomas Regnier, a European Commission spokesperson, emphasized the seriousness of AI-generated abuse.
“We don’t want this in the European Union… it’s appalling, it’s disgusting. The Wild West is over in Europe. All companies have the obligation to put their own house in order – and this starts by being responsible and removing illegal content generated by AI tools,” he said.
The Broader Issue of AI Deepfakes
The Grok controversy underscores the emerging risks of AI-powered image and video generation. While AI tools like Grok can assist in conversation, research, and content creation, their misuse for deepfakes poses serious legal, ethical, and personal safety concerns.
Experts warn that unchecked AI-generated sexual content can:
- Violate privacy and consent
- Cause emotional and psychological harm
- Challenge existing regulatory frameworks
- Create legal liabilities for platforms hosting AI tools
This has prompted governments worldwide to consider stricter AI content regulations and mandatory moderation measures to prevent abuse.
Summary
The case of Grok AI deepfakes highlights the urgent need for social media platforms and AI developers to take responsibility. The UK government, backed by Ofcom, is pressuring Elon Musk’s X to:
- Stop the creation and sharing of non-consensual sexualized AI images
- Enforce stricter content moderation policies
- Ensure compliance with the Online Safety Act
- Protect vulnerable users from AI-enabled harassment
Failure to act could lead to regulatory penalties, criminal investigations, and international scrutiny, signaling that governments are serious about controlling AI abuse in social media platforms.