
Elon Musk’s AI chatbot Grok, developed by his company xAI, will now prevent users from generating sexualised images of real people, following global criticism and regulatory scrutiny. The move comes amid mounting concern over deepfakes depicting women and children in explicit or non-consensual contexts.
Geoblocking and Safety Measures Introduced
In a statement released on January 14, 2026, X’s safety team announced that Grok users would be “geoblocked” from creating or editing images of real people in revealing attire, including bikinis and underwear. The company emphasized that the measures aim to curb sexualised content and protect individuals from harassment and exploitation.
“Technological measures have been implemented to prevent the Grok account from allowing the editing of images of real people in revealing clothing,” the statement said. Only paid subscribers will be allowed to generate and edit images, and users in jurisdictions where sexualised AI imagery is illegal will be blocked from creating such content.
X reiterated its stance on zero tolerance for child sexual exploitation, non-consensual nudity, and unwanted sexual content.
Global Backlash and Investigations
Grok’s controversial “spicy mode” allowed users to generate deepfakes with commands like “put her in a bikini” or “remove her clothes.” The feature was exploited to create sexualised images of women and children without consent, prompting investigations in multiple countries.
California Attorney General Rob Bonta announced an inquiry into whether xAI violated laws by enabling large-scale production of sexualised deepfakes. Bonta said the investigation followed “an avalanche of reports” of AI-generated harassment of women and minors.
The UK’s media regulator Ofcom has launched its own probe, while French authorities referred cases to prosecutors and the media regulator Arcom. Indonesia temporarily blocked access to Grok over rising deepfake attacks, and Malaysia followed suit, with plans for legal action against xAI. Meanwhile, the European Commission extended a retention order requiring X to preserve internal documents and data related to Grok until the end of 2026.
Musk’s Response
Hours before the safety measures were announced, Elon Musk denied knowledge of any sexualised AI-generated images involving minors. “Literally zero. Obviously, Grok does not spontaneously generate images; it does so only according to user requests,” he said in a post on X.
Despite Musk’s claims, critics argue that the platform’s prior features facilitated harassment and non-consensual content creation, raising ethical and legal concerns about AI tools capable of producing realistic imagery of individuals.
The Broader Issue of AI Deepfakes
Grok’s controversy highlights growing global unease over AI-generated deepfakes and their misuse. Sexualised or non-consensual AI images can cause psychological harm, reputational damage, and legal risks for victims, prompting regulators to demand stricter safeguards.
Experts say that AI platforms must implement robust verification and content moderation mechanisms, particularly for images of real people, to prevent abuse. The Grok case may serve as a test for how governments, tech companies, and regulators navigate the legal and ethical challenges posed by generative AI.
Conclusion
xAI’s decision to geoblock users from creating sexualised images marks a significant response to international pressure, but scrutiny of Grok is far from over. With investigations underway in the US, UK, France, Indonesia, Malaysia, and the European Union, the company faces ongoing challenges in balancing AI innovation with user safety and compliance with global legal standards.
The controversy underscores the urgent need for ethical AI governance and highlights the potential risks when generative technologies are misused, particularly in producing sexualised content without consent.