Elon Musk’s Grok Restricts Image Generation Feature Amid Global Backlash Over Sexualised AI Imagery

New Delhi / London / Washington, January 9, 2026: In response to mounting criticism over the misuse of its AI image-generation tool, Grok, the Elon Musk-owned social media platform X has restricted the tool's image creation and editing features to paid subscribers. The decision comes after users exploited Grok to generate sexualised images of women and children, sparking outrage from governments, regulatory authorities, and civil society organizations worldwide.

Previously, the Grok tool on X allowed users to create and edit images for free, albeit with daily usage limits. Users could leverage the AI to generate diverse imagery, edit existing photos, and experiment with creative prompts without a subscription. The standalone Grok application, which operates independently of X, continues to provide free access to image-generation tools, allowing users to create AI-generated images without a paid plan, according to a report by Bloomberg.

Widespread Misuse and the Resulting Backlash

The misuse of Grok has drawn significant global attention. Reports indicate that users were modifying photos of people, primarily women, to produce sexualised imagery, including images of women in bikinis. Alarmingly, some individuals pushed the boundaries further, creating sexualised content depicting children, raising serious legal and ethical concerns. Sources suggest that thousands of such images were being generated per hour, often without the knowledge or consent of the individuals depicted.

The Internet Watch Foundation (IWF), a UK-based organization tasked with tracking and preventing child sexual abuse material online, reported discovering criminal content on the dark web that was allegedly produced using Grok. The findings prompted urgent calls for regulatory action, highlighting the potential of AI tools to be weaponized for producing illegal content.

Global Regulatory and Government Reactions

Governments and regulatory bodies reacted strongly to the misuse of Grok. In the United Kingdom, Prime Minister Keir Starmer condemned the content as “disgraceful” and reaffirmed the government’s full support for Ofcom, the UK’s media regulator, to take action against X and hold the company accountable. The UK’s intervention reflects a broader trend of tightening oversight of AI platforms that generate or disseminate sexualised or otherwise illegal content.

Similarly, the European Union has directed X to preserve all internal documentation related to Grok until the end of the year. This measure aims to ensure accountability and facilitate investigations into the misuse of AI tools. Earlier, the EU had condemned sexually explicit AI-generated images of children as illegal and a violation of multiple international treaties concerning child protection.

Elon Musk’s Response

Elon Musk responded publicly to the controversy through his social media platform, emphasizing that illegal use of Grok would have consequences. “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” Musk stated on X last week. The message underscores X’s intent to treat AI-generated illegal content with the same legal scrutiny as traditional forms of online abuse or exploitation.

While Musk's response was swift, critics argue that enforcement mechanisms for AI-generated content are still evolving and that stricter moderation policies are necessary. The incident has also reignited debates over the responsibilities of social media platforms and AI developers in preventing the creation and dissemination of harmful content.

India’s Intervention

India has taken a proactive stance against Grok’s misuse. Earlier in January 2026, the Ministry of Electronics and Information Technology (MeitY) issued a directive to X, instructing the platform to immediately remove and disable all obscene, nude, indecent, and sexually explicit content generated through AI tools like Grok. The directive cited the Information Technology Act, 2000, and the IT Rules, 2021, emphasizing that social media intermediaries are legally obligated to prevent the circulation of unlawful content.

The government’s order mandated that X take action against both the content and the users responsible for generating it. Additionally, X was required to submit a detailed Action Taken Report (ATR) within 72 hours, outlining the steps it had implemented to comply with the directive. Officials indicated that failure to comply could result in the loss of intermediary liability protection, leaving X legally accountable for any future violations.

A senior government official noted that the misuse of Grok is particularly concerning because the tool functions as a content creator rather than a neutral platform, generating images autonomously based on user prompts. “Earlier, platforms argued that AI-generated content was neutral, but we have clarified that AI tools that create unlawful content are accountable just like human creators,” the official stated.

Ethical Concerns and the Debate Over AI Content

The Grok incident has raised broader questions about the ethics and governance of AI-generated content. AI tools capable of generating realistic images are increasingly accessible to the public, raising the risk of misuse for sexual exploitation, harassment, or political manipulation. Experts argue that AI developers must implement robust safeguards, content moderation protocols, and reporting mechanisms to prevent abuse.

Civil society groups have also highlighted the psychological and social impact on individuals whose images are manipulated without consent. The creation and distribution of sexualised imagery of minors or adults without permission is not only illegal but also deeply damaging, underscoring the urgent need for stricter AI governance frameworks.

Paid Access as a Control Measure

The decision to restrict Grok’s image-generation features to paid subscribers on X is viewed as an interim measure to curb abuse. By limiting access, the company aims to reduce the volume of content being generated, create accountability through user identification, and provide a financial barrier that discourages mass misuse. While this step may reduce casual exploitation, experts suggest that technical safeguards, monitoring, and AI filters are equally crucial for long-term prevention.

Despite the restrictions on X, the standalone Grok application remains accessible for free, raising concerns that the misuse could continue outside the platform. Regulatory authorities in the UK, EU, and India are reportedly monitoring all versions of the tool, emphasizing that AI content moderation must extend beyond a single platform to be effective.

The Path Forward

The controversy surrounding Grok illustrates the growing challenges of AI governance in the digital age. Platforms must balance innovation and user access with the responsibility to prevent illegal or harmful content. Experts recommend measures such as automated detection of inappropriate prompts, human oversight, reporting mechanisms, and cross-border regulatory cooperation to address misuse effectively.

Governments, including India's, have made it clear that AI platforms cannot operate in a legal vacuum. The Ministry of Electronics and Information Technology has indicated that Grok and similar AI tools will be held accountable under existing laws, and that failure to comply could result in strict legal consequences for both the platform and its officers.

Meanwhile, global attention continues to focus on Musk’s X and Grok, with the UK, EU, and India watching closely to ensure that AI-generated sexual content is swiftly identified and removed. Analysts suggest that this incident may set a precedent for AI governance worldwide, prompting both regulators and tech companies to rethink how AI content creators are regulated and monitored.

Conclusion

The restriction of Grok’s image-generation tools to paid subscribers represents a significant step by Elon Musk’s X to address a growing global concern over sexualised AI content. The move follows international pressure from the UK, EU, and India, as well as criticism from civil society organizations.

While the restriction may reduce misuse, it also highlights the need for comprehensive AI governance, legal accountability, and proactive monitoring to prevent further harm. As AI technologies become more advanced and accessible, the Grok controversy underscores the urgent need for global standards, regulatory oversight, and ethical safeguards to ensure that AI tools serve society responsibly rather than being exploited for harmful purposes.
