X Acts Against Obscene AI Content Following Government Notice

Microblogging platform X has taken decisive action against obscene and sexually explicit content generated by its AI chatbot Grok, removing roughly 3,500 pieces of content and deleting over 600 accounts, officials aware of the matter said on Sunday. The move comes after the company acknowledged lapses in handling objectionable material and committed to complying with Indian law.

The development follows a notice issued by the Ministry of Electronics and Information Technology (MeitY) on January 2, which raised concerns about the proliferation of obscene and sexually explicit content linked to Grok. The ministry warned that failure to act could strip X of legal protections under Indian law, particularly under Section 79 of the Information Technology Act, which provides safe harbour protection to intermediaries, contingent on strict due diligence.

Grok, developed by Elon Musk’s xAI and integrated into X, has faced global scrutiny for enabling users to create non-consensual sexualised deepfake images of real people, including minors. These images, often depicting nudity or sexually suggestive scenarios, spread widely across X, drawing criticism from regulators and rights groups in Europe, Asia, and beyond. Indonesia has suspended access to Grok, while the European Union and the United Kingdom have launched investigations.

According to officials, X accepted its shortcomings and committed to operate within India’s legal framework, stating that it would no longer permit obscene imagery. A Hindi communication shared by the officials said, “X has accepted its mistake. The company said it will operate as per India’s laws. Going forward, X will not allow obscene imagery.”

MeitY initially flagged serious failures in Grok’s content moderation, particularly in handling politically and religiously sensitive content. Prior discussions between the ministry and X’s compliance teams took place in late December, focusing on the chatbot’s responses to contentious issues. X requested an extension to respond, citing intervening public holidays. The ministry set a deadline of January 7 for X’s detailed compliance report.

Officials said X’s initial submission was insufficient, largely reiterating its existing user policies rather than providing specifics on action taken. Following this, X submitted a comprehensive report detailing the deletion of over 3,500 pieces of content and suspension or termination of over 600 accounts linked to objectionable AI outputs.

A key point in the government’s assessment is the classification of Grok as a content creator rather than a passive platform tool. This distinction has significant implications for intermediary liability under Indian law, as safe harbour protection under Section 79 applies only if platforms exercise due diligence in content moderation.

The MeitY notice explicitly cited multiple legal violations, including provisions of the IT Act, the Indecent Representation of Women (Prohibition) Act, 1986, the Protection of Children from Sexual Offences (POCSO) Act, 2012, and relevant sections of the Bharatiya Nyaya Sanhita (BNS). The ministry emphasized that Grok’s misuse included non-consensual manipulation of images of women, whether uploaded by third parties or the individuals themselves, highlighting the technology’s potential for harm.

The notice directed X to immediately review Grok’s prompt-processing, output-generation, and image-handling systems, ensuring the chatbot does not produce, promote, or facilitate content containing nudity, sexualisation, or other unlawful material. Additionally, X was instructed to enforce its user terms strictly, implementing strong deterrent measures including account suspension, termination, and removal of violating content without delay.

The issue has also drawn political attention. Shiv Sena (UBT) MP Priyanka Chaturvedi criticised X for monetising the platform’s harmful capabilities after it restricted Grok’s image-generation feature to paid users in response to global backlash.

This controversy underscores broader challenges facing AI image-generation technologies. The Internet Watch Foundation reported a 400% increase in AI-generated child sexual abuse material in the first half of 2025, highlighting the rapid proliferation of harmful content. Grok has been positioned as more permissive than other mainstream AI models, with features such as “Spicy Mode” permitting partial nudity and sexually suggestive imagery. The platform prohibits pornography involving real people’s likenesses and sexual content involving minors, which remain illegal to produce or distribute.

MeitY officials have stated they will continue to monitor X’s compliance closely, warning that any recurrence of violations could trigger further legal action. The ministry’s approach illustrates India’s growing focus on regulating AI platforms and holding them accountable for content moderation, particularly when technologies can generate realistic, non-consensual imagery with potential for harm.

The case also highlights the evolving responsibilities of AI platforms. While AI has the potential to create innovative content and services, unregulated use can lead to widespread ethical and legal risks. Grok’s situation demonstrates the need for stringent safeguards, robust oversight, and clear accountability frameworks to prevent AI-generated content from causing societal harm.

X’s action to remove thousands of objectionable posts and disable accounts marks a significant step in India’s regulatory oversight of AI-driven platforms. It also signals the importance of rapid response and collaboration between technology companies and regulators to ensure user safety while maintaining the benefits of emerging AI technologies.

As AI continues to integrate into mainstream platforms, the Grok incident is likely to serve as a reference point for policymakers, regulators, and technology firms worldwide in designing safer, legally compliant AI systems. MeitY’s insistence on treating AI outputs as potentially actionable content emphasizes that platforms cannot evade liability and must take proactive measures to prevent misuse.

In the coming weeks, both X and other AI developers operating in India will likely face heightened scrutiny, with compliance audits, detailed reporting requirements, and the need to demonstrate effective content moderation measures. The Indian government’s approach signals its intent to enforce accountability in the rapidly evolving AI landscape, aiming to balance innovation with protection of citizens, particularly minors and vulnerable groups, from harm.

This case exemplifies the challenges global AI platforms face in adapting to diverse legal frameworks and societal expectations while delivering user-friendly services. X’s compliance with MeitY’s directives could set a precedent for other companies operating in India, illustrating the necessity of combining technological safeguards with legal accountability to mitigate the risks associated with AI-generated content.

In conclusion, the removal of thousands of objectionable posts and accounts by X marks a critical step in aligning AI platform operations with Indian law, emphasizing the shared responsibility of governments and tech companies in preventing the misuse of emerging technologies. The Grok episode highlights the importance of proactive regulation, rapid enforcement, and continuous monitoring to protect users and uphold legal and ethical standards in the digital era.

Leave a Reply

Your email address will not be published. Required fields are marked *