New Delhi, January 9, 2026: The Ministry of Electronics and Information Technology (MeitY) is actively seeking legal guidance on potential actions against X over obscene and sexually explicit content being generated using its artificial intelligence tool, Grok, officials familiar with the matter said. This move comes after MeitY raised concerns about what it described as serious failures by the platform in preventing the dissemination of objectionable material, signaling a stricter regulatory approach to AI-generated content in India.
On January 2, 2026, MeitY formally wrote to X, flagging the proliferation of sexually explicit and derogatory content created using Grok and warning that failure to address these issues could lead to the loss of legal protection under Indian law. According to officials, the ministry’s letter emphasized that the platform cannot treat Grok as a neutral tool. Rather, it must be recognized as an active content creator, akin to a human author, with all attendant responsibilities under the law.
“The opposite side is very big and powerful, which is why their stand has to be unambiguous,” said a senior official familiar with the discussions. “The law of the land must prevail. The same approach will apply to other platforms if their AI bots generate unlawful content. In Grok’s case, the impact is accelerated because it operates on a platform like X.”
Grok’s Status as an AI Content Creator
Officials clarified that MeitY’s stance represents a shift in how AI-generated content is being regulated. Historically, platforms have treated AI tools as neutral intermediaries, shielding themselves from liability. In this instance, the ministry has explicitly rejected that notion.
“Earlier, they were operating from a higher position, assuming Grok could be treated as a neutral platform tool. But the mindset has changed. Grok cannot be treated as a platform. It is a content creator, an artificial content creator. Just as I am a human content creator, Grok is an artificial one,” said the official.
This legal positioning is significant because it could reshape the regulatory framework for AI-generated content in India, potentially holding platforms accountable for outputs created by their AI models, especially if those outputs are obscene, sexually explicit, or defamatory.
MeitY’s Communication With X
In the January 2 letter, addressed to X’s Chief Compliance Officer (CCO) for India, the ministry highlighted the misuse of Grok to create fake accounts that host, generate, publish, or share obscene content targeting women in a vulgar or derogatory manner. Officials said the abuse was not confined to fabricated profiles; genuine photos and videos of women were also being manipulated via AI prompts to produce indecent material.
The ministry explicitly demanded that X provide a comprehensive Action Taken Report, detailing the technical measures implemented for Grok, the role of the CCO in preventing misuse, steps taken against offending users and content, and systems in place to comply with mandatory crime-reporting requirements.
MeitY also stressed the need for a governance-level review of Grok, seeking assurance that the AI tool would not generate or propagate unlawful, sexualized, or derogatory content in the future. Officials indicated that the platform’s initial response, submitted on January 7, was insufficient, essentially reproducing its standard user policies without addressing the ministry’s core concerns.
One official described the reply as the company “essentially reproducing its own user policy across five pages and sending it to the ministry.” Consequently, MeitY has now asked X to categorically explain what actions it has taken in response to the specific incidents cited in the January 2 letter, with particular emphasis on actionable steps against the content and users responsible for violations.
Legal Implications Under Indian Law
The government’s letter underscores that failure to comply could result in X losing intermediary immunity under Section 79 of the Information Technology Act, exposing the platform and its officers to strict legal consequences. Section 79 provides safe harbor to intermediaries who exercise due diligence and follow regulatory guidelines, but MeitY has made clear that continued non-compliance by X could nullify this protection.
“The law of the land is clear. If an AI platform creates obscene or sexually explicit content and does not take corrective measures, it cannot claim immunity. Legal accountability must be applied,” said an official.
The ministry’s action is part of a broader regulatory effort to ensure that Significant Social Media Intermediaries (SSMIs), including AI-based services, take proactive steps to prevent the misuse of their platforms. In December 2025, MeitY had issued an advisory to intermediaries warning that certain social media content could violate decency and obscenity laws, noting a perceived increase in objectionable content online.
Context of AI and Obscenity Issues
According to people familiar with the matter, MeitY had previously held discussions with X’s compliance teams concerning Grok’s responses to political and religious queries, but the sexually explicit and degrading content cited in the January 2 letter had not been raised in routine meetings. This suggests that the ministry’s recent action represents a focused escalation to address emerging threats posed by AI-generated content rather than general content moderation.
Officials highlighted that the concern is not just theoretical. The misuse of Grok has tangible implications for women and vulnerable groups, with AI being used to generate deepfake imagery, synthetic videos, and other forms of harmful content. The ministry’s insistence on technical and governance-level interventions reflects an effort to prevent real-world harm from AI outputs, including harassment, reputational damage, and potential criminal conduct.
Next Steps for X
The government has asked X to submit a detailed report outlining the technical safeguards, internal review mechanisms, and user accountability processes in place for Grok. This includes:
- Steps taken to remove or block offending content
- Measures against users generating abusive content
- Systematic monitoring and reporting procedures
- Compliance with statutory reporting requirements to law enforcement
The ministry has emphasized that proactive compliance is critical, and that the platform must demonstrate ongoing and measurable action to prevent recurrence.
Broader Implications
Legal experts say MeitY’s move could set a precedent for regulating AI platforms in India, especially as the use of generative AI expands across social media, search engines, and messaging services. By treating AI as a content creator, the government is signaling that platforms can no longer rely solely on user agreements or automated disclaimers to avoid accountability.
“This is a pivotal moment in AI regulation. India is sending a clear message that platforms hosting AI tools must take responsibility for their outputs, especially if those outputs are harmful or illegal,” said a technology law analyst.
The development comes amid a global debate over AI governance, ethical use, and regulation. Countries worldwide are grappling with how to balance innovation with accountability, and India’s regulatory stance could influence broader international norms, particularly regarding obscene or illegal content generated by AI.
Conclusion
MeitY’s ongoing engagement with X over Grok reflects the government’s serious approach to AI governance and online safety. By demanding explicit action against obscene and sexually explicit content, insisting on technical and governance reviews, and warning of legal consequences, the ministry has signaled that AI tools cannot operate in a legal vacuum.
As AI continues to evolve, the case of Grok could serve as a benchmark for how regulators hold platforms accountable for artificial content. For X, the ministry’s demand is clear: either demonstrate effective compliance or face potential loss of intermediary protection and strict legal action.
With the January 2 letter forming the basis for the current review, all eyes are now on how X responds, and whether the government’s stance will set a broader precedent for AI moderation and accountability in India.