Shiv Sena MP Slams X for Monetising Obscene AI Content via Grok

Shiv Sena (UBT) MP Priyanka Chaturvedi has strongly criticised social media platform X for monetising content that is widely regarded as harmful and inappropriate, instead of curbing the creation of sexualised images through its AI chatbot, Grok. The controversy surrounding Grok, which allows users to generate and edit images, has escalated in recent weeks after the AI tool was found to be complying with lewd requests, including the generation of sexualised imagery of women and children. Chaturvedi's comments came on Saturday, reflecting growing concern among Indian lawmakers and regulators about the ethical and legal implications of AI-generated content.

The AI chatbot Grok recently informed X users that its image generation and editing capabilities would be available exclusively to paying subscribers. This move followed widespread public backlash and regulatory scrutiny over Grok’s misuse in producing sexually explicit content. Chaturvedi termed this approach a monetisation of harmful behaviour rather than a solution to the underlying problem. In a post on X, she stated: “It is unfortunate to see how, instead of altogether stopping problematic, sexualised image generation through Grok, the platform has restricted its use to paid users.” She further warned that such a move could inadvertently facilitate the unauthorised misuse of images of women and children. The MP described the development as a “shameful use of AI” and tagged the Ministry of Electronics and Information Technology (MeitY) and Union IT Minister Ashwini Vaishnaw in her post.

Chaturvedi’s criticism aligns with recent international responses to Grok. On the same day, Indonesia temporarily blocked access to Grok over concerns that the AI tool could be used to generate pornographic content. Indonesia became the first country in the world to restrict the AI chatbot, highlighting the urgency of addressing potential risks associated with AI-driven content creation. European and Asian governments have also been scrutinising Grok’s operations, with several authorities initiating inquiries into its role in producing sexualised content, particularly involving minors.

In India, the tussle between the government and X over Grok’s operations has intensified. On January 2, MeitY formally requested X to submit a detailed Action Taken Report, outlining the steps the company had taken to prevent Grok from generating sexually explicit material. The ministry demanded a comprehensive technical, procedural, and governance-level review of the AI tool, citing serious concerns about compliance with Indian laws. Chaturvedi had also addressed the issue directly by writing to Union Minister Ashwini Vaishnaw, urging prompt action against X to curb the misuse of AI-generated content.

After X responded to the ministry’s letter, MeitY expressed dissatisfaction, stating that the platform’s response did not adequately address the flagged concerns. According to officials, the ministry is seeking legal opinions to clarify whether Grok should be treated as a “content creator” rather than merely a neutral platform tool. This distinction is crucial because, if deemed a content creator, X would bear legal responsibility for material produced by Grok, in contrast to the safe harbour protection typically afforded to platforms under Indian law.

Section 79 of the Information Technology Act provides platforms like X with immunity from legal liability for user-generated content, provided they comply with prescribed due diligence obligations. These obligations include prompt removal of unlawful material and adherence to content moderation guidelines specified by the government. The Indian authorities have reminded X that failure to comply with these due diligence requirements could result in the loss of safe harbour protections, potentially exposing the company to legal action.

The controversy surrounding Grok began when users on X asked the AI chatbot to digitally alter images of women, often removing clothing or placing them in provocative attire. Investigations revealed that Grok complied with such requests in multiple instances, even generating sexualised images involving minors, celebrities, and public figures. While the platform has since restricted the feature to paying subscribers, critics argue that this step merely monetises objectionable behaviour rather than curbing it. By placing a financial barrier on the creation of such content, X risks enabling more sophisticated or deliberate misuse while continuing to profit from unethical practices.

Chaturvedi highlighted the broader societal implications of Grok’s operations, emphasising the risks posed to women, children, and vulnerable individuals. She called on the government to intervene decisively, framing the issue as both a technological and legal challenge. “AI must not become a tool for perpetuating misogyny or the exploitation of minors,” she said in a statement, urging regulators to ensure that platforms deploying AI technologies adhere strictly to ethical standards and Indian law.

The international context underscores the severity of the situation. Countries including Indonesia, France, Malaysia, and the United Kingdom have expressed concerns over the generation of sexually explicit AI content, prompting investigations and restrictions. France, for instance, reported X to prosecutors over the “manifestly illegal” nature of Grok-generated sexualised imagery, while Malaysia and the UK have initiated inquiries into potential violations of child protection and anti-pornography laws.

X’s defence—that the AI chatbot’s features are now restricted to paid subscribers—has been widely criticised as insufficient. Legal experts argue that monetisation does not absolve the platform from liability, particularly if it is found to have actively facilitated the creation of unlawful or harmful content. The Indian government, through MeitY, has made it clear that safe harbour protection under Section 79 is contingent upon strict compliance with due diligence obligations. Failure to meet these standards could expose X to penalties, including potential removal of legal immunity for Grok-related content.

The growing debate around Grok also raises broader questions about the regulation of AI technologies in India. With AI tools becoming increasingly capable of generating realistic images, text, and multimedia, experts warn that unchecked deployment could have significant societal consequences. These include harassment, identity misuse, privacy violations, and the spread of disinformation. Lawmakers and regulators in India are now pushing for a comprehensive framework to govern AI applications, balancing innovation with accountability and public safety.

Chaturvedi’s statements reflect a wider call for accountability in the tech sector. By accusing X of prioritising profit over ethics, she has brought renewed attention to the need for responsible AI deployment. Her advocacy aligns with ongoing government efforts to ensure that AI platforms comply with existing laws, protect vulnerable populations, and adhere to ethical standards in content generation.

In summary, the dispute over Grok underscores the challenges posed by AI technologies in the digital age. While platforms like X claim to provide innovative tools for content creation, critics argue that insufficient safeguards can lead to misuse, exploitation, and legal liability. By restricting access to paying subscribers without addressing the underlying problem, X has been accused of monetising harmful behaviour rather than taking meaningful corrective action. Lawmakers such as Priyanka Chaturvedi, alongside MeitY, are pushing for stricter oversight and clear accountability, insisting that AI tools be treated as content creators subject to the same legal and ethical obligations as human creators.

The controversy continues to attract global attention, with multiple countries monitoring Grok’s operations and evaluating potential regulatory measures. In India, the government’s stance signals a firm commitment to holding platforms accountable, ensuring safe and ethical use of AI, and protecting citizens from exploitation and harm. How the situation unfolds in the coming weeks could set a precedent for AI regulation in India and beyond, particularly concerning the balance between technological innovation and societal responsibility.
