In a recent submission to the government, the industry body Internet and Mobile Association of India (IAMAI) has expressed serious concerns over the proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”) issued by the Ministry of Electronics & Information Technology (MeitY). The amendments aim to regulate deepfakes and other AI‑generated content, formally designated as “synthetically generated information” (SGI). However, IAMAI warns that the draft framework is fundamentally unworkable and risks causing widespread disruption across India’s digital economy.
Background: The Proposed Amendments
In October 2025, MeitY released draft amendments to the IT Rules which seek to bring synthetic media under regulatory oversight. The key elements of these draft rules include:
- Defining “synthetically generated information” (SGI) as information that is artificially or algorithmically created, generated, modified or manipulated such that it appears authentic.
- Requiring intermediaries (platforms, social media services) to label such content with visible disclosures or embed metadata to indicate that the content is synthetic.
- Imposing obligations on service providers to “make reasonable efforts” to inform users that the content has been synthetically generated, and to obtain user declarations when uploading or sharing such content.
- Introducing quantifiable standards for labelling—for example, covering at least 10% of the surface area of a visual display, or indicating in the first 10% of audio duration.
- Extending the due‑diligence regime for large social media intermediaries to ensure compliance, traceability, metadata management and transparency of synthetic content moderation.
The objective, as articulated by the government, is to ensure that “users are able to easily distinguish between content that is AI‑generated and content that is not”.
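To make the quantitative thresholds above concrete, the sketch below interprets them numerically. This is a hypothetical reading for illustration only; the draft rules do not prescribe any particular computation, and the function names and constants here are assumptions.

```python
# Hypothetical illustration of the draft's quantitative labelling thresholds.
# The rules do not prescribe a computation; this simply interprets
# "at least 10% of the surface area" and "the first 10% of audio duration"
# as numeric requirements.

LABEL_AREA_FRACTION = 0.10   # visible label must cover >= 10% of the display
AUDIO_LEAD_FRACTION = 0.10   # disclosure must fall in the first 10% of playback

def min_label_area(width_px: int, height_px: int) -> float:
    """Minimum label area (in pixels) for a visual of the given size."""
    return LABEL_AREA_FRACTION * width_px * height_px

def audio_disclosure_window(duration_s: float) -> float:
    """Length of the opening window (in seconds) in which an audio
    disclosure would have to appear."""
    return AUDIO_LEAD_FRACTION * duration_s

# Example: a 1920x1080 video frame and a 60-second audio clip.
print(min_label_area(1920, 1080))        # 207360.0 pixels
print(audio_disclosure_window(60.0))     # 6.0 seconds
```

Even this toy reading hints at the ambiguity IAMAI raises: for video, for instance, it is unclear whether the 10% applies per frame, per scene, or to the player window.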
IAMAI’s Objections: “Unimplementable” and Risky
While IAMAI concurs with the aim of curbing harmful deepfakes, the body argues that the draft rules are fraught with practical and legal difficulties. Their submission highlights several key areas of concern:
1. Over‑broad definition of SGI
IAMAI warns that the definition of SGI is so sweeping that it could capture “ordinary edits like AI‑assisted grammar correction, image enhancement, sound mixing” — activities which bear little risk of deception or harm. The association argues that by focusing on how content is created rather than on its intent or impact, the rules risk hampering legitimate creativity and media workflows.
2. Impracticable compliance obligations
According to IAMAI, requiring platforms to verify declarations from every user, label all uploads, trace metadata, and monitor downstream sharing would be technically and operationally unfeasible, especially for smaller intermediaries. The compliance burden and cost implications could be significant.
3. Chilling effect on innovation and expression
The body cautions that the draft rules could inadvertently stifle innovation—particularly in generative AI, creative editing, user‑generated content and media production. The fear is of a “one‑size‑fits‑all” regime that fails to distinguish between benign synthetic content and content intended to mislead.
4. Fragmentation of regulatory frameworks
IAMAI notes that the proposed regime risks fragmenting India’s content regulation landscape by imposing additional obligations without clear alignment with existing frameworks — such as defamation law, privacy law, and intermediary guidelines.
5. Risk of over‑regulation without harm‑based approach
The association argues for a more nuanced, risk‑based approach — one that triggers obligations only where synthetic content is “reasonably likely to cause material harm” rather than prohibiting or regulating all SGI by default.
Implications for the Digital Economy
There are concrete ramifications if the rules are implemented as currently drafted:
- Platform burden: Large social media and content platforms may need to embed detection, labelling and verification systems at scale, adding costs and affecting timelines of content delivery.
- Innovation risk: AI‑startups and creative media companies may face legal uncertainty or compliance overheads just to deploy standard AI‑editing tools or generative workflows.
- User‑generated‑content impact: Ordinary users, content creators, and influencers may find their uploads subject to extra declarations, labels, or metadata tracking — potentially chilling participation.
- Cross‑border complexity: Given the global nature of content and AI tools, platforms may struggle to trace origin, apply metadata or comply with unified labelling rules for content generated outside India.
- Legal uncertainty: Without clear delineation between what constitutes “harmless synthetic content” and “harmful manipulated content”, there is risk of uneven enforcement, legal disputes, and litigation burden.
IAMAI warns that if left unaddressed, such disruptions could ripple across “various segments of India’s digital economy” — from creator‑economy to media houses, from AI‑tool developers to user‑platform interactions.
What IAMAI Recommends
In its submission, IAMAI outlines several recommendations to make the draft rules workable and proportionate:
- Adopt a harm‑based, risk‑tiered framework: Rather than a broad label‑everything regime, obligations should apply where synthetic content is reasonably likely to involve deception, impersonation, election interference or significant reputational harm.
- Narrow the definition of SGI: Limit SGI to content that is substantially altered by AI and likely to mislead, excluding routine edits, assistive tools, and creative transformations that pose no harm.
- Flexible labelling approaches: Allow machine‑readable metadata and backend provenance disclosures rather than strictly visible labels covering fixed percentages of the visual or audio display. This would align better with global standards and technical feasibility.
- Clear role demarcation: Upstream AI‑tool developers or service providers should not bear the full labelling obligations of downstream platforms — obligations should be proportionate to each party’s role in the content chain.
- Alignment with existing frameworks: Avoid regulatory overlap or fragmentation; integrate with intermediary rules, privacy and defamation laws; ensure clarity of enforcement and jurisdiction.
- Phased implementation and exemptions: Allow transition period, exemptions for certain categories of benign synthetic content (e.g., assistive editing, creative transformations), and guardrails for small platforms or startups.
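The “flexible labelling” recommendation above points toward machine‑readable provenance records that travel with a file instead of visible on‑screen markers. A minimal sketch of what such a record might look like is below; the field names are illustrative assumptions only and do not come from the draft rules or any published standard.

```python
import json

# Hypothetical sketch of a machine-readable provenance record of the kind
# IAMAI's flexible-labelling recommendation points toward. All field names
# here are illustrative assumptions, not part of any standard.
def make_provenance_record(content_id: str, tool: str, edit_type: str) -> str:
    record = {
        "content_id": content_id,   # platform-assigned identifier
        "synthetic": True,          # flags the content as AI-assisted
        "generator": tool,          # tool or model used
        "edit_type": edit_type,     # e.g. "generated", "enhanced"
    }
    return json.dumps(record)

# Such a record could be embedded as file metadata rather than rendered
# as a label covering a fixed fraction of the display.
print(make_provenance_record("vid-001", "example-gen-model", "generated"))
```

Industry provenance efforts (such as C2PA-style content credentials) take a broadly similar embedded-manifest approach, which is part of why IAMAI argues metadata disclosure aligns better with global practice than fixed visible labels.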
Broader Context: Why the Rules Matter
The urgency for such regulation stems from a rapidly evolving generative‑AI landscape. Deepfakes — manipulated videos, images or audio that convincingly impersonate real individuals — pose serious risks: misinformation, political manipulation, fraud, defamation and threats to personal dignity.
India, with nearly a billion internet users, is considered particularly vulnerable to harmful content given its diverse and sensitive landscape of religion, language and politics. The proposed rules would put India among the first countries to adopt quantifiable labelling requirements (e.g., a 10% visual marker) for synthetic content.
Yet, navigating deepfake risk requires striking a balance: protecting citizens and democratic integrity while preserving freedom of expression, creative innovation and platform dynamism. As one legal analysis puts it, “excessively broadening legal controls can have a chilling effect on legitimate expression”.
The Road Ahead
With deadlines extended for stakeholder responses, MeitY is currently synthesising feedback — including submissions from IAMAI, other technology bodies, platforms, civil‑society organisations and media houses.
Key issues that remain to be resolved include:
- Finalising the definition of SGI and whether it will explicitly exclude benign synthetic content.
- Determining the technical and operational feasibility of labelling mandates (visible vs metadata).
- Deciding the scope and timing of obligations — for all platforms vs a subset, for all content vs targeted content.
- Establishing enforcement mechanisms, penalties, and safe‑harbour protections for intermediaries.
- Aligning India’s framework with international norms and ensuring cross‑border content flows are manageable.
If the concerns raised by IAMAI and other stakeholders are not adequately addressed, there is a risk that the final regulatory regime may inadvertently harm the digital economy, hamper innovation and burden platforms and creators — even as its stated aim is to reduce the harms of deepfakes.
Conversely, if a balanced, risk‑based, and technically feasible framework is adopted, India could set a global benchmark for regulating synthetic media while still nurturing a vibrant digital‑content ecosystem.
In sum, while the proposed amendments to the IT Rules reflect an urgent and necessary response to the rise of synthetic content and deepfakes, the industry body’s critique is that in their current form the rules are unimplementable and risk causing “large‑scale disruptions” across India’s fast‑growing digital economy. The regulatory challenge now lies in recalibrating the approach—sharpening the focus on harmful content, tailoring obligations to roles and risk‑levels, and safeguarding innovation and expression, while still protecting users and society from the dangers of deception in the generative‑AI era.

