A government-appointed committee has released detailed AI Governance Guidelines to steer India’s approach to artificial intelligence, recommending measures for accountability, transparency, and risk management while proposing a phased roadmap for implementation.
The report, part of the India AI Governance initiative, provides practical recommendations for both industry and regulators, including voluntary compliance with privacy, fairness, safety, and transparency principles; maintenance of audit trails; human oversight; and clear grievance redressal mechanisms.
Preparing for Advanced AI
Principal Scientific Adviser Ajay Kumar Sood highlighted the urgency of preparing for next-generation AI systems, including artificial general intelligence (AGI), which he said could arrive in the next two years. “We need to prepare for AGI carefully… simply scaling GPUs is not the answer,” he said, emphasizing measured, safe deployment.
Accountability Across the AI Value Chain
The committee stressed that accountability should be distributed based on function, risk, and due diligence, with organizations updating internal policies to define responsibilities at each stage of AI development and deployment. It recommends transparency reports, audit trails, and accessible grievance channels, and suggests complementing voluntary compliance with market incentives, third-party audits, and external certification to create enforceable layers of accountability.
Human Oversight in High-Risk Systems
For high-risk AI systems, the guidelines call for “human-in-the-loop” mechanisms, allowing outputs to be reviewed or overridden. Where direct human oversight is impractical, safeguards such as automated checks, circuit breakers, and system-level constraints are recommended. Continuous monitoring, testing, and audit trails in critical sectors are advised to ensure AI operates safely.
Grievance Redressal Mechanisms
The report urges companies to establish clear, multilingual complaint systems that respond within fixed timelines and are accessible even to users with limited digital skills. Feedback from complaints should inform system improvements and reduce future risks. Regulators and the proposed AI Governance Group (AIGG) are encouraged to create standardized escalation procedures for consistent handling of grievances.
Phased Action Plan
In the short term, the report calls for setting up the AIGG, the Technology and Policy Expert Committee (TPEC), and the AI Safety Institute (AISI), alongside developing risk frameworks, voluntary commitments, and clearer liability norms. In the medium term, it recommends implementing common standards on safety and fairness, operationalizing a national AI incidents database, and piloting regulatory sandboxes for high-risk domains. The long-term vision includes integrating AI with India’s Digital Public Infrastructure and expanding international collaboration on AI safety and policy.
While non-binding, the guidelines are intended to inform both public and private actors. “These recommendations are advisory in nature. The Government of India may consider their implementation through appropriate policy measures, standards, or regulations,” the report notes.
Balaraman Ravindran, Professor at IIT Madras and committee chair, said, “We need to ensure AI developers understand their obligations under existing laws and that accountability is enforced across the ecosystem.”
The guidelines aim to balance innovation, safety, and public trust, laying the groundwork for a structured, accountable, and resilient AI ecosystem in India.

