CJI Urges Executive to Lead Regulation of AI, Flags Misuse Concerns in Judiciary

The Chief Justice of India, Bhushan R Gavai, on Monday highlighted the growing risks posed by artificial intelligence (AI) technologies in the judicial domain, but clarified that the formulation of regulatory policies must be the prerogative of the executive, not the judiciary. His observations came during the hearing of a public interest litigation (PIL) seeking the establishment of a comprehensive legal and policy framework to govern the use of generative AI (GenAI) in courts and quasi-judicial bodies.

Addressing the matter before a bench also comprising Justice K Vinod Chandran, CJI Gavai acknowledged that the judiciary itself has not been immune to AI misuse. “We have seen our morphed pictures too,” he remarked, drawing attention to the circulation of AI-generated images that misrepresent judges and potentially compromise the dignity and credibility of judicial officers. However, he was careful to emphasize that the oversight and regulation of emerging technologies fall squarely within the ambit of policymaking by the executive. “This is essentially a policy matter. It is for the executive to take a call,” he added, signaling the court’s reluctance to step into the domain of technological governance.

The PIL, filed by advocate Kartikeya Rawal and argued with the assistance of advocate-on-record Abhinav Shrivastava, seeks directions from the Supreme Court to the Centre to enact a law or issue a comprehensive policy framework to ensure the “regulated and uniform” use of GenAI within judicial systems. The petition makes a clear distinction between traditional AI systems and GenAI models, emphasizing the latter’s autonomous ability to generate text, reasoning, and data patterns. According to the plea, this unique capability presents significant challenges for the administration of justice, as it introduces the risk of hallucinations—instances where AI systems produce non-existent legal principles, fabricated case citations, or erroneous interpretations.

“The characteristic of GenAI being a black box and having opaqueness has the possibility of creating ambiguity in the legal system,” the petition stated. It further cautioned that outputs generated by such systems could mislead legal professionals, introduce biased interpretations, and potentially result in arbitrary judicial reasoning. Given that Indian judicial processes are heavily reliant on precedent, traceable reasoning, and transparency, the opacity inherent in GenAI models poses a substantial threat to the integrity and consistency of judicial decision-making.

The petition also raised concerns about the potential for GenAI to replicate or even amplify existing social biases, particularly against marginalised communities. Because these models are trained on real-world data, they can inadvertently encode prejudices present in the underlying datasets. Without robust standards for data neutrality, ownership, and accountability, the plea argued, AI-assisted judicial processes risk violating fundamental rights, including the right to equality under Article 14 and the citizens’ right to information under Article 19(1)(a).

Another key issue highlighted in the PIL is the susceptibility of AI systems to cyberattacks. Integration of judicial processes, court records, and legal documents into AI-driven platforms could expose sensitive data to breaches, manipulation, or unauthorized access, potentially undermining the credibility of judicial proceedings and public trust in the legal system.

The Supreme Court bench, while noting the seriousness of these concerns, made it clear that judicial intervention should be limited to advisory oversight rather than direct regulation. It indicated that questions relating to the governance of emerging technologies fall within the remit of the legislative and executive branches, which are better positioned to draft and implement policies, ensure compliance, and enforce accountability. The bench adjourned the matter for two weeks to allow parties to submit additional details and arguments.

Legal and technology experts have noted the significance of the court’s stance. By treating AI regulation as a policy question for the executive, the Supreme Court acknowledged both the technical complexity of the field and the importance of preserving judicial independence. Regulators in India now face the challenge of balancing innovation with safeguards that prevent misuse, while ensuring that AI tools enhance, rather than compromise, judicial efficiency and transparency.

Generative AI, unlike conventional AI models, can autonomously produce legal reasoning, draft judgments, and simulate argumentation. While this holds the potential to reduce workloads and expedite case processing, experts warn that unchecked deployment may introduce errors, bias, and ethical dilemmas. The Supreme Court’s approach reflects a recognition that courts must remain arbiters of justice without also taking on the role of technical regulator, a dual mandate that could undermine their core responsibilities.

The PIL also reflects a broader concern emerging globally: as governments and judicial institutions increasingly adopt AI tools, there is an urgent need to frame policies that establish uniform standards, accountability mechanisms, and ethical guidelines. Countries that lack such frameworks risk not only technological misuse but also erosion of public trust in democratic institutions.

CJI Gavai’s remarks come at a time when AI-generated misinformation and deepfakes targeting public figures have become a global phenomenon. Courts, politicians, and civil society actors have all been targeted in ways that blur the line between legitimate discourse and malicious manipulation. By raising the issue in open court, the Supreme Court has highlighted the need for executive-led initiatives to regulate AI, including generative models, in a manner that protects both institutional integrity and public interest.

The bench’s decision to adjourn the case while deferring to the executive underscores a key principle: the judiciary can identify risks and urge caution, but the technical and policy solutions for regulating emerging technologies are best crafted by policymakers with domain-specific expertise. Legal analysts read this as a cooperative approach, with the judiciary flagging vulnerabilities while encouraging the executive to frame laws and policies proactively, before AI is widely integrated into the judicial system.

The case is expected to catalyze discussion on several fronts, including data governance, AI transparency, bias mitigation, cybersecurity, and ethical guidelines for AI in public institutions. By drawing attention to both the misuse and potential of GenAI, the Supreme Court is emphasizing that technological innovation must be accompanied by clear rules, accountability, and public trust.

In conclusion, the Supreme Court’s handling of the PIL underscores the judiciary’s awareness of the risks posed by AI while reinforcing the principle that regulating emerging technologies is an executive function. CJI Gavai’s acknowledgment of AI misuse within the judiciary, paired with his insistence that regulatory frameworks must be executive-driven, reflects a balanced approach: it protects judicial independence while underlining the pressing need for responsible AI governance. As the PIL progresses, it is likely to shape India’s policy discourse on GenAI and its integration into the legal system, setting important precedents for the governance, transparency, and ethical use of AI in critical public institutions.