CJI Bhushan R Gavai Flags AI Misuse, Says Courts Cannot Regulate Technology

New Delhi, November 10, 2025: Chief Justice of India (CJI) Bhushan R Gavai on Monday highlighted the growing misuse of Artificial Intelligence (AI) tools, including the circulation of morphed images targeting members of the judiciary. While acknowledging the concern, he emphasized that any regulatory action on Generative AI (GenAI) falls under the purview of the executive and not the courts.

“We have seen our morphed pictures too,” CJI Gavai remarked during a hearing on a public interest litigation (PIL) that sought the formulation of a legal or policy framework to govern the use of GenAI in judicial and quasi-judicial bodies. He added, “This is essentially a policy matter. It is for the executive to take a call.” The bench, which also comprised Justice K Vinod Chandran, expressed its reluctance to intervene, observing that governance of emerging technologies falls squarely within the domain of policymakers.

At the request of the counsel, the court adjourned the matter for two weeks.

The PIL and Its Concerns

The PIL was filed by advocate Kartikeya Rawal and argued with assistance from advocate-on-record Abhinav Shrivastava. It seeks directions to the Central government to enact legislation or frame a comprehensive policy to ensure the “regulated and uniform” use of GenAI within judicial systems.

The plea distinguishes GenAI from traditional AI by emphasizing its ability to autonomously generate text, data, and reasoning patterns. This capability, while innovative, can also lead to hallucinations, where the system produces non-existent legal principles or fabricated case citations. According to the petition, this poses a risk of introducing ambiguity and misinformation into judicial processes.

“The characteristic of GenAI being a black box and having opaqueness has the possibility of creating ambiguity in the legal system,” the PIL stated, warning that such outputs could result in fake case laws, biased interpretations, and arbitrary reasoning — all potentially violating Article 14, which guarantees the right to equality.

Transparency and Accountability Concerns

Judicial systems depend heavily on precedent, clear reasoning, and traceable decision-making. The petition highlighted that the opaque nature of GenAI models makes it difficult to understand how conclusions are generated, even by developers, which in turn complicates oversight.

The plea also flagged risks associated with biased AI outputs, noting that models trained on real-world data might replicate or amplify existing social prejudices, particularly against marginalized communities. The absence of standardized protocols for data neutrality and ownership could undermine citizens’ right to information under Article 19(1)(a), the petition warned.

Additionally, the PIL raised concerns over the heightened risk of cyberattacks targeting AI-driven judicial systems. If automated platforms were to integrate court processes or documents without proper safeguards, sensitive information could be exposed to malicious actors.

Broader Implications

CJI Gavai’s observations underline a critical tension in India’s adoption of AI technologies: the potential benefits of efficiency and automation versus the risks of opacity, bias, and misuse. By acknowledging the circulation of morphed images of judges, the court also highlighted the personal impact on the judiciary and the reputational threats posed by unregulated AI applications.

While the bench refrained from issuing directives to regulate AI itself, the hearing underscored the urgent need for the executive to develop clear legal frameworks, ethical guidelines, and monitoring mechanisms for AI in sensitive areas such as judicial and quasi-judicial decision-making.

The case brings into focus the emerging challenge of AI governance in public institutions, particularly in sectors where decisions have profound legal, social, and human rights consequences. As Generative AI continues to evolve, courts, policymakers, and technology developers will need to collaborate closely to balance innovation with transparency, fairness, and accountability.
