10 Nov 2025, 06:50 AM
Chief Justice of India BR Gavai on Monday remarked that he was aware of a morphed video circulating on social media that falsely depicted the shoe-throwing attempt in his courtroom.
The bench of CJI BR Gavai and Justice K Vinod Chandran was hearing a writ petition seeking directions for framing guidelines or a policy to regulate the use of Artificial Intelligence (AI) in the Indian judiciary.
During the hearing, the counsel appearing for the petitioner submitted that AI tools were increasingly being used in court processes, though they came with potential risks and drawbacks.
“Even this Court is using AI, but the ills are such—” the counsel began, when the CJI interjected, saying, “We are aware of it, we have seen the morphed video of us (two)”—referring to the fabricated video circulating online.
The Court has now posted the matter for further hearing after two weeks.
Why Are Regulations On The Use Of AI Necessary, As Per The Petitioner?
The plea explains that the use of Generative AI (GenAI) is problematic because it entails a complex process of datafication, which may generate words, images, etc., not intended by the user.
Generative AI (hereinafter referred to as "GenAI") employs Machine Learning (ML) and is a subset of AI. Through ML, AI can learn automatically by identifying patterns and making deductions rather than receiving direct instructions from an operator. ML works by having a computer learn from data and experience to find patterns and make predictions, without being explicitly programmed for every task. Since the machine cannot "learn" on its own, the process of datafication takes place, which proliferates digital tools for integrating, analysing, and displaying data patterns. This process of "datafication", which is central to the quality, ownership, and management of data, often embeds "systemic biases" into the algorithm itself, making the outcomes biased.
The petition further explains that while judicial data is required to be unbiased and under the ownership of its authors (judges), GenAI entails a red flag called 'data opaqueness'. The plea states:
It is submitted that AI integrated into the Judiciary and Judicial functions should have data that is free from bias, and data ownership should be transparent enough to ensure stakeholders' liability. It is submitted that one of the biggest red flags of such integration is Data Opaqueness.
In tech parlance, the term "black box" is used to denote a technological system that is inherently opaque, whose inner workings or underlying logic are not properly comprehended, or whose outputs and effects cannot be explained. This can make it extremely difficult to detect flawed outputs, particularly in GenAI systems that discover patterns in the underlying data in an unsupervised manner. The opacity of such algorithms, often described as "black boxes," means that even their creators may not fully understand the internal logic, thereby creating a risk of arbitrariness and discrimination that may be neither controlled by nor even known to the creator.
The plea stresses that the use of GenAI in judicial work may heighten the risk of cyberattacks.
As per the petitioner, these black box processes may lead the GenAI to 'hallucinate' in creating data, which in turn produces fake case law and AI-modified court observations that may not be accurate. Thus, hearings and decision-making would be altered arbitrarily, violating Article 14.
It would also violate citizens' right to know, which forms part of the right to freedom of speech and expression under Article 19(1)(a).
The plea is filed with the assistance of AOR Abhinav Shrivastava.
Case Details: RAWAL v. UNION OF INDIA | W.P.(C) No. 001041/2025