India’s Supreme Court pushes back on fake AI-generated rulings: what actually happened
The Supreme Court of India has confronted the irresponsible use of Artificial Intelligence in the judiciary after discovering that a lower-court judge cited decisions that simply did not exist. Those decisions had been fabricated entirely by an AI tool.
The case, which started as a property dispute in the city of Vijayawada in the state of Andhra Pradesh, ended up becoming a serious institutional problem. The country’s highest court classified the episode as a matter of public interest and made it clear that documents invented by AI have no place in a serious judicial proceeding, as they risk undermining trust in the entire system.
How AI entered the picture in a property dispute case
The starting point was a property dispute before a junior civil judge in Vijayawada. The judge needed to rule on an objection raised by the defendants to a survey report of the disputed area, prepared by a court-appointed expert.
To justify rejecting the defendants’ objection, the judge cited four previous rulings, supposedly from higher courts, that would support her reasoning. But there was a serious problem: those precedents did not exist in any official repository. They were, in practice, fake judgments generated by AI.
The citations had every appearance of real decisions: technical language, references to courts, mentions of legal principles, and the typical formatting of case law. But when checked against official databases, they were nowhere to be found. No valid case number, no recorded decision. It was a textbook case of AI hallucination, in which the system generates convincing content that is completely disconnected from reality.
The judge’s role and the mistake of blindly trusting the machine
When the case was questioned, the judge explained in a report to the state high court that she had used an Artificial Intelligence tool for the first time. She stated that she believed the citations were authentic and had no intention of deceiving anyone. According to her account, the error happened precisely because she trusted the tool too much and did not verify the decisions in official repositories.
The Andhra Pradesh High Court acknowledged that the cited decisions were nonexistent but understood the mistake had been made in good faith. In the state court’s view, the legal reasoning behind the ruling was correct even with the invalid citations. For that reason, it decided to uphold the original decision, maintaining that the mere use of incorrect references would not, by itself, be grounds to overturn the order, as long as the application of the law to the specific case was sound.
In practice, the state court took a more lenient approach: it admitted the risk of AI, criticized the careless use of the technology, but spared the judge from a harsher reprimand. It also took the opportunity to record a symbolic statement: it would be necessary to exercise human intelligence above artificial intelligence.
When the case reaches the Supreme Court: good faith is not enough
The defendants were not satisfied and took the matter to the Supreme Court of India. From that point on, the tone shifted quite clearly. Instead of viewing the problem as merely a technical detail, the highest court treated the issue as a threat to the integrity of the justice system.
The justices classified the episode as a matter of serious institutional concern. For the Supreme Court, the central problem was not just whether the final decision was correct on its merits, but rather the adjudication process that led to it. In other words, there is little point in having an apparently fair outcome if the path used to reach that outcome is propped up by fake documents.
The court made it clear that the use of fake judgments generated by AI could not be treated merely as a simple misinterpretation or technical oversight. For the justices, this borders on improper conduct, especially when it involves someone holding a role as sensitive as adjudicating disputes between people.
Supreme Court decision: order suspended and deeper investigation ahead
As an immediate response, the Supreme Court decided to stay the order from the lower court related to the property dispute. In practical terms, this means the decision based on nonexistent citations no longer has any effect until the case is reviewed more carefully.
Beyond that, the court issued notices to key authorities in the Indian justice system, including the Attorney General and the Solicitor General of the country, as well as the Bar Council of India. The goal is to involve the entire legal ecosystem in the discussion about the limits and responsibilities of using AI in judicial proceedings.
The message is straightforward: the Indian judiciary does not intend to normalize decisions based on content fabricated by algorithms. The court signaled that it will examine the case in detail and, if necessary, establish stricter protocols for the use of AI technologies by judges, lawyers, and court staff.
AI, hallucinations, and the risk to judicial decisions
Generative Artificial Intelligence tools, such as language models, are already part of the daily routine for many professionals. They speed up research, help organize information, summarize documents, and suggest arguments. But these same tools also have a well-known characteristic among those who follow the topic: the tendency to hallucinate, meaning they present something as fact when it never actually happened.
In law, this hallucination can take an especially dangerous form: invented precedents. The tool generates a text with perfect legal appearance, mentioning courts, dates, and legal principles, but everything was statistically predicted rather than retrieved from an official database. When the user fails to verify what they received and treats that text as real case law, the error stops being technical and becomes a systemic risk.
That is exactly what the Andhra Pradesh case exposed. This is not just any mistake — it is an indirect attack on the logic that holds the judiciary together: the trust that decisions are grounded in verifiable, traceable, and auditable sources.
Human responsibility above automation
The discussion raised by the Supreme Court of India connects to a central point in the global debate about AI: who is responsible for the error? The technology may be sophisticated, but the responsibility remains human. In the judicial context, this means that judges, lawyers, court staff, and even law students who use AI for research cannot treat the system’s output as automatic truth.
Some precautions that have always been part of legal routine carry even more weight in this scenario:
- Verify whether cited decisions appear in official case law databases;
- Check the case number, judgment date, and the court that issued the decision;
- Read the full text of the precedent, not just the excerpt selected by the tool;
- Confirm that the mentioned court actually has jurisdiction over the subject matter;
- Transparently disclose, when necessary, whether AI was used in drafting briefs or documents.
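The first two checks above lend themselves to automation. As a minimal sketch of the idea, the snippet below flags citations that cannot be matched against an official repository. Everything here is hypothetical: the `OFFICIAL_INDEX` is a mock in-memory stand-in, and the case numbers are invented for illustration; a real verifier would query the court's actual case-law database before any citation reached a judgment.

```python
# Sketch of an automated first-pass check on cited precedents.
# The "official repository" is simulated by a mock in-memory index;
# the entries and case numbers below are hypothetical examples.

from dataclasses import dataclass


@dataclass(frozen=True)
class Citation:
    case_number: str
    court: str
    year: int


# Mock stand-in for an official case-law repository:
# (case number, issuing court) -> year of judgment.
OFFICIAL_INDEX = {
    ("CA 1234/2010", "Supreme Court of India"): 2010,
    ("WP 567/2015", "Andhra Pradesh High Court"): 2015,
}


def verify(citation: Citation) -> bool:
    """True only if case number, court, and year all match a record;
    anything else is treated as unverified, never as 'probably real'."""
    recorded_year = OFFICIAL_INDEX.get((citation.case_number, citation.court))
    return recorded_year == citation.year


def flag_unverified(citations: list[Citation]) -> list[Citation]:
    """Collect every citation that could not be confirmed against the index."""
    return [c for c in citations if not verify(c)]


if __name__ == "__main__":
    cited = [
        Citation("CA 1234/2010", "Supreme Court of India", 2010),  # present in mock index
        Citation("CA 9999/2012", "Supreme Court of India", 2012),  # hallucinated example
    ]
    for c in flag_unverified(cited):
        print(f"UNVERIFIED: {c.case_number} ({c.court}, {c.year})")
```

Note that such a check only catches citations absent from the database; it cannot replace reading the full text of a precedent, which remains a human duty.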
Without this verification, AI stops being a support tool and starts acting, in practice, as if it were a reliable primary source — which it is not. And that shortcut can open the door to serious distortions, particularly in cases involving freedom, property, or collective impact.
The concern over lawyers using AI and fake citations
The judge’s episode is not an isolated case in the eyes of the Indian Supreme Court. In another recent ruling, the court openly criticized the trend of lawyers using AI to draft petitions packed with citations that do not exist.
According to coverage from local legal outlets, the justices classified this practice as entirely inappropriate. The court expressed concern about the increase in petitions arriving at courts with supposedly robust procedural references that, when checked, turn out to be fabricated by AI systems.
In other words, the problem is not limited to judges. The culture of copying and pasting AI-generated content without verification is also draining courts’ time, as they need to spend energy checking whether each citation is authentic. This affects productivity, increases costs, and creates friction between the court and the professionals who practice before it.
It is not just India: courts around the world are on alert
The discussion about AI and fake rulings is not exclusive to India. Other countries have already had to deal with similar problems.
In October, in the United States, two federal judges faced criticism after admitting they had used AI tools in proceedings that resulted in decisions containing incorrect information. The episodes drew public questioning and intense coverage from specialized media, highlighting how excessive reliance on automation can erode the quality of judicial decision-making.
In the United Kingdom, the High Court of Justice of England and Wales issued a formal warning to lawyers, advising them not to use AI-produced material as the basis for case citations without verification. The trigger was a series of episodes in which petitions included nonexistent or partially fabricated decisions — something very similar to what happened in Andhra Pradesh.
In short, courts are realizing that without clear rules, AI can introduce a new type of risk: the appearance of authority without the backing of reality. And in law, form without real substance is a guaranteed recipe for conflict.
How the Supreme Court of India is trying to organize the use of AI
Faced with these risks, the Indian Supreme Court did not stop at criticism. In 2023, the court published a white paper on AI in the judiciary, with guidelines, best practices, and recommendations for the responsible use of technology in courts.
That document addresses, among other points:
- Possible applications of AI in support activities, such as case triage and case law research;
- The importance of constant human oversight at every stage where AI is used;
- The need to maintain strong institutional safeguards, preventing decisions from being outsourced to algorithms;
- Guidance aimed at judges, lawyers, court staff, and other legal practitioners.
The white paper’s central message is simple but powerful: AI can assist, but it cannot decide. The final word must always belong to people capable of interpreting context, evaluating consequences, and taking responsibility for what is being judged.
The Vijayawada case has become a symbol of a dilemma that goes far beyond India: how to use Artificial Intelligence in courts without giving up something essential — the trust in the judicial process and in the human ability to distinguish between what is a real precedent and what is merely well-written text produced by a machine.
As technology advances, episodes like this serve as a reminder that no algorithm replaces the duty to check, verify, and build arguments based on legitimate sources. At the end of the day, that is what keeps the credibility of any justice system standing, no matter how modern and digital it may be.
