How artificial intelligence fabricated fake rulings in an Indian court
The use of artificial intelligence within the judiciary just produced one of the most alarming episodes in recent years. In the city of Vijayawada, in the state of Andhra Pradesh, India, a lower court judge issued a ruling in a property dispute, citing four previous judgments as the legal basis for her decision. So far, nothing out of the ordinary for any standard court proceeding. The detail that turned this case into a scandal is that none of those four judgments ever existed. They were all fabricated by a generative AI tool, and the judge used them without verifying whether they were real.
When the defendants identified the fake citations and challenged the decision, the case moved quickly through the judicial system. It first reached the Andhra Pradesh High Court and then the Supreme Court of India, which responded firmly and classified the conduct not as a simple technical mistake but as an act of misconduct with a direct impact on the integrity of the adjudicatory process.
What happened in the Andhra Pradesh case
The problems began in August of last year when the civil judge of the lower court in Vijayawada issued a ruling on a disputed property. The court had previously appointed an official commissioner to inspect the property and submit a report. The defendants objected to that measure, but the judge rejected the objection, basing her decision on four previous judgments that supposedly provided legal support for her position.
The problem is that those four judgments simply did not exist. They had been generated by a generative artificial intelligence tool, and the judge incorporated them into her ruling without performing any kind of verification against the official databases of the Indian judiciary. When the defendants grew suspicious of the citations and went to check, they discovered that none of them corresponded to real cases. From that point on, they challenged the decision in the state High Court.
Generative AI systems are widely known for their tendency to hallucinate — a technical term that describes situations where the model presents false information with complete confidence, even going so far as to invent sources to lend credibility to fabricated content. This phenomenon is particularly dangerous in the legal field, where the accuracy of references is fundamental to the validity of any ruling.
The High Court reaction and the judge’s defense
When reviewing the defendants’ appeal, the Andhra Pradesh High Court acknowledged that the citations used by the judge were indeed fake. However, the court adopted a more lenient stance toward the magistrate. It accepted that the judge had made the mistake in good faith and, in a decision that generated considerable controversy, upheld the original lower court ruling.
In its reasoning, the High Court argued that although the citations were nonexistent, the judge had applied the correct legal principles to the facts of the case. According to the court, the mere mention of false or nonexistent rulings in the decision would not be sufficient grounds to overturn the order, as long as the legal logic behind it was sound.
The High Court also requested a report from the judge who had used the AI-generated judgments. She stated that it was her first time using an artificial intelligence tool and that she had believed the citations were genuine. She affirmed that she had no intention of misquoting or distorting judicial rulings and that the mistake occurred solely because she had trusted an automated source.
The High Court also urged legal professionals to prioritize the exercise of real intelligence over artificial intelligence, a phrase that resonated widely in both Indian and international media.
The Supreme Court of India did not let it slide
Unsatisfied with the High Court’s decision, the defendants appealed to the Supreme Court of India in New Delhi. And there, the reception was quite different. The justices of the country’s highest court were severe in their analysis and did not accept the argument that the episode could be treated as a good-faith error.
Last Friday, the Supreme Court suspended the lower court’s decision on the property dispute. In its pronouncement, the court classified the use of AI-fabricated judgments as an act of misconduct, not merely a decision-making error. The justices emphasized that the case represented a matter of considerable institutional concern, not so much because of the decision itself on the merits of the case, but because of what it revealed about the process of judicial determination.
The Supreme Court stated that fake judgments generated by artificial intelligence have a direct impact on the integrity of the adjudicatory process and announced that it will examine the case in greater depth. To that end, it issued notices to the Attorney General and the Solicitor General of India, as well as the Bar Council of India, summoning these authorities to weigh in on the matter.
The problem of fake judgments is not exclusive to India
This episode sets off a global alarm, and that is no exaggeration 🚨. Similar cases of fake judgments generated by artificial intelligence have already been recorded in other countries, showing that this is a troubling trend that comes hand in hand with the growing popularity of generative AI tools.
In the United States, two federal judges were called to explain themselves in October after the use of AI tools led to errors in their rulings. Also in the US, a New York attorney gained international notoriety when he used ChatGPT to prepare a court filing and ended up citing previous decisions that simply did not exist. The judge handling the case imposed sanctions on the lawyer, and the episode became a cautionary tale about the risks of irresponsible AI use in the legal field.
In England, the High Court of England and Wales issued a warning in June 2025, cautioning lawyers against relying on AI-generated legal material, after a series of cases in which fictitious or partially invented rulings were presented as supporting arguments. The concern is the same across all of these countries: generative AI tools create extremely convincing text, including fictitious legal references that appear absolutely legitimate to anyone who does not carefully verify them against official databases.
In another recent case in India itself, last month, the Supreme Court expressed concern about the growing trend of lawyers using AI tools to draft petitions. The court described the practice as absolutely unacceptable, according to the legal news portal LiveLaw.
Why artificial intelligence fabricates information so naturally
It is important to understand the mechanism behind these fabrications. Generative AI models work by predicting what the next most likely word in a sequence of text will be. They do not consult actual legal databases, they do not access official repositories of court decisions, and they do not have the ability to distinguish between a real judgment and a fictitious one.
When they receive a prompt like "cite previous judgments about property disputes in India", these models simply create text that looks like real legal citations, complete with court names, dates, case numbers, and even excerpts of reasoning, all entirely made up but with an extremely convincing appearance. This phenomenon, known as hallucination, is a structural limitation of large language models and cannot be solved simply by writing more careful prompts or using more advanced versions of the tools.
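The mechanism can be illustrated with a deliberately simplified sketch. A real language model samples one token at a time from learned probabilities; the toy generator below mimics that idea by sampling plausible citation parts at random. Every party name, reporter abbreviation, and number here is invented purely for the demo and corresponds to no real case; the point is that the output looks citation-shaped while nothing was ever looked up in a database.

```python
import random

# Toy illustration of why model output can look like a real citation:
# the text is assembled from plausible parts, never retrieved from records.
random.seed(42)

parties = ["Rao", "Sharma", "Patel", "Iyer"]   # invented names for the demo
reporters = ["AIR", "SCC"]                     # common Indian reporter labels

def fabricate_citation() -> str:
    """Assemble citation-shaped text by sampling parts at random.
    No legal database is consulted at any point."""
    p1, p2 = random.sample(parties, 2)
    year = random.randint(1980, 2020)
    return f"{p1} v. {p2}, {year} {random.choice(reporters)} {random.randint(1, 999)}"

# Stand-in for an official database of real judgments. It is empty here,
# because nothing the generator produces corresponds to an actual case.
official_database: set[str] = set()

citation = fabricate_citation()
print(citation)                       # looks legitimate at a glance...
print(citation in official_database)  # ...but verification fails: False
```

The takeaway mirrors the Vijayawada case: convincing surface form is cheap to generate, and only an explicit check against an authoritative source reveals that the reference does not exist.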
It is precisely this naturalness in fabricating information that makes using these tools so dangerous in contexts where factual accuracy is essential, such as the judiciary, medicine, and journalism.
Legal consequences and the future of AI in judicial proceedings
The legal consequences of episodes like this go far beyond a suspended ruling or a financial penalty. When fake judgments are used to support decisions, real people suffer concrete impacts. In the Indian case, the defendants in a property dispute nearly lost their rights based on precedents that never existed. If they had not identified the fake citations and challenged the decision, the ruling would have stood, creating a dangerous and potentially irreversible precedent.
This shows how the lack of verification in the use of artificial intelligence can compromise fundamental rights and undermine public trust in the justice system as a whole.
Several countries have already begun discussing specific regulations for the use of AI in legal contexts:
- United States: some federal courts now require attorneys to formally disclose whether they used artificial intelligence tools in drafting petitions and court documents.
- European Union: the AI Act classifies the use of AI in the judiciary as a high-risk application, requiring rigorous human oversight and full transparency about the methods used.
- India: last year, the Supreme Court itself published a technical document on artificial intelligence in the Indian judiciary, listing best practices and guidelines for the use of AI by judicial institutions, lawyers, and court officials. The document emphasized the need for human oversight and the importance of keeping institutional safeguards firmly in place.
- Brazil: the National Council of Justice has already published resolutions on the subject and has been closely monitoring how Brazilian courts are implementing AI tools in their workflows, always with the premise that the final decision must be made by a human.
The balance between technology and human responsibility
The central point of this discussion is not to demonize artificial intelligence or to suggest it has no place in the judicial process. On the contrary, AI tools can be enormously helpful in organizing documents, triaging cases, conducting legal research, and reducing the bottlenecks that make justice slow in virtually every country in the world.
The problem lies in using them without oversight, without verification, and without accountability. When a legal professional — whether a lawyer, judge, or prosecutor — outsources the foundation of a decision to a tool that has no commitment to factual truth, the consequences can be devastating. AI can be a powerful ally in research and information organization, but validation, critical analysis, and the final decision must remain exclusively human responsibilities.
The India case serves as a powerful reminder that technology is a tool, and like any tool, its value depends entirely on who uses it and how it is used. The Supreme Court of India made this clear by treating the episode not as a minor hiccup but as a matter that affects the credibility of the entire judicial system. As generative AI tools become increasingly widespread, episodes like this are likely to happen again, and the response from institutions needs to be proportional to the risk they represent 🧠.
