An End To Tareekh Culture: A Proposal For Tech-Enabled But Human-Led Justice
By Varun Pathak and Rudraditya

“This is somebody’s life. We can’t decide in five minutes.” - Juror #8 in 12 Angry Men (Reginald Rose)
Judicial accountability is not a mere seminar subject; it is a battle for legitimacy. India's judiciary is overwhelmed, with 47.9 million cases outstanding in district courts (including 5.09 million that are over ten years old) and 6.35 million in High Courts.
Litigation often feels like playing the odds because so much depends on uncertainties and variables outside the parties’ control: inconsistent case management, mounting pendency, routine adjournments, fluctuating costs, and unpredictable adjudication. These operational frictions are compounded by institutional delays: vacancies remain unfilled, examinations have been postponed, key recommendations have not been acted upon, and the judiciary has, at points, come to a standstill. Further, a significant share of a judge’s time is taken up by administrative and case-management work, which adds layers of delay before a matter ever reaches final adjudication; much of this is, in principle, reducible through better processes and support staff.
“Actually contested” cases constitute a minor fraction of monthly disposals. The National Judicial Data Grid, published under the eCourts project, illustrates the operational dynamics of the system, revealing that uncontested cases outweigh contested ones (roughly 71% vs 29%). DAKSH, a think tank, also notes that judges are severely constrained for hearing time (often hearing large volumes of matters daily), which makes it difficult to record daily orders and leads to more adjournments, further slowing the process.
To address this, we must consider making a judge’s work easier and more methodical through artificial intelligence (AI). The point is not to replace judges. It is to use AI as a preliminary drafting layer, the way a disciplined law clerk would: it reads the pleadings and record, extracts a clean timeline, identifies the issues, notes what is admitted or disputed, pulls the relevant law, and produces a structured draft with reasons. That draft is not “the judgment”; it is a starting document, to be reviewed later with human reasoning. The crucial safeguard is contestability. After the draft is generated, both sides get a narrow, structured choice: either (a) accept the draft as accurate on facts and citations, or (b) challenge specific components: a wrong fact, a missing document, a misread paragraph, a weak link in reasoning, or a faulty citation. This changes the courtroom dynamic for the better; instead of wasting dates on confusion and incomplete records, the next hearing focuses on pinpoint disputes. In addition, a structured pre-scrutiny by the registry and trained court staff or judicial clerks can catch filing defects, missing documents, service issues, and compliance gaps before the file reaches the judge, conserving hearing time for actual adjudication. The judge then performs the only role that cannot be automated: evaluating fairness, credibility, proportionality, and context; the human work of drawing distinctions and adjudicating. In other words: AI writes; the judge decides, thus retaining the “human element”, a point also echoed by CJI Surya Kant while addressing a symposium.
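To make the contestability idea concrete, here is a minimal, purely illustrative sketch in Python of how a “draft-plus-challenge” workflow might be structured. All class names, fields, and the example facts (DraftComponent, StructuredDraft, the annexure reference) are hypothetical and not drawn from any existing court system or eCourts specification; the only point is that every component of the draft carries a pointer back to the record and can be challenged individually.

```python
# Illustrative sketch only: a hypothetical data model for an AI-prepared,
# contestable draft. Nothing here reflects an actual eCourts interface.
from dataclasses import dataclass, field
from typing import List, Literal

@dataclass
class DraftComponent:
    """One traceable unit of the draft: a fact, issue, citation, or reasoning step."""
    kind: Literal["fact", "issue", "citation", "reasoning"]
    text: str
    source_ref: str  # pointer into the judicial record (e.g. annexure, page, paragraph)
    status: Literal["unchallenged", "challenged", "accepted"] = "unchallenged"
    objection: str = ""  # the narrow, specific objection raised by a party

@dataclass
class StructuredDraft:
    """The AI output: a starting document, not the judgment."""
    timeline: List[DraftComponent] = field(default_factory=list)
    issues: List[DraftComponent] = field(default_factory=list)
    citations: List[DraftComponent] = field(default_factory=list)

    def challenge(self, component: DraftComponent, objection: str) -> None:
        """A party contests one specific component instead of the whole record."""
        component.status = "challenged"
        component.objection = objection

    def disputed_items(self) -> List[DraftComponent]:
        """Only the pinpoint disputes reach the judge for human adjudication."""
        all_items = self.timeline + self.issues + self.citations
        return [c for c in all_items if c.status == "challenged"]

# Example: a party challenges one extracted fact; only that item goes to the judge.
draft = StructuredDraft(
    timeline=[DraftComponent("fact", "Notice served on 4 March", "Annexure P-2, p. 14")]
)
draft.challenge(draft.timeline[0], "Annexure P-2 shows dispatch, not service")
print([c.objection for c in draft.disputed_items()])
```

In a workflow of this shape, the judge would only ever need to rule on what disputed_items() returns; everything both sides accept stays settled, which is what converts vague adjournments into pinpoint hearings.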
And in truth, judicial authorship has never been a solitary act. Courts already rely heavily on “ratified rationalizations”, i.e., drafts, notes, and bench memos prepared by clerks, researchers, or staff, which the judge later endorses after scrutiny. AI simply makes that hidden workflow more visible and potentially more accountable, provided every claim in the draft is traceable to the judicial record and every citation is verifiable.
There is another problem of “algorithmic aversion”, i.e., the inclination to prefer human judgment over an algorithm even when the algorithm is more accurate. Why does this happen? Because people penalize algorithmic failures more harshly than human mistakes and quickly lose trust when they see a machine “get it wrong”, particularly when the system’s rationale appears opaque. AI has been described as a “high-throughput cognitive clerk”: clearly superior to humans in memory and scale, but structurally weaker at the “reason-giving” functions of adjudication, where transparency, contestability, and normative judgment do the real work. However, this asymmetry is narrowing. Newer “reasoning models” are increasingly post-trained (their intermediate outputs are corrected loop by loop until they reach the right result), and their performance improves both with more training compute and with more test-time compute (i.e., letting the model spend more inference “thinking time”).
As the State moves toward “intelligent systems” through eCourts Phase III for data-driven scheduling and prioritization, the question of legitimacy arises: will technology reveal systemic trends? Some legal scholars view this change as a “division of labor” within the judiciary: efficient technology handles monotonous, repetitive tasks, while human ethics and personal judgment uphold essential values. The OECD (Organisation for Economic Co-operation and Development) asserts that AI holds considerable promise inside the legal system, contingent upon the appropriate management of associated risks. This was also echoed by Justice Ravi Nath Tilhari in point 16 of his recent judgment (Civil Revision Petition No. 2487 of 2025, decided on 23rd January and triggered by AI-generated, non-existent citations): “The use of Artificial Intelligence (AI), in its present stage of development, may function only as a tool capable of assisting in tasks such as organising information and summarizing records. It does not possess consciousness, moral reasoning, or the capacity to weigh evidence, or appreciate the nuances of human conduct.”
The Madras High Court allowed AI to assist with record analysis in the Gammon v. Chennai Metro Rail Corporation arbitration case, but only in a very limited way. The AI can be directed to do “clerk work”: turning large files into searchable text, pulling out timelines, organizing papers, and listing what is already on the record. The Court made it clear that the AI cannot judge credibility, infer what someone meant, draw legal conclusions, or offer “opinions.” By contrast, in Christian Louboutin SAS v. Shoe Boutique–Shutiq, the Delhi High Court refused to treat ChatGPT responses as a basis for adjudicating issues (AI cannot substitute for human, and humane, adjudication), and the decree disposing of that suit also directed costs of ₹2 lakhs.
AI’s influence grows because it can deliver what “defined justice” promises: speed and consistency. If the legal test is fixed and the inputs are standard, an AI system can apply the same rule across thousands of cases quickly, producing outcomes that look uniform and predictable. That scale is attractive in a system choked by pendency. But this very advantage becomes a risk when the system cannot clearly explain why it reached a result. Where “explainability” is weak, AI may produce automated rationalizations: a tidy, persuasive narrative that sounds like reasoning but does not truly reflect what happened inside the model or which factors actually drove the output. Private developers have strong incentives to market such systems: profit and long-term institutional contracts (courts and governments are big customers), a branding advantage in portraying human decision-making as “biased” while presenting AI as “neutral”, and reduced accountability pressure when the model’s internal logic is hard to audit, challenge, or cross-examine.
Should India consider addressing pendency by transitioning from human judges to preliminary AI adjudication, several challenges arise. They are as follows:
First, incomprehensibility: justice is not merely an outcome; it is a reasoned journey that individuals must be able to follow. When the process becomes a black box, accountability shifts from judicially stated justifications to model logic that a litigant cannot contest. Despite advances in interpretability, proprietary systems and technical obstacles may render the “true” explanation inaccessible, resulting in unequal transparency that astute parties can manipulate.
Second, datafication: the compression of complex lived experience into rigid classifications renders unquantifiable aspects insignificant, and biased inputs may perpetuate existing biases.
Third, disillusionment: when individuals perceive the system as governed by “scores”, even accurate results may appear illegitimate. For example, with tools like Delhi’s Crime Mapping, Analytics and Predictive System (CMAPS), police will record more crime in the places they patrol more heavily. The tool then marks those spots as “high-risk”, which sends officers there even more often. That loop may look fair, but it actually entrenches old bias (a toy simulation of this loop follows the fourth point below). The same reasoning is then carried into court: bail-risk scores, “flight-risk” labels, forecasts of repeat offences, and watchlists. People end up fighting over a score, not concrete evidence.
Fourth, alienation: when the process resembles automated bureaucracy, engagement diminishes and authority shifts to those who program or manipulate the system. None of this prohibits AI; it establishes the non-negotiable conditions for its use: strict parameters that must remain open to challenge at any time.
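The feedback loop flagged under the third challenge can be shown with a toy simulation. This is emphatically not a model of CMAPS or of any real deployment; the allocation rule and the numbers are invented purely to illustrate the mechanism: where recorded crime drives patrol allocation, and patrol allocation drives what gets recorded, an initial imbalance in attention compounds even when the two areas behave identically.

```python
# Toy simulation (purely illustrative, not a model of CMAPS): patrols follow
# recorded crime, recorded crime follows patrols, and a small head start snowballs.
def simulate_hotspot_loop(true_rates, recorded_counts, rounds=10):
    """true_rates: actual offence rates per area (never seen by the system).
    recorded_counts: incidents recorded so far in each area."""
    counts = list(recorded_counts)
    for _ in range(rounds):
        # Each round, patrol attention goes to the area with the most recorded crime.
        target = counts.index(max(counts))
        # Heavier patrolling there means more incidents get recorded there,
        # regardless of what is happening in the areas left unwatched.
        counts[target] += true_rates[target]
    return counts

# Two areas with identical true offence rates; area 0 starts one recorded incident ahead.
print(simulate_hotspot_loop(true_rates=[1.0, 1.0], recorded_counts=[5, 4]))
# -> [15.0, 4]: the initial imbalance in attention, not the underlying behaviour,
#    ends up driving the "high-risk" label.
```

The same snowballing logic is what makes downstream scores (bail risk, flight risk, recidivism) contestable in principle but opaque in practice, which is exactly why the conditions set out above must remain open to challenge.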
“Reason is the heartbeat of every conclusion,” Justice Arijit Pasayat warned; without it, orders turn “lifeless”. If AI drafts, it must draft only what parties can challenge. And as Justice Ravi Nath Tilhari cautions, courts should prefer “actual intelligence” over the artificial. In the end, we should always keep the litigant at the centre of the judicial system, and not shy away from technology that serves better administration and the litigant’s betterment.





