In my prior post, I endorsed the prospect of AI conducting binding arbitration. Arbitration is a matter of consent: a court would not enforce an arbitration award by an AI unless the parties agreed to that form of arbitration in advance. What should be the role of AI when the parties do not consent to using AI and a human judge is presiding over the case?
I do not think it is likely, or desirable, that AI will replace human judges. The prospect of an AI Supreme Court, with nine laptops perched on the elegant chairs, perhaps with one wearing a jabot, seems a long way off and would require multiple constitutional amendments. In addition, having read many dystopian science fiction novels over the years, I am not quite ready to be ruled by robots.
However, in the shorter term, AI could—and, in my view, should—serve the same role that law clerks play now. The AI would review the briefs, summarize the arguments, and make a recommendation as to how the case should come out. After the judge decides how the case should be resolved, the judge could summarize the rationale to the AI, and the AI could prepare a draft of the judicial opinion based on that rationale. The judge would review and edit, the AI would cite-check and offer further comments, and the opinion would then be released.
The benefits of using AI for this purpose are twofold: accuracy and speed. If the AI is better and faster, judicial decision-making will be better and faster.
Accuracy: One might be skeptical that an AI, standing alone, could ever decide cases more accurately than a human judge. But it seems obvious that the combination of a judge and AI is better than a judge standing alone. The skills of the judge and the AI complement each other. The judge provides common sense, wisdom, life experience, a deep understanding of how doctrines fit together, and other qualities that are, at least at present, ineffably human. The AI provides an encyclopedic knowledge of legal materials, ironclad logical reasoning skills, and technical perfection in checking citations and fixing errors.
Speed: Using AI would accelerate judicial decision-making significantly, particularly in conjunction with lawyers’ use of AI to conduct discovery and write briefs (the subject of my next post). After a judge receives a motion, an AI could summarize the arguments on both sides in twenty seconds. The judge could decide the motion and offer a basic description of the rationale to the AI, and the AI could return a reasoned, if bland, draft of a judicial opinion within twenty seconds. If the judge is unhappy with particular aspects of the AI’s draft, the judge could give specific instructions to modify or improve aspects of the AI’s reasoning, which would be executed within seconds with no offense taken. The practice of waiting months or years for judicial decisions would become a musty memory of the past, like enforcement of the Establishment Clause.
For many reasons, a speedier justice system is a better justice system.
The folk wisdom that “justice delayed is justice denied” is correct. If people are denied access to injunctive relief, or money, during a span of time, they can never get that time back. If someone deserves to be paid when she is 40 and she is paid as part of a court order when she is 45, she is worse off than if she had been paid in the first place.
Delay reduces the quality of fact-finding. When it takes years to get from a lawsuit to a decision, documents are lost, witnesses forget things or die, and the facts on the ground change to the point where it’s impossible to reconstruct what happened years earlier.
Delay increases the expense of litigation, given that lawyers are paid by the hour and work tends to fill up the amount of time allotted to it.
Delay between the lawyers’ submissions and judicial decisions reduces the quality of judicial decision-making. It is not uncommon for judicial decisions to be released two or more years after briefing and oral argument. By the time the decision comes out, the judges will have forgotten the briefs and oral argument. Perhaps they could reread the briefs and re-listen to the oral argument audio, but the judges are busy with other cases and they are unlikely to undertake the same level of preparation as they did at the time of the hearing. Moreover, there are likely to have been multiple generations of law clerks between the time of argument and decision, each generation less familiar with the legal issues than the last. Many trial-level state judges do not have law clerks, and those judges tend to be incredibly busy; for them, it is particularly unlikely that they will recall the nuts and bolts of a dispute months after a hearing takes place.
Delay encourages the filing of weak lawsuits and the assertion of weak defenses. Litigants with bad arguments benefit from a judicial system that is inaccurate, slow, and expensive. They benefit from inaccuracy because cases that deserve to lose will sometimes win; they benefit from delay and expense because it encourages their opponents to settle, even if their opponents have a winning hand. This creates a vicious cycle where a slow justice system encourages the filing of weak lawsuits, which in turn makes the justice system slower. To some extent this effect is offset by plaintiffs with strong claims being deterred from filing on the ground that the system is too slow. But the net result is that bad lawsuits crowd out good ones: litigants with bad claims will be encouraged to file because the system often errs, while litigants with good claims will be discouraged from filing for the same reason.
AI will make judges release more accurate decisions more quickly. This is good.
You’re telling me there aren’t any harms to letting AI tell judges what to do?
Yes, that is basically what I am telling you. I’m not persuaded by the arguments against the use of AI by judges, as the Q-and-A portion of this post will reveal.
Q. Using AI is unfair to litigants, who do not consent to judges using AI to decide cases.
A. Litigants do not consent to judges using law clerks, either. Nor do they consent to the particular judge that they get. If you can’t choose your judge, why should you choose how your judge decides cases?
Q. Judges will rely too heavily on AI. Judges might pretend to be deciding the cases, but they will be so tempted to rely on the AI that they will barely check over the AI’s work and instead sun themselves on the beach.
A. Perhaps some judges won’t read the briefs and will instead merely read the AI’s summary, but the same thing happens now with law clerks’ summaries. Indeed, I think there is a greater risk of judicial over-reliance on law clerks. Judges will be nervous about rubber-stamping the AI, at least at first.
Q. Yes, but at least the law clerks are human! I’d rather have a judge rely on the output of another human than rely on the output of a machine.
A. It may take a while for AI to reach the level of a human judge, but it will not take long for AI to reach the level of a human law clerk. I spent two years as a federal law clerk. I was very confident in my own abilities at the time, but upon reflection, I was immature, did not understand how litigation worked, and knew little or nothing about many areas of law. Moreover, the question is not whether an AI is better than a human law clerk; the question is whether the conjunction of an AI and a human judge is better than the conjunction of a human law clerk and a human judge. I think the answer is yes; one might debate whether a computer is a better mathematician than a human, but a smart human with a computer is undoubtedly a better mathematician than a smart human assisted by a skilled abacus-user.
Q. What about judges without law clerks? They will transition from personally deciding cases to rubber-stamping an AI’s output, making judicial decision-making worse.
A. Judges without law clerks, typically state trial court judges, preside over thousands of cases simultaneously and can barely tread water. It is simply not possible to render thoughtful, well-reasoned decisions in every case. If there is any type of judge who will benefit from AI, it is them.
Q. Individual AIs will become too powerful. If thousands of judges use a single AI, then the negligence or bias of the AI’s designers will poison thousands of judicial decisions. Judicial overreliance on law clerks might be bad, but law clerks serve only one judge and in many cases serve only one year, ensuring that no one bad law clerk can cause too much damage.
A. This is definitely a problem, but judges might reasonably use multiple AIs, at least as to the bottom-line question of recommending how the case should be decided. Also, in difficult cases, the AI might not recommend a particular disposition but instead present the arguments on both sides and characterize the case as difficult.
Q. What does it even mean to say that AI will make judicial decision-making more “accurate”? Judging is ideological. “Improving” judicial decision-making just means “pushing judicial decisions in my preferred ideological direction,” and replacing the views of judges with the views of machines (or their designers).
A. Most decisions are not ideological. Many cases have right or wrong legal answers, and merely require the mechanical application of existing law. In other cases, existing law provides no clear answer, but the zone of reasonable debate is still narrow. For example, it might be undebatable what legal standard must be applied, and the only source of reasonable disagreement is how that legal standard should apply to particular facts.
Judges still sometimes get these cases wrong, because they are human and make mistakes. Judges sometimes make legal errors: They may be unaware of a case, or misunderstand a doctrine, or make a logical error. They also sometimes make factual errors, such as misunderstanding testimony. By filtering out these types of errors, AI would improve judicial accuracy.
Also, AI would assist not only with the bottom-line result, but with the reasoning. Lawyers constantly debate what particular judicial decisions mean. Often this is because the decisions are ambiguous or contain internal contradictions. Lawyers spend pages and pages of their briefs arguing whether the court should rely on one fragment of a footnote or a different fragment of a different footnote, or whether a particular statement is a holding or dicta. Hours of billable time would be saved if an AI could identify these issues before the opinion is released.
Q. Maybe judicial opinions would be more accurate in some sense, but they will be bland. ChatGPT’s outputs are cool, but they are all formatted the same way and they get boring after a while.
A. Many judicial opinions, particularly in trial courts, are bland. This is probably a good thing. Judicial opinions are supposed to serve the practical purpose of establishing and explaining the law, not to be literary. The judges who prefer to prepare artisanal judicial opinions can still do that while using the AI to cite-check.
Q. What about cases that are ideological? Do we really want a single AI, or maybe a batch of a few AIs, being responsible for offering recommendations in every single case?
A. I’m not concerned about AI moving the law in a particular ideological direction. First, judges could ask how a case should be decided under particular judicial philosophies; for instance, a judge could instruct the AI to ignore all legislative history in making a recommendation, or ask the AI to heavily weight Founding-era sources.
Second, I predict the AI’s recommendations will point in the direction of blandness rather than a particular political direction. This strikes me as basically OK. I am not fond of the phenomenon of judges going out of their way to hire conservative or liberal law clerks, and indeed many excellent judges do not apply ideological filters in their clerkship hiring. Is it really preferable that judges receive recommendations and draft opinions from ideological 26-year-olds?
Third, as a practical matter, judges will ignore or give little weight to AI recommendations in politically charged cases. The AI recommendations will be useful in the 99% of cases without political weight.
Q. But what about the law clerks whose careers we are displacing? Shouldn’t they get the chance to be mentored by a judge?
A. Judges could still hire law clerks. Perhaps the clerks could manage the AI, or prepare memos accompanying the AI’s work, or something. Also, law clerks, particularly federal law clerks, typically come from highly ranked law schools or the top of their classes at lower-ranked law schools. These lawyers will land on their feet regardless of whether they have a clerkship on their resume. I spent two years as a law clerk earlier in my career; I worked on many interesting cases and had high job satisfaction, but I would have become a lawyer even without these experiences, and I’m not sure that giving law school graduates high job satisfaction early in their careers is a high policy priority.
Next week: AI and the practice of law.
(Finally, a disclaimer: this post reflects my own views, not those of Jenner & Block.)