Welcome to my Substack! My name is Adam Unikowsky. I’m a lawyer at Jenner & Block in Washington, DC. I plan to post on matters related to law, legal practice, and the Supreme Court, among other things. In my first post, I will argue that parties should be permitted, and even encouraged, to submit their disputes to an AI for binding arbitration. Perhaps not quite yet, but soon.
Like many others, I find ChatGPT to be one of the most incredible technologies ever. It is leaps and bounds ahead of any AI I expected to see in my lifetime.
ChatGPT and other modern AIs will change all industries, but they are uniquely well-suited to changing the practice of law. AI is perfectly suited to the tasks of writing briefs and judicial opinions, and both lawyers and judges will come to rely on AI for those purposes. These changes will take some time because of the inherent conservatism of the legal profession, but they will happen. Human judges and lawyers will still have a role to play, but it will be very different from the role we play now. If you think these statements are starry-eyed or outlandish, you probably have not used ChatGPT.
Should we be worried about these changes? No. AI is going to provide tremendous benefits to the legal profession. Disputes will happen much less frequently, and when they do happen, they will be resolved more efficiently and more accurately. I imagine some well-meaning legal ethicists will seek to ban or restrict the use of AI by judges, lawyers, or both. Those ethicists are wrong. We should welcome and celebrate the use of AI.
I am going to write a series of Substack posts to address the impact of AI on law. In the first post, which is below, I will discuss the use of AI to decide disputes via binding arbitration. In future posts, I will discuss judges’ use of AI in deciding cases and lawyers’ use of AI in brief-writing and oral argument.
AIs should be permitted—and encouraged—to decide cases
In my view, it should be legal for parties to sign arbitration agreements in which an AI serves as the arbitrator, with no human input. In fact, there should be a policy favoring such arbitration agreements, akin to the current federal policy favoring arbitration.
What are the advantages of using an AI as an arbitrator?
It is accurate. According to initial reports, ChatGPT nearly passes the bar exam. In a few years, it will pass the bar exam. I would guess that within ten or twenty years, it will perform better on law school issue-spotters than most, if not all, humans. Before I used ChatGPT, this would have seemed like science fiction. But ChatGPT has made me a believer.
It is knowledgeable. Humans cannot be expected to be familiar with every doctrine and every case from every jurisdiction, so the bar exam is generally limited to the law of one jurisdiction plus the generic common law. By contrast, AI knows every doctrine and every case from every jurisdiction. AI will not only ace the actual bar exam but will also be able to ace all hypothetical bar exams on all topics.
It is cheap. ChatGPT is currently free. Maybe a paid license will eventually be necessary, but the cost will be far lower than the cost of an arbitrator.
It is fast. AI will resolve the parties’ dispute with a reasoned decision in twenty seconds.
It is unbiased. Judges take an oath to “administer justice without respect to persons”—that is, to focus on the facts and law rather than the identity of the litigants. Judges sometimes violate that oath, but AI never will. AI will never rule on the basis of the race, gender, religion, sexual orientation, or other personal characteristic of a lawyer or litigant. AI will also never be swayed by financial considerations.
AI will save both sides tremendous amounts of time and money by allowing the parties to seek guidance from the AI as often as they want, ex parte if necessary.
Suppose you’re a plaintiff with a potentially lucrative lawsuit, but you don’t know whether you will win. In the current world, you have to file the lawsuit or arbitration demand, litigate for years and expend millions of dollars, and hope for the best. Suppose you could consult an oracle, at the start of the case, that would tell you whether you’d win. Would this be helpful? Of course it would. If the oracle says “yes,” you press forward. If it says “no,” you can save years of time and millions of dollars.
AI is that oracle. You can present your complaint to it, explain the bad facts that are likely to come out in discovery, and ask, will I win? The AI will answer your question. The AI will be the actual adjudicator later on, so if it says you’ll lose, you’ll lose. If you’re worried about revealing bad facts to the other side, this conversation can be ex parte and the AI can be programmed to forget about it immediately.
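To make this concrete, here is a minimal sketch of what an ephemeral ex parte consultation could look like. Everything here is an assumption for illustration: `query_model` is a hypothetical stand-in for the arbitrator model’s API, stubbed so the sketch runs on its own. The design point is that the consult is a single stateless call, and nothing is stored once it returns.

```python
# Minimal sketch of an ephemeral ex parte consultation. The model client
# is hypothetical; the protocol is the point: each consult is a fresh,
# stateless session, and nothing is written to disk or retained.

def query_model(prompt: str) -> str:
    """Placeholder for a call to the arbitrator model's API.

    A real implementation would send `prompt` to the model with any
    memory or conversation history disabled. Stubbed here so the
    sketch runs standalone.
    """
    return "Prediction: plaintiff likely loses on the limitations defense."

def ex_parte_consult(complaint: str, bad_facts: list[str]) -> str:
    # Build a one-shot prompt from the party's own materials.
    prompt = (
        "You are the arbitrator who will later decide this dispute.\n"
        f"Complaint: {complaint}\n"
        "Facts likely to emerge in discovery:\n"
        + "\n".join(f"- {fact}" for fact in bad_facts)
        + "\nWill this party win? Give a brief reasoned prediction."
    )
    # Single stateless call: no conversation history is created, and
    # the local variables vanish when the function returns.
    return query_model(prompt)

print(ex_parte_consult(
    "Breach of contract claim against supplier.",
    ["The demand letter was sent two years after the limitations period ran."],
))
```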
The same goes for defendants. When defendants are sued, they typically investigate the facts and decide whether to settle or fight. But it is often hard to assess the probability of success. Many defendants have suffered through years of discovery only to be hit with a mammoth verdict. With AI, this never has to happen. After being sued, the defendant can simply feed the complaint, plus the bad facts, into the AI and ask what is going to happen. If the AI says the defendant will lose, the defendant can settle immediately and save money.
Of course, both sides might view the facts differently, so the AI might give opposite answers to both sides. There will still be some litigation. But in many cases, both sides have a pretty good idea of what the facts and law are, and a private twenty-second conversation with the AI can give a solid sense of how the dispute will come out.
Litigants can seek guidance from the AI before the dispute arises, thus preventing any dispute from arising at all.
Suppose you’re in a business relationship that has gone sour. You think that your adversary has breached your contract and that you’re allowed to repudiate the contract as a result. You know that your adversary doesn’t think it breached the contract and that if you repudiate it, you’ll be sued. So do you take the chance? In the current world, this is a difficult choice with no good options: stick to the contract and absorb the financial loss, or repudiate it, get sued, and pay lawyers to litigate an uncertain outcome. But if your arbitrator is an AI, you can just privately ask the AI: is it legal for me to repudiate the contract? If it says “yes,” you’re in the clear, because the same AI is going to be the adjudicator later on. If the AI says “no,” you can grit your teeth and stick to the contract. The other side doesn’t even have to know that you asked the AI, ensuring that the business relationship doesn’t deteriorate further.
To sum up, AI is faster, cheaper, more accurate, and can provide instant guidance at all stages of the dispute. It’s great.
How about the drawbacks?
Let’s move to Q&A mode.
Q. This is completely illegal. The Federal Arbitration Act and similar state laws require an arbitrator to be a human. Also, this is unauthorized practice of law.
A. Perhaps so. Statutes and ethical rules might have to be amended. That is OK.
Q. We shouldn’t amend those statutes and ethical rules. It is fundamentally creepy for human litigants to be forced to submit to the dictates of a robot.
A. Arbitration agreements are consensual. You will submit to the robot only if you agreed in advance to submit to the robot.
Q. Yeah, but what about consumer or employee arbitration agreements? People can’t negotiate with their cable company or bank. They just sign on the dotted line. It will be … off-putting for people to be forced to submit to robot arbitrators as a condition for obtaining a cell phone or credit card.
A. If people are squeamish about this, there could be a law requiring explicit notice of AI arbitration and creating a statutory right for consumers and employees to opt out.
Q. ChatGPT is not ready for prime time; it has all kinds of weird failure modes.
A. True, ChatGPT is not ready to conduct binding arbitrations. But we should be optimistic about this technology. It’s already incredible, and OpenAI has made clear that ChatGPT is essentially an early beta and will get much better. Given what OpenAI has achieved so far, shouldn’t we believe them? And we’re not just talking about OpenAI here. DeepMind and others are developing competing technology. DeepMind developed an AI that became the best chess player in the world merely by playing chess against itself over and over for a couple of hours. Chess is harder than law; I have little doubt that an AI will soon be able to solve legal problems as well as or better than any human.
Also, the question is not whether AI is perfect; it’s whether AI is better than a human arbitrator. Many arbitrators are terrible. Even the better ones make errors all the time. Here’s an interesting archive of questions ChatGPT gets wrong. I suspect human arbitrators would also get many of these questions wrong.
Q. But law cannot be reduced to a series of problems with right or wrong answers. Many cases are hard and require the exercise of discretion and reasoned judgment. Those are the cases in which a human being is needed the most.
A. Actually, those are the cases in which a human being is needed the least. Cases requiring “discretion and reasoned judgment” are cases in which different arbitrators would come out different ways. In these cases, the AI can’t be wrong; any decision it makes would match the decision some human arbitrator would have made. It’s most important for the AI to get the easy cases right, and if an AI can solve chess, it can get easy legal cases right.
Q. How is AI supposed to resolve disputes based on written submissions when the parties cannot even agree on what question is being asked? You can’t just reduce a case to a simple written query that both sides can agree upon.
A. Of course you can’t, but you don’t have to. Each side will prepare a written submission and present it to the AI, exactly as currently occurs before human judges.
Q. Maybe AI can resolve disputes presented in written submissions, but it cannot preside over evidentiary hearings and make credibility assessments.
A. True, but many, if not most, disputes are resolved on the papers. Many arbitrations are paper arbitrations without live hearings, and many lawsuits are resolved at summary judgment. Moreover, there is abundant research showing that humans are terrible at making credibility assessments based on live testimony. Fact-finders often unconsciously (or sometimes consciously) rest their credibility assessments on racial stereotypes, leading to rampant racial discrimination. I would not mourn a reduction in the number of live trials.
Q. Sure, but a trial is sometimes necessary, and in that scenario AI can’t resolve the dispute.
A. Fine. AI can be used to resolve the summary judgment motions. AI can’t do everything, but it can do a lot of things.
Q. AI isn’t really “thinking.” It’s just parroting back plausible-sounding strings based on billions of tokens of data. An adjudicator should be able to think.
A. Much of adjudication consists of looking back at prior decisions and applying reasoning from those decisions to the facts of the current case, which is essentially what AI does. As long as AI returns correct answers to legal questions, we shouldn’t care that it isn’t “thinking.” AI is already able to write and debug code based on natural language inputs. If the code works, no one cares that the AI isn’t “thinking.” It should be no different for the arbitrator.
If we want to get more philosophical here … AI is conducting a computation based on its exposure to lots of data, which the human brain also does. I’m not sure the computations in the human brain are entitled to pride of place. But I don’t think it’s necessary to ponder what “thinking” means, whether machines can experience qualia, and the like. If AI works, we should use it.
Q. We don’t really understand how AI works; it’s creepy to give so much power to this mysterious machine.
A. I agree there are multiple levels of incomprehension here: (A) 99.99% of lawyers cannot begin to understand how deep learning and neural nets work; (B) even the 0.01% who understand the topic generally cannot understand how ChatGPT specifically works, because its algorithms are trade secrets; and (C) even ChatGPT’s developers do not really understand how, in any particular case, the algorithm reaches the answer it reaches.
So what? This does not bother me. We don’t really understand how the human brain works, either. In fact, we have a much deeper understanding of how AI generates language than of how humans generate it. As long as the AI reaches the right answer, who cares?
Q. The designers of ChatGPT are biased and hence ChatGPT produces biased outputs, such as refusing to advocate for “conservative” legal positions while happily advocating for “liberal” legal positions.
A. There is considerable debate on whether ChatGPT has a political lean, whether that is changing over time, etc. But even if this is true, it should not disqualify ChatGPT. Humans have leanings, too. Even assuming ChatGPT is as liberal as Judge Reinhardt, which I don’t think it is, Judge Reinhardt was allowed to be a judge. And many of the disputes ChatGPT will resolve aren’t controversial political disputes anyway.
Q. Yes, but the problem is that if ChatGPT resolves thousands of disputes, the ChatGPT designers will have too much power. It is better that this power be distributed over thousands of adjudicators.
A. This is a problem, sure, but remember that we’re not just talking about ChatGPT. Soon there will be multiple competing AIs, each controlled by different sets of programmers, perhaps with different political views or funders. If there is concern that a particular AI is biased, then parties are free to agree in their arbitration agreements to submit the same dispute to multiple arbitrators. They can agree to majority rule or even provide that unless the AIs are unanimous, the case goes to a human adjudicator. Also, even just looking at ChatGPT, I think the “political bias” problem is solvable. I don’t think it would be that hard to offer a mode in which political filters, broadly described, are removed. In fact I think it’s harder to add them than take them away. OpenAI may make a business decision not to offer such a mode, but the technology is feasible and the gap would be filled by other developers.
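For the curious, here is a minimal sketch of the aggregation rule just described, under assumptions: the `Arbitrator` callables are toy stand-ins for independently developed AI systems, not any real API, and the escalation sentinel is purely illustrative.

```python
from collections import Counter
from typing import Callable

# Each "arbitrator" maps the parties' submissions to a ruling.
Arbitrator = Callable[[str], str]

def panel_decision(arbitrators: list[Arbitrator],
                   submissions: str,
                   require_unanimity: bool = False) -> str:
    """Collect one ruling per AI and aggregate.

    With require_unanimity=True, any disagreement escalates to a human
    adjudicator; otherwise the majority ruling controls.
    """
    rulings = [arb(submissions) for arb in arbitrators]
    winner, count = Counter(rulings).most_common(1)[0]
    if require_unanimity and count < len(rulings):
        return "ESCALATE_TO_HUMAN"
    if count > len(rulings) // 2:
        return winner
    return "ESCALATE_TO_HUMAN"  # no majority: fall back to a human

# Toy stand-ins for three AIs controlled by different developers.
panel = [
    lambda s: "claimant wins",
    lambda s: "claimant wins",
    lambda s: "respondent wins",
]
print(panel_decision(panel, "joint written submissions"))   # claimant wins
print(panel_decision(panel, "joint written submissions",
                     require_unanimity=True))               # ESCALATE_TO_HUMAN
```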
Q. ChatGPT could be manipulated by ex parte contact. If you tell ChatGPT enough times that your client should win, it will think your client should win.
A. There would be a rule of professional ethics barring this, as well as software safeguards. The AI would be programmed to forget any ex parte communications about a pending dispute, and efforts to evade this safeguard would be easily detected.
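One possible safeguard, sketched under assumptions (the platform, the contact cap, and the class are all hypothetical): the arbitration system discards the content of every consultation once answered, but keeps a minimal audit trail of who contacted it and when, so repeated lobbying attempts stand out.

```python
import time
from collections import defaultdict

CONTACT_LIMIT = 5  # hypothetical per-dispute cap on ex parte consults

class ExParteAuditLog:
    """Records party IDs and timestamps only; never what was said."""

    def __init__(self) -> None:
        self._contacts: dict[str, list[float]] = defaultdict(list)

    def record(self, party_id: str) -> None:
        # Log who contacted the system and when, nothing more.
        self._contacts[party_id].append(time.time())

    def flagged_parties(self) -> list[str]:
        # Parties exceeding the cap are reported for ethics review.
        return [p for p, ts in self._contacts.items()
                if len(ts) > CONTACT_LIMIT]

log = ExParteAuditLog()
for _ in range(7):
    log.record("claimant-counsel")
log.record("respondent-counsel")
print(log.flagged_parties())  # ['claimant-counsel']
```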
Q. If AI is given too much power, authorities will start to regulate it, stifling innovation and introducing an additional source of potential bias.
A. I am concerned about this problem, but legislators will try to regulate AI no matter what. I hope that Congress refuses to regulate AI and preempts all state law that would purport to do so. Perhaps a coalition of anti-regulation Republicans and pro-tech company Democrats could come together on this? Any such regulation would also raise First Amendment concerns. All in all, this is a problem but I’m not sure that using AI in arbitration would make the problem worse.
* * *
I’m sure there are other objections as well, but on balance the arguments in favor of AI are compelling. The technology has to improve a little bit (the easier problem), and we have to get used to the idea of AI resolving legal disputes (the harder problem). Both problems will be solved.