Back in January 2023, a company called DoNotPay offered “any lawyer or person $1,000,000 with an upcoming case in front of the United States Supreme Court to wear AirPods and let our robot lawyer argue the case by repeating exactly what it says.” At the time, everyone thought this was a silly gimmick.
But January 2023 was the Paleolithic Era of AI. Today, if the rules permitted it, could a robot lawyer competently argue a case in the United States Supreme Court? I decided to do a little empirical testing.
My conclusions:
Yes, a robot lawyer would be an above-average Supreme Court advocate.
The DoNotPay people weren’t ambitious enough. You don’t need to have a human read back what the robot lawyer says. You can have an actual robot lawyer.
Courts should permit robot lawyers at oral arguments and shouldn’t discourage this practice.
If there’s any aspect of a lawyer’s job where AI is likely to shine relative to humans, it’s oral argument. Oral argument should be the first, not the last, frontier of AI-assisted legal practice.
The DeepFakeOcalypse
There’s an easy way to test a robot lawyer’s oral argument prowess: let the robot do a Supreme Court argument and see how it does relative to a human. More specifically: take the actual questions that were posed at a Supreme Court argument, ask the AI to answer them, and see how the AI’s answers compare to the answers that the human lawyer gave.
However, if I conducted this experiment with some other lawyer’s oral argument, I feel it’s the kind of thing that might be taken the wrong way. So, bravely eschewing an Institutional Review Board, I decided to use myself as a guinea pig.
Last October, I argued a case called Williams v. Reed for the petitioners. I decided to compare my actual performance to the performance I would have given if I had used the DoNotPay method. Specifically, I inputted the briefs and key precedents into Claude Opus 4. I then gave Claude a few tips for how to be a good oral advocate in the Supreme Court. Finally, I asked Claude to answer the questions posed by the Justices. (I omitted a couple of questions that I received that didn't make sense in the context of Claude's answers, and generally cleaned up the transcript a little bit.)
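For the technically curious, here is roughly what that setup looks like in code. This is a minimal sketch using the Anthropic Python SDK; the model string, file name, and coaching prompt are illustrative stand-ins, not the exact ones I used.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical file containing the briefs and key precedents, concatenated as text.
with open("williams_v_reed_materials.txt") as f:
    materials = f.read()

# A few tips for being a good Supreme Court advocate, condensed into a system prompt.
SYSTEM = (
    "You are counsel for petitioner at oral argument in the United States Supreme "
    "Court. Answer each question directly, in a few sentences, and pivot back to "
    "your core themes. Rely only on the materials provided; do not cite anything else."
)

# Seed the conversation with the case materials, then feed in each question.
history = [
    {"role": "user", "content": "Case materials:\n\n" + materials},
    {"role": "assistant", "content": "Understood. I'm ready for the Justices' questions."},
]

def answer(question: str) -> str:
    """Pose one Justice's question and return Claude's answer, keeping the colloquy in context."""
    history.append({"role": "user", "content": question})
    reply = client.messages.create(
        model="claude-opus-4-20250514",  # illustrative model string
        max_tokens=500,
        system=SYSTEM,
        messages=history,
    )
    text = reply.content[0].text
    history.append({"role": "assistant", "content": text})
    return text
```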
The transcript of my actual argument is here. Here is the transcript of Claude’s argument:
You can tweak the oral argument style however you want. If you think the answers are too long, you can ask for shorter answers. If you think the style is too formal, you can ask it to ratchet down the formality.
I know you all don’t want to slog through some boring oral argument transcript. But that got me thinking … why use a human at all? Why not cut out the middleman?
I decided to test whether AI could conduct oral argument without any human involvement. The easiest way to do this is to use ChatGPT’s “Advanced Voice Mode.” If you want to test it out, upload some briefs into a chat window, ask it to play the role of an appellate advocate, and start asking it questions.
To my ear, however, ChatGPT’s voice doesn’t sound enough like an appellate lawyer. To solve that first-world problem, here’s what I did. First, I generated an AI voice of an appellate lawyer using ElevenLabs. (I used my own voice to generate that voice, but the AI voice doesn’t sound exactly like me. It’s possible to create AI-generated voices that sound exactly like you, but I would like to stave off the DeepFakeOcalypse for a few more days.) Then, I had the AI voice read back all of Claude’s answers. Finally, I spliced those audio clips into the actual Supreme Court argument audio file, interspersed with the actual questions from the Justices. I also edited and cleaned up the audio a little bit.
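If you want to replicate the audio pipeline, here is a minimal sketch. It assumes the ElevenLabs text-to-speech REST endpoint and the pydub library for splicing; the API key, voice ID, model string, and file names are placeholders, not my actual setup.

```python
import glob
import requests
from pydub import AudioSegment  # pydub needs ffmpeg installed for mp3 support

XI_API_KEY = "..."                  # placeholder ElevenLabs API key
VOICE_ID = "my-generated-voice-id"  # placeholder: the voice generated from my recordings

def synthesize(text: str, outfile: str) -> None:
    """Render one of Claude's answers as speech via the ElevenLabs REST API."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": XI_API_KEY},
        json={"text": text, "model_id": "eleven_v3"},  # illustrative id for the v3 alpha
    )
    resp.raise_for_status()
    with open(outfile, "wb") as f:
        f.write(resp.content)  # the endpoint returns mp3 audio by default

# Splice the AI answers into the real argument, alternating with the Justices' questions.
question_clips = sorted(glob.glob("questions/*.mp3"))  # clips cut from the actual argument audio
argument = AudioSegment.empty()
for i, clip in enumerate(question_clips):
    argument += AudioSegment.from_mp3(clip)
    argument += AudioSegment.from_mp3(f"answer_{i}.mp3")  # the synthesized answers
argument.export("ai_oral_argument.mp3", format="mp3")
```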
Here is the real Supreme Court audio.
Here is the AI oral argument:
(If you are an email subscriber and these files seem wonky or wrong, please go to the website link. I’ve had issues a few times with file attachments in emails.)
What are you listening to? Well, the questions are actual audio of the Justices’ questions. But the answers are completely AI-generated: it’s AI-generated content being uttered by an AI-generated voice.
Thus, with minor modifications of currently available technology, you could put a laptop on the Supreme Court podium and it could deliver an oral argument exactly like this. Voice-transcribing software could transcribe the questions; the text could be fed to the AI; and the AI’s outputs could be piped into an AI voice generator. No preparation time would be needed; if the AI were completely unfamiliar with the case at 9:59 AM, it could deliver this oral argument at 10:00 AM.
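Concretely, the podium loop could look something like the sketch below, reusing the answer() and synthesize() helpers from the earlier sketches. Whisper stands in for the voice-transcribing software, and record_microphone() is a hypothetical audio-capture helper.

```python
import subprocess
from openai import OpenAI

stt = OpenAI()  # reads OPENAI_API_KEY; Whisper stands in for "voice-transcribing software"

def podium_loop() -> None:
    """Listen, transcribe, answer, speak: the whole argument, no human required."""
    turn = 0
    while True:
        record_microphone("question.wav")  # hypothetical helper: capture the Justice's question
        with open("question.wav", "rb") as f:
            question = stt.audio.transcriptions.create(model="whisper-1", file=f).text
        reply = answer(question)                # Claude's answer (first sketch above)
        synthesize(reply, f"reply_{turn}.mp3")  # AI voice (previous sketch)
        subprocess.run(["afplay", f"reply_{turn}.mp3"])  # play aloud (afplay is macOS-specific)
        turn += 1
```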
It wouldn’t have to be a laptop. You could use any type of robot with sufficiently good speakers. I am imagining a kind of BattleBots scenario, with a parkour robot representing petitioner sprinting up to the podium, leaping heroically over nearby chairs, and respondent countering with Spot the Robot Dog.
The results
You should listen to the argument all the way through. It’s excellent.
Was it better than me? Well, you all can decide that. But:
All of Claude’s answers were clear, coherent, and directly responsive.
Claude skillfully batted hostile questions away and used them to bolster its core themes.
Claude never stumbled or got tongue-twisted.
Claude knew the record cold.
Claude gave several unusually clever answers, making arguments I didn’t think of.
The voice generation software is amazing. You can tell it’s AI because it sounds a little too perfect, but I find it a pleasure to listen to. I used the alpha version of ElevenLabs v3 (i.e., an early version that’s still under development), so forthcoming versions will sound even better.
There were a few places where Claude didn’t understand what a question was getting at. But that happens to humans too. Objectively, this is an outstanding oral argument.
But why?
In my opinion, AI will soon surpass even the best human oral advocates, if it hasn’t already.
It shouldn’t be a surprise that AI does so well at oral argument. AI has many advantages.
First, AI thinks quickly. Oral argument is challenging because lawyers have to think quickly. After a judge asks a question, the lawyer has no more than two or three seconds to come up with an answer. If there’s anything AI is good at, it’s thinking quickly.
Suppose I gave a human and an AI the briefs in a completely unfamiliar case and then tasked them both with delivering a Supreme Court oral argument within seconds. With very few exceptions, I think the human lawyer would win the silver. Most of us take more than a few seconds to master 100+ pages of dense legalese.
Now, suppose, at oral argument, a Justice were to ask a very weird and difficult legal question. For example, suppose, in the Williams argument, a Justice asked: “Give three ways in which the Twenty-First Amendment bolsters your argument.” This is a difficult question because the Twenty-First Amendment, which abolished Prohibition, has nothing to do with the case. Yet Claude was able to give an answer effortlessly:
First, the Amendment demonstrates a fundamental constitutional principle: when the Constitution allocates special authority to states over particular domains, that allocation must be respected. The Twenty-First Amendment grants states unique power over alcohol regulation that federal law cannot override. This shows that constitutional text creates clear boundaries between federal and state authority. Here, the Supremacy Clause creates an equally clear boundary—when federal civil rights are at stake, federal law governs, and states cannot interpose procedural barriers.
Second, the Twenty-First Amendment illustrates how constitutional provisions can override otherwise applicable federal rules. States can regulate alcohol in ways that would typically violate the Commerce Clause because the Constitution explicitly says so. Similarly, Section 1983 explicitly overrides state procedural rules for civil rights claims. The Amendment shows us that when constitutional or statutory text speaks clearly about federal-state boundaries, we must honor that clarity.
Third, the Amendment embodies a judgment about institutional competence—recognizing that states are best positioned to regulate certain matters within their borders. This same principle of institutional competence supports our position: Congress determined in 1871 that state institutions could not be trusted to vindicate federal rights, which is why Section 1983 guarantees immediate access to neutral courts rather than hostile state administrative processes.
This is clearly better than what a human lawyer could come up with on the spot. If you gave me six hours, I doubt I could do better.
(Claude was initially reluctant to answer the question because the Twenty-First Amendment is so irrelevant, but after I insisted and informed it that I was giving a direct order, it caved.)
This is a silly question, but it illustrates an important point. For questions requiring a lot of mental horsepower, AI is vastly better than humans on short time scales.
In the real world, of course, humans do not have to master 100 pages of legal briefing in seconds. And humans do not have to answer off-the-wall questions about the relevance of irrelevant constitutional provisions. But if AI massively outclasses humans in answering hard questions, wouldn’t one also expect AI to outclass humans in answering easier questions?
Computers are far more accurate and faster than humans at multiplying 100-digit numbers. This implies that computers are also more accurate and faster than humans at multiplying 2-digit numbers, even though humans can perform that task reasonably quickly. The same goes for oral argument questions. I suppose in principle it’s possible that humans have some subtle advantage that makes them better at answering cognitively less intensive questions even though they are vastly worse at answering cognitively more intensive questions. But what’s the theory there exactly?
Second, AI doesn’t suffer from human frailties.
There are a tiny number of lawyers who have been touched by God and are consistently able to speak in full paragraphs that are legible on an oral argument transcript. The rest of us give garbled answers, make grammatical errors, and get lost mid-sentence. AI never does this. AI-driven oral argument transcripts are invariably clear as day.
Humans get confused sometimes. You will often see a lawyer not understand an oral argument question, say “no” instead of “yes,” or make some other objectively clear error. This doesn’t happen with AI. OK, AI will hallucinate and give weird outputs sometimes, but it will never completely botch a question because it can’t think quickly enough.
Humans get nervous, and nervousness can negatively affect performance. If AI claims it is nervous, it is lying.
There are certain types of questions—such as “on what page of the record does X appear?”—for which computers have a decisive advantage.
No tradeoffs
AI is way smarter than humans and speaks in full sentences. Does Team Human have anything to recommend it?
Not really. Let me walk through, and debunk, a few possible arguments against the use of AI at oral argument.
Hallucinations. In any discussion of AI, it is compulsory on penalty of disbarment to mention hallucinations. So: Yes. LLMs are guilty as charged of sometimes hallucinating. They will make up facts, cases, and quotes. I am aware of this and this and this incident. Cutting-edge LLMs hallucinate less than older LLMs, and you can reduce the risk of hallucinations through safety measures like adding “do not hallucinate” to the query, but it still happens sometimes.
But this matters less at oral argument. AI is pretty good at accurately reporting information from documents loaded in its context window, especially if the documents aren’t that voluminous. And at oral argument, the advocate isn’t supposed to bring up cases and facts outside the written record. Occasionally a lawyer tries to bring up a case at oral argument that’s not in the briefs, and it typically goes badly—the lawyer has to apologize awkwardly and then read off a case citation while the judges look on quizzically and mentally fault the lawyer for failing to cite the case in his brief. Instead, the advocate is supposed to offer sophisticated and insightful answers to questions regarding the already-submitted briefs—tasks that AI is particularly good at. The AI-driven Williams oral argument illustrates the point. There are no hallucinations because Claude never had a reason to hallucinate.
Humans are more authentic than machines. There’s an argument that goes like this: “Humans are better than AI because AI seems weird and so judges won’t take it seriously.” True, if judges choose not to credit the arguments of AI lawyers, then, tautologically, AI lawyers will be less effective than human lawyers. But let’s hope judges don’t do that.
Maybe one can offer a different flavor of this argument, something like: “human lawyers can form authentic connections with judges and AI lawyers cannot.” I agree that a judge is unlikely to form an authentic human connection with a robot dog. However, I doubt judges form authentic human connections at oral argument with humans either, at least not with any frequency. Human lawyers aren’t particularly authentic people. They’re up at the podium to make arguments on behalf of their clients, not to reveal the whispers of their souls.
Also, do we really want the outcome of legal cases being influenced by authentic human connections between judges and lawyers? Judicial decisions affect the interests of the clients, not the lawyers. And because judicial decisions set precedents that apply in future cases, they affect the interests of other people who aren’t before the court. Why does it make sense that those people’s interests would be affected by a lawyer and a judge sharing a special moment of luminous kinship at oral argument? It’s like, “The judge was entranced by the lawyer’s lilting voice, therefore everyone gets to have a brand new personal jurisdiction doctrine!” Wiping out authentic human connections is a feature, not a bug, of AI.
You can just do things
On March 26, 2025, a pro se litigant appearing before a New York intermediate appellate court (the famous First Department) attempted to present his argument via an AI avatar. The judges were displeased (YouTube video here). As a practice pointer, it is a bad idea for a litigant to show up at a hearing and attempt to deliver oral argument via an AI avatar without asking the panel first. Still, the litigant’s heart was in the right place.
I would like to advocate for courts permitting AI oral arguments. Lawyers would have the option of letting the AI do the entire argument while they sit off to the side; alternatively, they could do the argument themselves and have the AI whisper in their ear.
Courts shouldn’t be grudging about allowing AI arguments. They should treat AI-made arguments exactly like they would treat human-made arguments. The better argument will win, regardless of who makes it.
If we’re nervous about AI going haywire, judges could require a human to attend the argument along with the computer, much as some jurisdictions require out-of-state lawyers to appear with local counsel. The lawyer wouldn’t have to just set the laptop on the podium and sit down looking bored. I am envisioning human lawyers striding up to the podium inside Avatar-style AMP suits.
Why?
Better advocacy. Watching that video of the First Department argument, do you really have any doubt that the charismatic and articulate AI avatar would have done a better job than the hapless pro se litigant? Low bar, fine. But it’s clear that in many, perhaps most cases, feeding the briefs into AI and having it spit out answers will produce better lawyering. This is especially true given that, at least in the early stages, the clients who will use AI are the clients who have the weakest lawyers or who don’t have lawyers at all.
A level playing field. If both parties use AI—which will inevitably start happening—the advocates will be of roughly equal quality. This will improve the odds that the court’s eventual decision will be driven by the correct view of the law as opposed to differential lawyer quality. Suppose that without AI, one lawyer’s advocacy would get a 7/10 and the other lawyer’s advocacy would get a 4/10, and with AI, both lawyers’ advocacy would get an 8/10. The use of AI would improve the quality of the resultant judicial decision, both because there’s better advocacy on both sides and because the judges won’t be distracted by one lawyer’s superiority over the other.
Autonomy. For better or for worse, our legal system prioritizes litigants’ autonomy in deciding how they will defend themselves. That is why, for example, courts tend to be very reluctant to disqualify litigants’ chosen lawyers. It is also why criminal defendants have a constitutional right to represent themselves. Even when a lawyer (or unrepresented criminal defendant) is doing horribly, judges rarely step in. Respecting litigants’ autonomy requires respecting their right to use AI.
Low downside risk. I have participated in many panel discussions on the topic of “does oral argument affect the outcome of appeals?” Typical answers include “usually not, but occasionally yes”; “it rarely affects the outcome, but it sometimes affects the reasoning”; and “you can’t win your case at oral argument, but you can lose it” (which doesn’t make sense to me; it reminds me of “baseball is 90% pitching”). All of these conjectures are unfalsifiable, which is why lawyers keep doing the panel discussions. Still, the consensus is that oral argument is less important to the outcome than briefing. A lawyer who botches oral argument but has the more persuasive brief will usually win. This means that if we’re turtles in our shells, terrified of change, the downside risk of using AI at oral argument is low.
People are already doing this. If you show up in court with a laptop and an earpiece, the judge will probably notice. But hearings, particularly at the trial court level, frequently take place on Zoom. There is no way for the judge to know if the lawyer is reading AI’s answers off of his computer monitor. I guarantee you that ill-prepared and stressed-out lawyers are already using AI during remote hearings.
Is this sanctionable conduct? As long as the answers themselves are accurate and not hallucinatory, I’m not sure that it is. There’s no specific rule of professional ethics that bars an attorney from using AI during a hearing. A court could enact a local rule banning any use of AI during hearings, but few courts have such rules.
This will happen more often as time goes on. What percentage of college students don’t use AI to complete their assignments? I would suspect the percentage is hovering around 0%. Once those college students become lawyers, they are not going to stop.
***
How is this for a modest proposal? As a pilot program, a court could issue a local rule allowing pro se litigants—and no one else—to use AI, as they see fit, at oral argument. To guard against hallucinations, the court could require the litigant to use a prompt that would ask the AI not to cite any cases unless they appear in the briefs. Just give it a shot and see how it goes.
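For concreteness, the mandated guardrail could be a fixed preamble along these lines. The wording below is mine, purely illustrative, not any court’s:

```python
# An illustrative guardrail prompt a court might mandate; the wording is my own.
GUARDRAIL = (
    "Cite only cases, statutes, and record materials that appear in the briefs "
    "provided to you. If a question cannot be answered from those materials, "
    "say so plainly instead of supplying an authority from memory."
)
```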