I know it's been a while since you posted this, but I also know you're generally a proponent of automating large parts of the legal system and I just had this thought on that general topic.
If you want to know what an automated, AI-driven legal system looks like, look at YouTube's system for handling copyright and dangerous or offensive content. We see:
- Pirates willing to put minimal effort into content theft are able to do so with impunity.
- Spurious strikes and reports are routinely used to suppress criticism.
- Sufficiently litigious and/or paranoid companies use automated (or sometimes human-driven!) systems to suppress and/or demonetize *all* discussion of their content, critical or otherwise.
- "Copyright trolls" have almost total impunity to claim ownership of public domain content, such as music that has fallen out of copyright, things like "the sound of rainfall", or, most commonly, music that was deliberately made part of a copyright-free library.
- Certain sorts of media discussion (especially those related to music) are nearly impossible to monetize due to hypersensitive copyright systems.
- Some topics of discussion (especially those related to minority groups or any sexual topic) are subject to near-automatic suppression and demonetization, regardless of whether the content of the discussion actually breaks ToS.
- Falling afoul of these systems can easily destroy a channel with no realistic prospect for appeal; popularity is more or less the only defense, and independent monetization is the only way to mitigate the issue.
- These flaws are so well known and their consequences so routine that well-founded takedowns of copyright-violating content (e.g., of "Man in Cave" by Internet Historian) are presumed to be spurious if the channel isn't transparently built on content theft, and sometimes even then. The same is almost certainly true for ToS violations, though I can't think of an obvious example.
All of this leads to a conclusion that feels like it should be obvious, but that many legal commentators seem to miss: consistent decision-making vaguely resembles correct decision-making, but that does not make the two identical.
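To make the distinction concrete in toy form: a perfectly consistent decision procedure can still be reliably wrong. A deliberately silly Python sketch (everything in it is hypothetical, not a model of any real system):

```python
# A "moderator" that is maximally consistent -- identical inputs always get
# identical rulings -- yet systematically wrong, because its one fixed rule
# doesn't track the actual merits of each claim.

def automated_moderator(video_id: str, claim: str) -> str:
    """Rules on every copyright claim the same way, every time."""
    return "takedown"  # deterministic, so 100% consistent

# Against a ground truth where most claims are spurious, this moderator is
# reliably, reproducibly wrong -- consistency tells you nothing about accuracy.
claims = [("vid1", "spurious"), ("vid2", "valid"), ("vid3", "spurious")]
correct = sum(
    automated_moderator(vid, claim) == ("takedown" if claim == "valid" else "dismiss")
    for vid, claim in claims
)
print(f"consistent rulings: 3/3; correct rulings: {correct}/3")  # -> 1/3
```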
As an appellate criminal defense attorney reading this, I feel like I need to apply to Old Glory for some insurance against robots: https://youtu.be/g4Gh_IcK8UM?si=4utn-n6iTbTMTttg
I suspect the many thousands of inmates in our federal and state prisons may wish to give AI a try, and may well have benefited from Claude serving as their public defender.
Have you determined that Claude is better than ChatGPT for doing this type of work?
The most obvious use of AI in criminal appeals would be for the court to submit the parties’ briefs to Claude and let it decide the case and draft an opinion. I mean, why not? The court can ignore Claude if it chooses to.
Replacing the advocates is less obvious to me. So many of the appeals that I see involve post-trial development of facts that are not clearly in the record, especially relating to ineffective assistance arguments. I am not sure that AI is well-suited here, but I am keeping an open mind about it.
As an aside, I am curious about Claude hallucinating. Hallucinations seem to require imagination and creativity. Those are human things. How long before Claude becomes an expert at “creative writing”?
To me the weird thing about this is why a common robbery is even in federal court. Why even have state courts if they can’t handle this sort of thing?
AI merely produces an average brief based on its inputs; it has no understanding of the law or of language in any sense of the word "understand". It doesn't know what the word "physically" means.
That AI wrote a better brief than the appellate lawyers in this case simply means that it was better than *these* lawyers. In any group of lawyers, there will always be those who are above average, and those who went to Yale. AI will be better than some, worse than others.
Because AI is a mathematical construct, it has no empathy and suffers the same biases as its input data.
AI also hallucinates caselaw. It is essentially correlation-based, so what it makes up sounds plausible (but isn't real).
I am not saying there is no role for it. Everyone should try it once. But use it with extreme caution and lots of oversight.
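For whatever it's worth, part of that oversight can be mechanized. A minimal sketch of what I mean, in Python (the names and the toy database are hypothetical, not a real citation checker): extract the citations from an AI-drafted brief and flag anything that can't be matched against a database of real cases before it gets near a filing.

```python
import re

def extract_citations(brief_text: str) -> list[str]:
    """Crude pattern for 'Volume Reporter Page' citations, e.g. '543 U.S. 220'."""
    return re.findall(r"\b\d{1,4}\s+(?:U\.S\.|F\.[23]d|S\. Ct\.)\s+\d{1,4}\b", brief_text)

def unverified(citations: list[str], known_cases: set[str]) -> list[str]:
    """Return every citation that does not appear in the verified database."""
    return [c for c in citations if c not in known_cases]

brief = "Under 543 U.S. 220 and 999 F.3d 123, the enhancement was plain error."
database = {"543 U.S. 220"}  # stand-in for a real reporter/docket lookup
print(unverified(extract_citations(brief), database))  # -> ['999 F.3d 123']
```

A flagged citation isn't proof of hallucination, of course, but it is exactly the kind of cheap check a human reviewer should run before trusting anything correlation-based.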
Speaking as an 80th percentile criminal defense lawyer on only my best days, this is unsettling. But your demonstration is fascinating. I don’t agree with the normative claim that better-written laws give better notice of proscribed conduct, or that clarifying written laws should be even a top-100 concern for criminal justice reform. But the prospect of an appellate remedy for an obvious trial error within weeks for someone who can’t get an appeal bond is intriguing. Maybe such a quantum leap in efficiency also destroys the rationale for harmless error review, which would be great.
Still, I think (I think) the world is a better place when process takes time, when we ignore inefficiency in cases dealing with core liberties, and when bail is liberally granted. Thanks for this.