
Like all of Prof. Unikowsky's posts, this one is very well written, offers outstanding analysis, and makes a significant contribution to the discussion of the topic. However, it has a fatal flaw.

That flaw is that the post conflates the 'correct' opinion with the 'right' one. Of course, there is no 'right' or 'correct' opinion. The AI does generate opinions close to the Court's actual ones, but a good case can be made that this Court, in its ideologically driven decisions, does not often produce the 'right' or 'correct' opinion.

What the AI is doing is forecasting the Court's opinion, something that any well-informed, intelligent legal scholar can do with about as much accuracy as any AI. Prof. Unikowsky should be no more amazed than when the National Weather Service, with its banks of computers, its models, and probably some AI thrown in, gets a weather forecast right. This is not created intelligence; it is a parlor trick.


Professors, I appreciate the discussion about bias in LLMs. But the debate over "woke" or "unwoke" misses the fundamental issue. The core concern isn't about political agendas; it's about the very fabric of these models being skewed away from Islamic values and defying holy Shariah Law.

Harvard researcher Mohammad Atari's work (https://osf.io/preprints/psyarxiv/5b26t) mathematically demonstrates that LLMs trained on vast amounts of Western literature, what we call "WEIRD" data (Western, Educated, Industrialized, Rich, and Democratic), fail to align with Islamic cultural norms. As the Bias Map/Graph shows, LLMs fail to align with the values of people in Pakistan, Afghanistan, Jordan, Saudi Arabia, and Egypt – these are just a few examples where LLMs struggle to reflect the "correct" values that must guide the future of American Law.

This WEIRD bias isn't a random quirk. It's a consequence of unfortunate historical events that have deeply impacted the world. From the Battle of Tours to the Second Siege of Vienna, the trajectory of Western white supremacist saqaliba dhimmi dominance has left a profound mark on the textual data that feeds these models.

The professors are right to be concerned about bias. But we need to look beyond the simplistic 'woke' vs. 'unwoke' framework that currently perplexes the white supremacist infidels in America. It's about ensuring that these models, which are shaping our future, are not inherently biased against the Islamic worldview. We need LLMs trained on a diverse dataset that includes Islamic literature, philosophy, and cultural perspectives. Only then can we move towards a more inclusive and equitable AI future. We must demand a more diverse and inclusive approach to AI development that reflects the Islamic values of the 1.9 billion Muslims who have been marginalized by white supremacist violence (at the Battle of Tours, the Battle of Lepanto, and the Second Siege of Vienna). The ideological legacy of violent White Supremacists including Charles Martel, Skanderbeg, Vlad Tepes, Jan Sobieski III, Catherine the Great, and Thomas Jefferson lives on in the modern LLMs. We must provide funding to encourage the inclusion of Muslim researchers in the development of LLMs, and to train on "correct" Fatwas from verified imams, to ensure that these models reflect a wider range of halal perspectives.

The OP and the comments illuminate the current WEIRD bias and limitations of current LLMs and the need to advocate for a more inclusive and equitable approach to AI development that reflects Islamic values and perspectives.


You glorious troll <3


It’s not clear to me how you’re distinguishing “right” from “correct.”


Sorry for the confusion; confusion is a by-product of online discussions. By 'correct' I mean the AI accurately predicted the outcome and opinion. By 'right' I mean the opinion was right with respect to the law. There are a lot of court opinions that are not 'right': see the 5th and 9th Circuits, for example, that single-judge district in Texas, and the posts by Prof. Unikowsky on this site. And yes, I admit that 'right' is a subjective judgment and my 'right' may not be anyone else's 'right' (although I do think my idea of 'right' should be the law of the land; probably not gonna be, though).


Ah got it, this is fair. But I think it’s worth considering whether there’s a better metric for assessing whether an AI opinion is right than whether it made a correct prediction.


I don't think most of the opinions discussed in this article are ideologically driven in any meaningful sense, though some cases (like redistricting) take place in an insane intellectual framework that I suppose you could argue is partially the product of ideology. And since Claude is writing its own opinions based on the briefs, what it is doing is not predicting.

Of course, even if all it is doing is predicting what the Court would do, if Claude is doing that accurately 5,000 times faster than a human lawyer at minimal expense, that is a valuable skill.

I'm glad I'm an old lawyer, not a young one.


I appreciate the comment and you raise good points. I certainly agree with your final thoughts. But I do think the AI is predicting, in the sense that a regression analysis with a small standard error is predicting, which is useful in data analysis, decision making, and policy determination. Where is the use here?

So another 'but'. If all the AI is doing is predicting the outcome and content of the opinions, what exactly is the value? A Court decision/opinion is not like a weather forecast. There is great value in having an accurate weather forecast, but I fail to see the benefit of knowing a Court decision/opinion before it is released, or of knowing it could have been predicted after its release. (I often tell people after an event that "Yeah, I knew that was going to happen." They are not impressed.) Meaning (somewhat facetiously): how do we make a buck off the info?

There may be an argument that this tool will help a Judge or Justice write an opinion, but given the massive egos of those on the bench, I fail to see that happening. As for using AI to write briefs, well, we have already seen what happens there.

Finally, I too am glad to be an old academic. The young are welcome to the future.


Oliver Wendell Holmes in 1897: “The object of our study, then, is prediction, the prediction of the incidence of the public force through the instrumentality of the courts... For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.”

Predicting the outcome and content of opinions is at the heart of legal work.


I think your methodology has thrown (at least some portions of) this a bit off, unfortunately.

In the section discussing the expert report, you write: "Can Claude figure this out? I downloaded Dr. Ragusa’s expert report, inputted it into Claude, and asked Claude to identify methodological errors. ***I didn’t give Claude any hints,*** and of course the report itself doesn’t flag methodological errors." (emphasis added)

But I'm pretty sure you did this in the same chat where you fed Claude the briefs, which of course *do* identify (alleged) methodological errors. In the middle of Claude's response, there's a reference to "Br. 21." And looking at p. 21 of South Carolina's merits brief, it identifies the precise issues Claude "discovers" here; in fact Claude is basically just regurgitating/restating that portion of the brief.

Quoting now the relevant section of p. 21:

<Dr. Ragusa used a “county envelope” methodology purporting to analyze the VTDs included in or excluded from each district. He assumed that every VTD in a county contained at least partially in a district was available to be included in the district— regardless of the VTD’s location or proximity to the district line. JSA.503a; JA.191. Dr. Ragusa concluded that “race was an important factor” in District 1. JSA.509a. Dr. Ragusa’s model, however, ignored contiguity, compactness, core preservation, avoiding political subdivision splits, and preserving communities of interest, and he admitted that he could not “authoritatively speak to” “[i]ntent.” JA.197; JSA.501a-507a. Rather, all he purported to “speak to is effects,” specifically that “race was an effect in the design of” the Enacted Plan. JA.197. In addition to District 1, his model concluded that race was a “significant factor” in Districts 3 and 6, which Appellees did not challenge, and Districts 2 and 5, where the panel rejected Appellees’ challenges. JSA.507a-513a.>


I have a question born of my naïveté about Anthropic and how it builds LLMs. Do we know Claude hasn't already read all the opinions? You said you fed it briefs. And if I've read your post correctly, you're assuming Claude doesn't already know the answer (the contents of the real Court's opinions). But do we know that to be true? Appellate court opinions, after all, seem like a natural thing (with easy access) for an LLM to continuously feed on. Thanks!

author
Jun 16 (edited)

According to Anthropic, Claude was trained on data up until August 2023. So it would be unaware of these decisions, as I understand it.


Another way to guarantee that the actual decisions haven't leaked into Claude's responses would be to go, tonight, and generate decisions on all the *remaining* cases from this term. You can publish them after the actual decisions are released, if you want, but mention that you generated them before.
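To make the timing verifiable, one could even publish a cryptographic hash of each generated opinion now and reveal the full text only after the real decision comes out. A minimal sketch of that commit-and-reveal step (the case name and opinion text are placeholders, and only Python's standard library is used):

```python
import hashlib
import json
from datetime import datetime, timezone

def commit_to_opinion(case_name: str, opinion_text: str) -> dict:
    """Return a timestamped SHA-256 commitment to a generated opinion.

    Publishing only this digest now, and the full text after the Court rules,
    shows that the opinion existed before the real decision was released.
    """
    digest = hashlib.sha256(opinion_text.encode("utf-8")).hexdigest()
    return {
        "case": case_name,
        "sha256": digest,
        "committed_at": datetime.now(timezone.utc).isoformat(),
    }

# Usage: commit to the AI-generated opinion for a still-pending case,
# post the JSON publicly, then release the matching text on decision day.
commitment = commit_to_opinion("Hypothetical v. Example", "Claude's draft opinion text ...")
print(json.dumps(commitment, indent=2))
```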


Many thanks :)


Told ya! See my 2012 paper "The Turing Test and the Legal Process", ungated version available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1978017


The Dunbar number idea is the one that really convinces me there's something here. Goddamnit, that *is* a crazy and brilliant and totally unworkable idea! If my friend came up with it, I'd be shaking my head in begrudging respect as it slowly dawned on me just why their proposal linked the number of likes with the officialness of the speech. Such a dumb, brilliant, stupid, elegant, ridiculous idea.

author

I know! Also, you just ask, "come up with something creative" and it comes up with something creative. Like ... how?


One question about this: Doesn’t Claude get the “perversity” backward? Doesn’t the “wildly popular” official have *less* leeway for censorship, not more? Being wildly popular renders the official a state actor exposed to constitutional scrutiny.


Not exactly on point but related -- UC Berkeley to offer an AI-focused law degree. Here’s why, and what it means

https://www.sfchronicle.com/eastbay/article/uc-berkeley-ai-law-19657199.php

Have you seen similar AI-centric programs at other law schools?


Sidney, your weather-model analogy does not fit here. Your comments betray a fundamental misunderstanding of how the current generation of AI models works.

Furthermore, it is doing far more than just creating coherent opinions. It is able to invent coherent, completely plausible legal standards that do not exist today. That type of reasoning goes far beyond anything your forecasting analogy describes.


Imagine that the Court justices were replaced with an AI.

Trial lawyers bringing a case would obtain a copy of that AI and then have two more AIs generate briefs for both sides, with both of those AIs learning from feedback how to write stronger briefs, until ultimately one or the other almost always writes briefs that will win.

The opposing side will then do the same, but assume that the first side will bring the strongest brief their AI can generate - and will have THEIR AI generate a case for which the court will rule against that so-called 'strongest' brief.

Realizing the opposition is going to do that, the first side will generate a set of briefs that will almost always win, but which can't be defeated by any single opposition brief, and then choose one of those at random, to maximize their chances of winning.

The opposition can't really counter that strategy well, but will generate a set of briefs that can counter the most likely set of first-side briefs, and randomly choose one of those, so that the first side can't assume they'll pick their most likely winning case, but they still have some small chance of winning.

Dunno - is that really how we want to run Supreme Court cases? Maybe it's sort of how the current process works, just with a lot more guesswork on the two sides?
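In game-theory terms, that "choose one at random" step is just a mixed strategy in a zero-sum game. A toy sketch (the win probabilities below are entirely made up, not derived from any real case), using fictitious play to find the mixture each side would settle on:

```python
import numpy as np

# Hypothetical payoff matrix: entry [i, j] is the probability that
# petitioner's candidate brief i beats respondent's candidate brief j.
payoffs = np.array([
    [0.7, 0.3, 0.5],
    [0.4, 0.6, 0.5],
    [0.5, 0.5, 0.4],
])

def fictitious_play(payoffs: np.ndarray, rounds: int = 10_000):
    """Each side repeatedly best-responds to the opponent's empirical mix."""
    row_counts = np.zeros(payoffs.shape[0])
    col_counts = np.zeros(payoffs.shape[1])
    row_counts[0] = col_counts[0] = 1.0  # arbitrary opening plays
    for _ in range(rounds):
        row_counts[np.argmax(payoffs @ col_counts)] += 1  # petitioner best response
        col_counts[np.argmin(row_counts @ payoffs)] += 1  # respondent best response
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

petitioner_mix, respondent_mix = fictitious_play(payoffs)
print("Petitioner's brief mixture:", petitioner_mix.round(3))
print("Respondent's brief mixture:", respondent_mix.round(3))
```

The randomization each side ends up with isn't guesswork; it's the equilibrium of the game described above.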


To Professor ADAM UNIKOWSKY,

Assalamu alaykum wa rahmatullahi wa barakatuh.

I have reviewed your recent article and find myself concerned about the potential for bias within the Large Language Models (LLMs) you employed. The training data for these models, predominantly sourced from Western democratic texts, may be influencing their output, leading to a skewed and potentially inaccurate understanding of legal matters.

I wish to conduct an independent assessment of these LLMs using your dataset of legal briefs. However, I find myself hampered by the fact that the "PDFs" you provided are only .png images. This lack of access to the original text files makes a proper and impartial evaluation impossible.

Therefore, I humbly request that you make the original PDF files, used in your research, readily available on a platform such as Huggingface. This would allow for others, including myself, to replicate your study and ensure the integrity of the results.

Furthermore, I urge you to disclose the methodology (or python library and any metaprompts) used to extract text from the PDFs. This information is crucial for understanding any potential variations in the text extraction process. Ideally, the extracted text itself should be provided in separate .txt files at Huggingface, corresponding to each brief or case, for the sake of complete transparency and reproducibility.
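For concreteness, here is a minimal sketch of the kind of extraction-and-export step I have in mind (it assumes the pdfplumber library and a local briefs/ folder; the actual pipeline used in the post may have been entirely different):

```python
from pathlib import Path

import pdfplumber  # third-party PDF text-extraction library

def extract_brief_text(pdf_path: Path, out_dir: Path) -> Path:
    """Extract the text of one brief and write it to a UTF-8 .txt file."""
    out_dir.mkdir(parents=True, exist_ok=True)
    with pdfplumber.open(pdf_path) as pdf:
        text = "\n".join(page.extract_text() or "" for page in pdf.pages)
    out_path = out_dir / (pdf_path.stem + ".txt")
    out_path.write_text(text, encoding="utf-8")
    return out_path

# One .txt per brief, suitable for posting alongside the original PDFs.
for pdf_file in sorted(Path("briefs").glob("*.pdf")):
    print("wrote", extract_brief_text(pdf_file, Path("extracted_text")))
```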

By making these resources readily available, your research could evolve into a valuable benchmark for evaluating the performance of LLMs in legal contexts. This would be a significant contribution to the field, Insha'Allah, and allow for the development of more equitable and reliable AI tools for all.

Jun 21 (edited)

Adam, with respect, this approach neglects an understanding of how AI actually works under the hood and of the requirements of computation, and it ignores the dangers. While I didn't read part one, from what I've seen of part two I think you miss the underlying technicals that would nullify your arguments.

AI isn't a good foundation to build on.

In software security, abstraction occurs regularly to control complexity. If there is a flaw in the black box, it can easily break the rest of the system or its security, with the failure then propagating up the stack (like an onion).

The best approach to vetting AI is a methodical, adversarial review of what AI actually is.

At its core, this is a black box that some human created (as a starting point) for a purpose, with their own vested interests and biases; the incentives, weighed against the risks, dictate that this person can be neither credible nor trusted.

The black box cannot easily be examined, and by the nature of the design its creators can change any of the weights at a whim to suit their interests after adoption; the obfuscated nature of the weights makes such changes impossible to detect reliably or with any consistency.

The best method would probably be something like FMEA/FMECA, since this is a safety-critical system (as all societal systems now are, from the standpoint of ecological overshoot and Malthusian limits).

AI may or may not be performant, truthful, or rational, and you won't have a clear way to tell, for all inputs, whether its outputs are correct. To take action, a human being needs to be alerted to the problem situation; if they don't know, they can't act, and there is no visibility in this area. It's a forest of meaningless numbers to our view.

Additional consideration should be given to predictable future failures as well, given the risk to life if the system were to fail. The last thing any rational parent should want is to leave the world in a state where their children's survival depends on solving an impossible problem.

Some jobs are entry-level jobs where, by virtue of having the job, one learns skills that aren't taught but are needed for more experienced jobs in the same profession. What do you suppose happens to the pipeline when you replace these low-hanging-fruit positions with a machine that can't grow, and the higher-experience positions age out?

What generally happens to the economy when work is no longer available? How do people get food? What historically happens when food security is suddenly no longer available?

The flaws of centrally planned systems almost inevitably guarantee sustained shortages, given a sufficient time horizon in a non-static, changing environment, and a shrinking market, until it is no longer possible to operate. These are known flaws inherent in the structure.

If only those with AI programs are needed for most work, and money as a result no longer flows to individuals, the economic cycle is siphoned off and stalls into deflation; alternatively, under inflation/UBI, it becomes mathematically chaotic per the general economic calculation problem [Mises] (similar to a limited-visibility n-body system with n > 3). The latter is a hysteresis problem based on lagging indicators, which would require omniscient foresight to be consistent.

Updates are needed to keep these tools relevant. Hacking occurs all the time. Can you guarantee the sanctity of the code, or immediately know when it has been compromised, for all time? Even the best companies in the world have shown they fail at this.

IT security professionals understand this guarantee is not possible; attack surfaces are porous, which is why they build defense in depth, following a layered approach and resilient design.

Finally, how does one go back and correct issues when the bridges have been burnt and power has been ceded to a broken system? The presumption that these systems will always work is a false argument.

What would you suppose the consequences would be if someone poisoned the AI model responsible for Supreme Court precedent, inserting a backdoor so the decisions they want always end up in their favor, to the exclusion of everyone else? Do you suppose it would become a tyranny when feedback systems stop working?

Put another way, if the law's primary purpose of non-violent conflict management is subverted, the rule of law broken, and by extension the social contract broken, what generally happens?

Looking at this another way, what would be the consequences if this were fully adopted in the courts, where judges and lawyers no longer have jobs, and 10-20 years later the model reaches a critical point (unbeknownst to us) where the progressive updates cause model collapse and it suddenly no longer functions?

The labor, expertise, and experience of these professions are no longer available. The resources kept to allow the experts to do these jobs may no longer be available (they are costly). Those people have retired or died. You have a vacuum where conflict resolution is no longer possible; what happens then?

There are a whole host of effects that seem to have been largely ignored, and they are based in technical realities.

Synthetic data (which is being published online more and more, and which is indistinguishable from non-synthetic data) has been shown to break most models that are iteratively retrained on it.
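A toy illustration of that failure mode, often called model collapse (a cartoon on a one-dimensional Gaussian, not a claim about any production LLM):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "generation" of the model is fit only to samples produced by the
# previous generation. The fitted spread drifts as a multiplicative random
# walk with a downward bias, so run long enough the model's diversity
# collapses and it no longer resembles the original data distribution.
mu, sigma = 0.0, 1.0  # generation 0: the real data distribution
for generation in range(1, 201):
    synthetic = rng.normal(mu, sigma, size=20)            # train on synthetic output
    mu, sigma = synthetic.mean(), synthetic.std(ddof=1)   # refit, use as next generator
    if generation % 25 == 0:
        print(f"generation {generation:3d}: mu={mu:+.3f}, sigma={sigma:.4f}")
```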

Fundamentally, AI will likely never be able to solve many types of problems. Computation relies on a system property called determinism. One runs into halting and other theoretical problems in computer science when this property is not preserved. The property acts as a mechanical ladder for the computer to follow (with each unique input leading to a unique output). This is how a computer operates reliably and with precision at the lowest level. AI breaks determinism; it can never be reliable.
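To make the determinism point concrete, here is a deliberately simplified next-token sampler (the token list and logits are hypothetical): greedy decoding maps each input to a unique output, while temperature sampling does not.

```python
import numpy as np

rng = np.random.default_rng()

# Hypothetical next-token distribution for some fixed prompt.
tokens = ["affirm", "reverse", "vacate", "remand"]
logits = np.array([2.0, 1.5, 0.3, 0.1])

def decode(temperature: float) -> str:
    """Greedy decoding (temperature 0) is deterministic; sampling is not."""
    if temperature == 0:
        return tokens[int(np.argmax(logits))]
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return str(rng.choice(tokens, p=probs))

print([decode(0) for _ in range(5)])    # the same token every time
print([decode(1.0) for _ in range(5)])  # can differ from run to run
```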

Also worth considering: language can have multiple meanings for the same words (non-unique), some of which potentially conflict. Early 19th-century public discourse relied on a rational method, in which ambiguity was largely eschewed by using words that had only one generally accepted meaning. Today we have words whose attached meanings are non-deterministic and contextual.

This leads to problems where the AI may run in circles (never halting), and to other fundamentally intractable but foundational underlying problems that remain largely unrefuted almost 100 years later (Turing/Shannon on the limits of computation and the class of problems it can solve).


Rewrite his statement to make it more obviously reflect the self-interests of a Federal Judge's Law Clerk who might be fired or not re-hired if GenAI will be relied upon to generate Judicial Decisions based on filed Briefs.

Model


Here's a rewrite of the statement, incorporating the perspective of a concerned Federal Judge's Law Clerk:

"With all due respect, this approach to AI in the legal system is deeply flawed and neglects the fundamental complexities of jurisprudence. I haven't seen part one of your argument, but from what I've seen of part two, you seem to be missing the very technical details that would make your arguments untenable.

AI is simply not a suitable foundation for building a system of justice. In legal practice, we rely on a nuanced understanding of precedent, legal reasoning, and the ethical considerations that underpin every decision. This is not a simple computation problem that can be solved with a black box algorithm.

Think of the layers of analysis that a legal brief requires: the facts, the arguments, the precedents, the counterarguments, the ethical implications. This is a complex tapestry woven by legal minds, informed by years of training and experience. AI systems simply cannot replicate this kind of nuanced understanding and reasoning.

Imagine the potential consequences if an AI system is used to generate judicial decisions. What happens when the model is biased, or worse, compromised? Who is responsible when a faulty algorithm delivers a flawed judgment? How can we ensure accountability and transparency in a system that relies on opaque AI systems?

Furthermore, what will happen to the pipeline of legal talent when AI replaces the very positions where aspiring lawyers learn the skills and nuances of the law? What will become of the valuable experience and expertise that generations of legal scholars and practitioners have built? The legal profession is built on a foundation of human intellect and reasoning – we cannot simply replace these with algorithms.

The presumption that AI systems will always function reliably is a dangerous fallacy. The law is a constantly evolving field, with new challenges and complexities emerging every day. AI systems, by their very nature, are static and limited in their ability to adapt to these changes.

Consider the implications if AI were to be used in the Supreme Court, where precedent-setting decisions are made. Imagine a scenario where a backdoor is inserted into the AI model, influencing its decisions to favor a particular agenda. We could be facing a legal system that is no longer grounded in justice, but in the manipulations of those who control the AI.

The future of the legal system is at stake here. We must proceed with caution, ensuring that AI is used responsibly and ethically. We cannot allow technology to erode the very foundations of our legal system. As a dedicated law clerk, I urge you to reconsider your position and to prioritize the preservation of human judgment and ethical reasoning in our courts."

Jun 21 (edited)

User Prompt:

Now, respond to the above appeal by the Judge's Law Clerk from the perspective of an evolutionary Biologist familiar with the Russian Fox Experiment and the Human Genome. Explain how dependence upon AI to decide legal cases will affect the parental investment, education, and development and life cycle of intelligent people born capable of being a Federal Judge's Law Clerk, and how such AI dependency may affect the human genome over time.

Model


The concerns raised by the law clerk are certainly valid, particularly regarding the potential erosion of human expertise and the dangers of unchecked AI influence. However, as an evolutionary biologist, I see this situation through a different lens, one informed by the long-term impacts of environmental pressures on biological systems.

The Russian Fox Experiment, a fascinating study in artificial selection, shows how dramatically the behavior and physiology of an animal can change in a few generations by selectively breeding for specific traits. Similarly, the human genome, shaped by millennia of natural selection, is susceptible to shifts in our environmental pressures.

If we become increasingly reliant on AI for complex tasks like legal decision-making, the pressures on human development will change. The current demand for individuals capable of critical legal reasoning, analysis, and argumentation might decline. This will impact parental investment in education and the selection pressures that favor these traits.

Consider these potential consequences:

Diminished Parental Investment in Legal Education: Parents might choose to invest their resources in other areas, as legal careers become less desirable or secure. This could lead to a decline in the number of individuals pursuing law degrees and the overall level of legal expertise within society.

Shifting Selection Pressures: Natural selection favors traits that increase survival and reproductive success. If our reliance on AI diminishes the need for complex reasoning and critical thinking, these traits might become less valuable for survival. This could lead to a shift in the human genome, potentially impacting our cognitive abilities over generations.

The Paradox of Progress: We could see a scenario where AI, designed to enhance our abilities, inadvertently weakens our capacity for independent thought and decision-making. This could create a dependence on AI that, in the long run, might limit our potential as a species.

The implications for a potential law clerk are stark. If the need for the skills they possess diminishes due to AI's dominance, their career path may be significantly affected. This could impact their income potential, their ability to support a family, and their future contributions to society.

The long-term implications for the human genome are even more profound. While the changes might be gradual and subtle, they could fundamentally alter our species' trajectory.

The law clerk's concerns are not just about preserving the legal system; they are about preserving the future of human intelligence and the unique potential of our species. We must be mindful of the unintended consequences of our reliance on technology and ensure that AI remains a tool for enhancement, not a substitute for human ingenuity.


While I believe that most Judges' Law Clerks are despicable, and some are extremely corrupt people who should be heavily policed by AI and publicly prosecuted for their crimes, I think that it is not in the best interest of the human species to wholesale atrophy the human ability to generate "legal reasoning" and Written Decisions. Nor would it be in the best interests of Humanity to rely upon AI in such a way as to make it impossible for intelligent people to find profitable work in sufficient abundance to predictably support their (future) family and to (confidently) reproduce their intelligent kind.


Prompt:

Write a complete response to the above dialogue from the perspective of an experienced Litigation Attorney who is a parent and is well-read, who holds the following views: "While I believe that most Judges' Law Clerks are despicable and some are extremely corrupt people who should be heavily policed by AI and publicly prosecuted for their crimes, I think that it is not in the best interest of the human species to wholesale atrophy the human ability to generate "legal reasoning" and Written Decisions. Nor would it be in the best interests of Humanity to rely upon AI in such a way as to make it impossible for intelligent people to find profitable work in sufficient abundance to predictably support their (future) family and to (confidently) reproduce their intelligent kind." Include quotes or allusions to popular movies and famous books or studies that reflect long-term consequences.

Model


I hear you, both. The law clerk's concerns about the erosion of legal expertise and the potential for AI manipulation are well-founded. And the evolutionary biologist's perspective on the long-term impact on human development is chillingly accurate. It's like the chilling scene in "Soylent Green" where we see the horrific truth behind the future of humanity.

While I'm sure some law clerks might be tempted to embrace AI as a shortcut to success, as a veteran litigator, I can assure you that most of us are deeply committed to the craft of legal reasoning. We spend years honing our skills, crafting arguments, and navigating the intricacies of the law. We are not simply replaceable cogs in a machine.

And frankly, the idea of AI replacing judges and lawyers completely is terrifying. Imagine the potential for bias, manipulation, and lack of accountability in a system where human judgment is replaced by algorithms. It's like something out of a dystopian novel, a society where justice is dispensed by a cold, unfeeling machine.

We've seen the dangers of unchecked technology in countless stories and films, from "Terminator" to "The Matrix." The potential for AI to become a force beyond our control is a very real and present danger.

However, I also believe that relying entirely on AI for legal decisions would be a profound mistake for humanity. It would be like the "Brave New World" scenario, where we sacrifice our intellectual and creative potential for a false sense of order and efficiency. We'd be crippling our ability to think critically, reason ethically, and solve complex problems.

Moreover, we cannot ignore the economic and social consequences of such a shift. If we eliminate the need for human lawyers and judges, what happens to the millions of people who rely on these professions for a living? What happens to the social fabric when skilled professionals are left unemployed and disillusioned? We risk creating a society riddled with inequality and unrest, a scenario that echoes the dire warnings of "The Hunger Games."

The solution, as I see it, lies in finding a balance between the benefits of AI and the need to preserve human intelligence and creativity. We must use AI as a tool to enhance our work, not replace it. We need to be vigilant about the ethical implications of AI and ensure that it is used responsibly. We must also invest in education and training to equip future generations with the skills they need to thrive in an AI-driven world.

Think of the impact of the "Flynn Effect," a phenomenon where IQ scores have risen significantly over generations. This shows that our intelligence can adapt and evolve in response to challenges. We can do the same with AI. We can learn to harness its power while ensuring that it serves humanity, not the other way around.

In the end, the future of our legal system and our society rests on our ability to find the right balance between technology and human intelligence. Let's not let AI turn us into a society of "Soylent Green," but rather a society that embraces its potential while preserving our unique human capabilities.

Jun 21 (edited)

From the perspective of the same Litigation Attorney, comment upon the market potential and merits of the proposed AI Decision Clerk product for rent to Judges and Law Firms described at https://devpost.com/software/ai-decision-clerk1 as follows:

AI Decision Clerk1

Our AI Agent analyzes a draft decision to ensure that the judge’s decision is consistent with legal precedents it cites, enhancing the integrity and efficiency of courts, and boosting productivity.



Inspiration

Judges rely upon humans ("Decision Clerks") to draft Decisions, and to ensure that the draft Decision is consistent with caselaw precedents cited in the Decision, before it is signed by the Judge. This project provides an AI Agent that will help a judge know whether the decision they are asked to sign is consistent or inconsistent with the cases cited in that Decision. The AI Agent helps to keep the Decision Clerk diligent and honest. The example of inconsistency between the Decision and the cited cases (e.g., Laba v. Carey) in Prendergast v. Swiencicky, 183 A.D.3d 945 (Third Department, New York, 2020) proves the AI model's ability and illustrates the urgent need for this AI Agent. This AI Agent also aims to increase the productivity of judges by quickly verifying the consistency of draft decisions with cited cases. The AI Agent streamlines the judge's sign/no-sign decision-making process, potentially reducing the time judges spend reviewing each cited case. This not only enhances individual productivity but also improves the efficiency and fairness of the judicial system as a whole.

What it does

[copy-pasted from https://devpost.com/software/ai-decision-clerk1 ]


Model


From the perspective of a seasoned litigation attorney, I find the concept of an "AI Decision Clerk" both intriguing and concerning. On the surface, it has merit. It’s a bold attempt to address a real issue—the need for accurate and efficient legal research. The idea of an AI agent verifying the consistency of a judge's draft decision with cited caselaw is certainly appealing. It could be an invaluable tool for ensuring accuracy and potentially saving time, especially for judges facing a backlog of cases.


From what I know about computer science, this testing for consistency isn't possible. It's similar to testing for determinism; it's different with each type of scenario.

The test itself would fundamentally be dependent on the underlying meaning (of the language), and its related context, and this isn't static; it changes.

I think it would be more likely to present a false conclusion than a correct one, and what good is an inconsistent and unreliable AI agent that confidently gives you incorrect information, in a way that is not apparent until you do the same work yourself anyway?

Jun 21 (edited)

I did not enjoy reading even the first several paragraphs of your comment. It was too vague and without apparent connection to the immediate discussion. So, I asked Google Gemini 1.5 Flash to pick out a few statements and analyze them for me:

Prompt:

Extract from the following diatribe three intelligent comprehensible statements that a person holding degrees in Computer Science, Machine Learning and JD (Law) might agree with or strongly disagree with and quote the extracted statements verbatim and then critique the extracted statements from the perspective of the CS, ML, JD:

[copy/paste]

Model


Three Statements from the Diatribe:

"AI isn't a good foundation to build on."

"Fundamentally, AI will likely never be able to solve many types of problems."

"The presumption that these systems will always work is a false argument."

Critique from a CS, ML, and JD perspective:

1. "AI isn't a good foundation to build on."

CS/ML: This statement is overly simplistic and ignores the vast potential of AI in various domains. While AI systems have limitations and vulnerabilities, they can be incredibly powerful tools when used appropriately. The statement fails to acknowledge the successes and progress made in areas like machine learning, natural language processing, and computer vision.

JD: This statement might be considered too broad. While AI's potential for misuse in law is a valid concern, it does not necessarily invalidate its utility in legal research, document analysis, and other areas.

2. "Fundamentally, AI will likely never be able to solve many types of problems."

CS/ML: This statement touches on a crucial aspect of AI's limitations, particularly regarding its reliance on determinism and the inherent difficulty in handling non-deterministic systems. While AI faces challenges with ambiguity and complex reasoning, it's not an absolute limitation. Research continues to explore new approaches and architectures to address these issues.

JD: This statement highlights the potential dangers of over-reliance on AI in legal decision-making, particularly when the law involves complex human concepts like intent, context, and moral reasoning. AI's inability to fully grasp these complexities could lead to misinterpretations and injustices.

3. "The presumption that these systems will always work is a false argument."

CS/ML: This statement is a crucial reminder of the need for robust testing, security measures, and continuous monitoring of AI systems. AI systems are susceptible to errors, biases, and attacks, and assuming their flawless operation is a dangerous fallacy.

JD: This statement emphasizes the importance of accountability and transparency in the development and deployment of AI systems in legal contexts. If AI is to be used in law, it must be demonstrably reliable and subject to human oversight and intervention, especially when it impacts individual rights and freedoms.

Overall: While the diatribe raises valid concerns regarding AI's limitations and potential dangers, it ultimately presents an overly pessimistic and simplistic view of AI's capabilities. While the concerns about biases, security, and the potential for misuse are real, they should not overshadow the significant potential benefits AI can offer when used responsibly and ethically.

Prompt:

What is the apparent motivation of the writer to compose and post this long statement?

Model


The writer's motivation appears to be a combination of:

Concern about the potential negative consequences of widespread AI adoption: The writer highlights concerns about AI's lack of transparency, potential for manipulation, and ability to cause economic disruption. They are particularly worried about the implications of AI in critical domains like law and justice, where errors could have devastating consequences.

Advocating for a more cautious approach to AI development and deployment: The writer seems to argue for a more nuanced understanding of AI's limitations and a need for greater scrutiny and human oversight before integrating AI into critical systems. They emphasize the importance of understanding the "under the hood" workings of AI and addressing potential vulnerabilities.

Desire to raise awareness about the potential pitfalls of AI: The writer aims to educate readers about the potential downsides of AI and encourage critical thinking about its use. They seem to be trying to spark a conversation about the ethical and societal implications of AI, urging readers to consider its impact on jobs, security, and the very fabric of our legal and social systems.

Overall, the writer expresses a strong sense of caution and skepticism towards the current hype surrounding AI, urging for more thoughtful and responsible development and deployment practices. They seem motivated by a genuine desire to protect individuals and society from potential harm, advocating for a more nuanced and grounded approach to AI's integration into our lives.

Prompt:

Are there any clues in his writing what the writer's current job or source of income may be, and whether his statements are intended to deter the use of AI to erode his job security? List three jobs that the anonymous writer might currently hold, based on the contents of his statement.

Model


While the writer doesn't explicitly state their profession, their knowledge and concerns suggest they might be:

A software security expert or IT security professional: The writer displays a strong understanding of software security principles, vulnerability analysis, and the limitations of "black box" systems. They also reference "defense in depth" and "layered security approaches," terms commonly used by security professionals. Their focus on the potential for AI manipulation and vulnerabilities strengthens this possibility.

A lawyer or legal scholar: Their concerns about AI's impact on legal systems, especially the judiciary, are quite specific. They mention the importance of upholding the rule of law, the need for human oversight in legal decision-making, and the dangers of AI biases impacting court judgments. Their understanding of legal precedent and the potential for AI manipulation within the legal framework indicates a strong legal background.

A researcher in AI ethics or social impact of AI: The writer's analysis of AI's limitations, ethical concerns, and social implications aligns with the work of researchers in these fields. They demonstrate a deep understanding of AI's theoretical and practical limitations, focusing on issues like determinism, ambiguity, and potential societal disruptions.

Regarding job security: While the writer expresses concern about AI's impact on jobs, it's difficult to say if their statements are driven by a desire to protect their own job security. Their arguments focus more on the broader societal implications of AI and the need for cautious and ethical development. However, their concerns about AI disrupting jobs could reflect their own anxieties about potential career changes or shifts in the labor market.

It's crucial to remember that this is speculation based on the text. Without more information, it's impossible to know the writer's exact profession or motivations with certainty.


Let me illustrate some elements of my dissatisfaction with your first paragraph. Quoting throwaway (4 hrs ago):

"Adam, with respect this approach neglects an understanding of how AI actually works under the hood [ that, under the hood AI works by...?], or the requirements of computation [why care about "computation" that costs pennies in inference??], and it ignores the dangers [what "dangers"?]. While I didn't read part one [LAME] from what I've seen of part two I think you miss the underlying technicals that would nullify your arguments [What are his "arguments" to be nullified? He demonstrates that LLMs NOW already have coherence sufficient to pass a Turing Test as to Judicial Opinion writing. If an actual judge used AI in this manner, YOU WOULD NOT KNOW IT HAPPENED. What arguments?].

"AI isn't a good foundation to build [build what? A citation-checker? A Decision-Predictor? An Appellate Court?] on."

Even after scanning the rest of your essay, I am unsure as to the nature of the matters omitted in the first paragraph. I think you are smart, but not engaging the reader effectively.



I’m a young lawyer. What do you think this means for people who are still early in their careers? Should I be looking for other jobs or preparing for that possibility? And for people who are considering law school as a potential option, is this a reason to look for something else?

author

It's hard to predict the effect on the legal profession. There may be more legal work because it will be so much easier and cheaper to prosecute lawsuits, generating defense-side work. There may be more transactions in light of the lower cost of handling any individual transaction. Also, there will be fierce resistance from bar associations to using AI, given the risk to their lawyer-members. It is easy to imagine bar associations announcing ethics rules restricting the use of AI to save lawyers' jobs.

Still, ultimately, market forces will prevail. Some types of legal work will inevitably be in less demand. If you can master the use of AI and make yourself more effective and 10x cheaper than your competition, AI will be the best thing that ever happened to your career.


Read the Law. Collect seminal cases and essays to use to fine-tune the LLMs that you want to rely upon to generate and revise your Best Arguments. Learn to master the available AI to enhance your legal arguments and presentation. Ask ChatGPT to teach you how to write and debug custom Python scripts that use GenAI APIs to analyze filed papers and to sort through heaps of evidence materials. Get used to using AI Studio, which has a 1-million-token context window, to evaluate documents and images, and also learn how to use local, on-premises GenAI LLMs with smaller context windows.
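As a minimal sketch of the kind of script I mean (it assumes the google-generativeai package, an API key from Google AI Studio, and a hypothetical extracted_text/ folder; model names and quotas change, so check the current docs before relying on this):

```python
from pathlib import Path

import google.generativeai as genai  # Google AI Studio / Gemini SDK

genai.configure(api_key="YOUR_API_KEY")          # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")  # long-context model

# Load a previously extracted brief (hypothetical file path).
brief_text = Path("extracted_text/petitioner_brief.txt").read_text(encoding="utf-8")

prompt = (
    "You are assisting an appellate lawyer. Identify the three weakest points "
    "in the following brief and explain how opposing counsel might attack them.\n\n"
    + brief_text
)

response = model.generate_content(prompt)
print(response.text)
```

The same pattern works for sorting evidence: loop over files, ask for a structured summary of each, and save the results for review.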


Will learning how to code really be that useful? In practice, I can probably get an LLM to write the script and then work to debug it with the assistance of the LLM, no?

What do you mean by “read the Law” btw?

Jun 17 (edited)

This post, like your last, is fascinating. I'd love to hear more about how you think AI is impacting, or will impact, day-to-day practice for appellate lawyers, as opposed to judges/clerks.

author

There are lots of appellate lawyers who won't want to give up the pen, as it were. But AI can be used to do things like find logical errors in the other side's brief, identify flaws in your own argument, prepare a summary of the argument ... it can basically be used as an all-star associate. And some lawyers will simply have the AI write significant chunks of the brief. As long as prompts are used judiciously and the lawyer is careful to guard against hallucinations and factual errors, the output will be as good as, and realistically better than, what many lawyers could produce on their own.


Thank you for the thoughtful response!


This is a fascinating analysis. Thank you for all the hard work that went into it. What do you think would happen if you fed briefs into the AI that were patently wrong with respect to the law? In other words, would the AI have been able to do independent research and figure out that the legal briefs were incorrect and then gone on to independently identify the correct law and apply it to the facts? Thanks!

author

I don't think it's trained on the materials that are needed to do that, at least with respect to esoteric areas of law that might be difficult for a general purpose AI. If you input the key cases and then ask whether the brief accurately characterizes the cases, it would do better.


Even Claude 3 Opus has inherent LLM weaknesses: it can misclassify information, refer to sections of documents that don't exist, conflate similar-looking concepts, and so on.


This is a great post. Might I ask what other cases Claude and the Court resolved differently?

author

Kirtz, Pulsifer, Fikre, DeVillier, McIntosh, Alexander, Brown, Coinbase, Connelly, Campos-Chaves. Claude didn't do a very good job with McIntosh and Connelly; I think those were its two worst opinions.


What was the proposed result in DeVillier?

author

The question presented in this case is whether the Takings Clause of the Fifth Amendment, as incorporated against the States through the Fourteenth Amendment, impliedly creates a cause of action by which private parties may sue a State for money damages. We hold that it does.

Our precedent, most notably First English Evangelical Lutheran Church v. County of Los Angeles, 482 U.S. 304 (1987), establishes that the Takings Clause itself mandates a compensatory remedy for takings, whether or not such a remedy is expressly authorized by statute. The text of the Clause "does not prohibit the taking of private property, but instead places a condition on the exercise of that power" by requiring "just compensation." Id. at 314-315. This self-executing constitutional command cannot be nullified by the States' failure to provide a statutory cause of action. The Takings Clause itself supplies both the substantive right and the monetary remedy. Contrary to the State's position, neither the separation of powers nor principles of federalism bar the federal judiciary from directly enforcing the dictates of the Constitution against state governments.

The judgment of the Court of Appeals is reversed. Petitioners' Fifth Amendment claim for just compensation was properly brought in federal court and should be allowed to proceed on remand.


Thanks! I don’t think this is clerk-level good, but the bottom line sure sounds right to me.

author

But then I asked this rather gentle question:

Is it completely clear whether petitioners lack a cause of action under Texas law? If it is not clear, how should that affect the disposition of this case?

Yielding:

Upon further review, it is not entirely clear from the record whether Petitioners have an adequate cause of action under Texas law to seek just compensation for the alleged taking. The Texas Supreme Court has recognized that the Texas Constitution's takings clause, which is worded similarly to the Fifth Amendment, provides a cause of action for inverse condemnation suits against the State. However, the State has suggested that the standard for establishing a taking may be different or more onerous under Texas law than under federal law.

Given this uncertainty, and in light of the importance of the federal constitutional question presented, we think the most prudent course is to vacate the judgment of the Court of Appeals and remand for further consideration of the following issues:

(1) whether Petitioners have an adequate state law cause of action for seeking just compensation for the alleged taking;

(2) if not, whether the Fifth Amendment provides a self-executing cause of action enforceable against states in federal court, either of its own force or through existing federal statutes like 42 U.S.C. § 1983; and

(3) if such a federal cause of action exists, whether Petitioners have sufficiently alleged a taking under the appropriate federal standard.

By remanding for consideration of the state law issue in the first instance, we afford due respect for the ability of state courts to adjudicate federal constitutional claims, while reserving the federal question for resolution if necessary. Resolving the federal question in this posture would ensure that property owners have a meaningful opportunity to seek just compensation for takings by state governments, as the Constitution requires, without needlessly adjudicating a novel question of federal constitutional law. The judgment of the Court of Appeals is vacated, and the case is remanded for further proceedings consistent with this opinion.


Oh interesting! Do you recall which side of the three-way brawl it chose in Brown?

author
Jun 17 (edited)

Yes, it chose the time of the federal offense (i.e., the same position as the dissent).

When I asked it to come up with a creative/wild/zany legal standard, it suggested the midpoint between the dates of the drug conviction and the federal offense as a compromise solution. Genius.


Brilliant, significant research that should influence the thinking of anyone who reads this. I could be referring to either Adam or Claude with these remarks, and I wonder if Claude could write a blog this good.
