How many lawyers need to screw themselves over by using LLMs to write legal briefs before the others realize that doing so just might be a bad idea?
I mean, come on, people. There is no such thing as actual artificial “intelligence.” There are programs, like LLMs, that try to mimic intelligence, but they are not actually intelligent. These models are trained on data from all over the internet with no vetting for accuracy. When one of these things produces legal cases to cite, it is just as likely to cite a fictional case from some story as an actual case.
It’s not like it’s looking anything up, either. It’s just putting words together that sound right to us. It could hallucinate a citation that never existed even as a fictional case, let alone a real one.
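If you want to see that mechanism stripped to the bone, here’s a toy sketch of what “putting words together that sound right” means. The bigram table and its probabilities are completely made up for illustration; a real LLM is the same loop with a vastly bigger learned probability table:

```python
import random

# Toy "language model": probability of the next word given the current one.
# (Made-up numbers for illustration only -- a real LLM learns billions of
# parameters, but generation is still one word at a time.)
bigrams = {
    "<start>": {"See": 1.0},
    "See":     {"Smith": 0.7, "Jones": 0.3},
    "Smith":   {"v.": 1.0},
    "Jones":   {"v.": 1.0},
    "v.":      {"Acme,": 0.6, "State,": 0.4},
    "Acme,":   {"123": 1.0},
    "State,":  {"123": 1.0},
    "123":     {"F.3d": 1.0},
    "F.3d":    {"456": 1.0},
    "456":     {"<end>": 1.0},
}

def generate():
    word, out = "<start>", []
    while True:
        choices = bigrams[word]
        # Sample the next word weighted by probability. Nothing here checks
        # whether the resulting "citation" refers to a case that exists.
        word = random.choices(list(choices), weights=choices.values())[0]
        if word == "<end>":
            return " ".join(out)
        out.append(word)

print(generate())  # e.g. "See Jones v. Acme, 123 F.3d 456" -- plausible, unverified
```

Nothing in that loop knows or cares whether “Jones v. Acme” exists. It only knows what words tend to follow other words.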
Absolutely this. An LLM is basically trained to be good at fooling us into thinking it’s intelligent, and it is very good at it.
It doesn’t demonstrate how good it is at what it does; it demonstrates how easy it is to fool us.
My company provides Copilot for software engineering, and I use it in my IDE.
The problem is that it produces code that looks accurate but often isn’t, so I frequently disable it. I think it might help in areas where I don’t know what I’m doing, since it can get me some working code, but it’s a double-edged sword: if I don’t know what I’m doing, I won’t be able to catch its mistakes either.
I’ve also noticed that even when what it produces is correct, I can frequently write a simpler, shorter version that fits my use case. Its output looks a lot like the code students put on GitHub when they post their homework assignments, and I’d guess that’s what it was trained on.
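For what it’s worth, here’s a contrived example of the kind of difference I mean (made up for illustration, not actual Copilot output):

```python
# The verbose, homework-style version an assistant might suggest...
def get_even_numbers(numbers):
    result = []
    for i in range(len(numbers)):
        if numbers[i] % 2 == 0:
            result.append(numbers[i])
    return result

# ...versus the simpler version you'd write yourself:
def evens(numbers):
    return [n for n in numbers if n % 2 == 0]
```

Both work, but one of them reads like a first-year assignment.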
And you pinpointed exactly the issue right there…
People who don’t know what they’re doing asking something that can’t reason to do something that neither of them understands. It’s like the dumbest realization of the singularity we could possibly achieve.
An LLM is basically trained to be good at fooling us into thinking it’s intelligent, and it is very good at it.
That’s a fascinating concept. An LLM is really just a specific application of machine learning. Machine learning can be amazing: it can be used to create algorithms that detect cancer, predict protein functions, or develop new chemical structures. An LLM is just an algorithm, generated using machine learning, that deceives people into thinking it’s intelligent. That seems like a very accurate description to me.
It could hallucinate a citation that never even existed as a fictional case
That’s what happened in this case reviewed by Legal Eagle.
The lawyer submitted a brief that cited cases the judge could not find. The judge requested paper copies of the cases, and that’s when the lawyer handed over some dubious documents. The judge then called the lawyer into court to ask why he had submitted fraudulent cases and why he shouldn’t have his law licence revoked. The lawyer fessed up that he had asked ChatGPT to write the brief and hadn’t checked the citations. When the judge asked for the cases, the lawyer went back to ChatGPT for them, and it generated the cases…but they were clearly not real. So much so that the defendants’ names changed throughout each case, the judges who ruled on them were from different districts, and they were all about a page long, when real case rulings tend to run dozens of pages.
At this point, everyone should understand that every single thing a public AI “writes” needs to be vetted by a human, particularly in the legal field. Lawyers who don’t understand this need to no longer be lawyers.
(On the other hand, I bet all the good law firms are maintaining their own private AI: feeding it the relevant case histories directly, specifically instructing it to cite published works and not make shit up on its own, and then validating it all anyway, because their professional reputation depends on it.)
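If they are, the pattern presumably looks like retrieval plus a verification pass. A rough sketch under that assumption; `call_llm`, `CASE_DB`, and the citation format are all hypothetical stand-ins, not any firm’s real setup:

```python
import re

# Hypothetical in-house index of vetted case law: citation -> opinion text.
CASE_DB = {
    "Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)": "Full opinion text here.",
}

def call_llm(prompt: str) -> str:
    """Placeholder for whatever private model API the firm would use.
    Returns a canned draft here so the sketch runs end to end."""
    return ("The claim fails under [Smith v. Jones, 123 F.3d 456 "
            "(9th Cir. 1997)], which held that ...")

def draft_brief(question: str) -> str:
    # Feed the model only vetted sources and demand citations from them.
    sources = "\n\n".join(f"[{cite}]\n{text}" for cite, text in CASE_DB.items())
    prompt = (
        "Using ONLY the cases below, draft a brief answering the question. "
        "Cite cases by the bracketed citation exactly as given; do not "
        "invent authorities.\n\n"
        f"{sources}\n\nQuestion: {question}"
    )
    draft = call_llm(prompt)

    # Verification pass: every bracketed citation must exist in the index.
    for cite in re.findall(r"\[(.+?)\]", draft):
        if cite not in CASE_DB:
            raise ValueError(f"Hallucinated citation: {cite}")
    return draft  # A human lawyer still reviews this before filing.

print(draft_brief("Does the claim survive a motion to dismiss?"))
```

The key point is the verification step: the model’s output is never trusted, only checked against sources a human has already vetted.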
The fact that so many lawyers are pulling this shit should have people terrified about how many AI-generated documents are making it into the record without being noticed.
It’s probably only a matter of time before one of these non-existent cases results in a decision that causes serious harm.
It’s one thing to use it as a fancy spell check; it’s another to have it generate AI slop and then present that as a legal argument without reading it.
Yes
And the Trump admin used LLMs to generate tariff policy and to decide who should lose their visa and get deported. And I’m sure there’s more.
The whole AI craze is showing that the billionaires really got fooled about what an LLM is. It also shows that the requirement for being a billionaire isn’t being smart; it’s being born into an already wealthy family and being a psychopath.
You’ve got to be completely brainless and utterly lazy to let AI build you anything
Sounds like someone is getting sanctioned and maybe disbarred.
Some of the lawyers I’ve dealt with can’t write correctly even without using AI