Georgia had a problem where a lawyer submitted documents with fictitious case citations. https://youtu.be/6RBQrcp0Lrg
Perhaps the way out is a low tolerance for lazy, sloppy malpractice.
We're already that much closer to where a real ruling will include fictitious citations. Perhaps the LexisNexis and Westlaws of the world need to promulgate more toolbars and plugins to automatically check citations in documents for validity.
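As a rough illustration of what such a checker might look like, here is a minimal Python sketch: it pulls reporter-style citations out of a filing with a regex and flags any that can't be found in a reference set. The regex and the `known_citations` set are stand-ins invented for illustration; a real plugin would parse full Bluebook citations and query an authoritative database (Westlaw, LexisNexis, CourtListener, etc.), whose APIs aren't assumed here.

```python
import re

# Matches reporter-style citations such as "123 U.S. 456" or "594 F. Supp. 2d 789".
# Real Bluebook citations are far more varied (pin cites, parallel citations,
# dozens of reporters), so a production plugin would need a proper citation parser.
CITATION_RE = re.compile(
    r"\b(\d{1,4})\s+(U\.S\.|S\. Ct\.|F\.[23]d|F\. Supp\.(?: 2d| 3d)?)\s+(\d{1,4})"
)

def extract_citations(text: str) -> list[str]:
    """Pull candidate case citations out of a filing's text."""
    return [" ".join(m.groups()) for m in CITATION_RE.finditer(text)]

def citation_exists(citation: str, known_citations: set[str]) -> bool:
    """Check a citation against a reference set.

    `known_citations` is a stand-in for a lookup against a real legal database;
    no particular vendor API is assumed here.
    """
    return citation in known_citations

if __name__ == "__main__":
    brief = (
        "Plaintiff relies on Smith v. Jones, 123 U.S. 456 (2038), "
        "and Roe v. Wade, 410 U.S. 113 (1973)."
    )
    # Toy reference set; a real checker would query an authoritative source instead.
    known = {"410 U.S. 113"}
    for cite in extract_citations(brief):
        status = "found" if citation_exists(cite, known) else "NOT FOUND - verify by hand"
        print(f"{cite}: {status}")
```

Even a crude pass like this would flag the hallucinated "123 U.S. 456" while letting the real citation through, which is exactly the kind of safety net a toolbar or plugin could provide.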
I am not a lawyer, but I have picked up a little bit of knowledge of US legal procedures over the years, so let me try to explain this a little for anyone who hasn't read US legal documents before. There is a lengthy set of rules for how lawsuits have to be conducted, called the Federal Rules of Civil Procedure. One of them, rule 11, basically says "Anything you file with the court should be supported by existing law or should have a reasonable argument for why existing law should be modified." This includes citing cases: if you cite a case in your argument, your citation must be correct, and must accurately summarize the case.
As everyone who deals with LLMs should know by now, they can be prone to "hallucinate", or make things up, under certain circumstances. Citations seem especially prone to hallucinations, probably because the text the LLM was trained on has relatively few citations so its "knowledge" base of citations is relatively poor. Not very many Reddit articles or Facebook posts are citing Smith v. Jones, 123 U.S. 456, 789 (2038), after all. And so if lawyers use an LLM to generate the text of a legal document, it is especially important for them to verify the citations in the generated text. First, to ensure that the cases being cited are real cases that really exist, and second, to double-check that the case they're citing actually advances their argument.
Since more and more lawyers have started using LLMs to help them generate legal documents, courts have decided to treat this as similar to a lawyer asking a legal secretary or a paralegal to draft the document. The legal secretary or paralegal may make mistakes, but if the lawyer signs the document, then that lawyer is the person ultimately responsible for any mistakes: it was his or her responsibility to check the document for errors before signing it.
Here, the lawyers used AI to draft a document, checked it for errors, but didn't catch all of the errors, so the document they submitted to the court contained citations to cases that don't exist. US courts have already established in other cases that citations to cases that don't exist are a violation of rule 11 (because cases that don't exist are NOT existing law, obviously). The lawyers in this case did not argue that point. At the top of page 4 there's an exchange where the judge asks Mr. Kachouroff (one of the lawyers involved), "And did you double-check any of these citations once it was run through artificial intelligence?" Mr. Kachouroff replies, "Your Honor, I personally did not check it. I am responsible for it not being checked." He does not try to claim that it wasn't his job to check the document, he admits that it was his job and he failed to do it.
The rest of the document involves the argument by Mr. Kachouroff that he and his colleague (Ms. DeMaster) accidentally submitted the wrong file to the court, submitting the draft instead of the version with the errors corrected. The judge didn't buy their argument, for various reasons, and she fined them $3,000 each, which is similar to what lawyers have been fined in other cases of citing nonexistent cases.
Short version: lawyers who submit legal documents are supposed to check that they're correct. Whether they were created by AI, a legal secretary or paralegal, or a law student interning with the law firm, the lawyer who signed the document is responsible for any mistakes in it. In this case, the lawyers submitted a document full of mistakes, and were fined for not being careful enough and wasting the court's time.
Would the result (a fine of that amount) have been identical had the document been prepared by a paralegal or junior lawyer who, with no use of AI, accidentally left in a "John Doe vs. I Hope I Can Find A Case Like This" citation? (Or however many errors there were in this case.)
i.e., all details the same (the lawyer saying "sorry, we submitted the wrong version," etc.) except that the mistake had been made by a junior person rather than by AI?
I don't know how they actually do it, but I would imagine that an obvious placeholder citation could be treated less severely than a hallucinated citation. In one, every reader is immediately alerted to the error, similar to a typographical or formal error. In the other, the error goes undetected until/unless someone checks.
>Since more and more lawyers have started using LLMs to help them generate legal documents, courts have decided to treat this as similar to a lawyer asking a legal secretary or a paralegal to draft the document.
Since this was apparently worth a news story, the key thing I'm curious about: has the frequency of fine-worthy errors increased with the use of AI, or are such errors just getting more coverage because AI is in the mix as opposed to legal secretaries?
I am currently involved in a small claims civil action, as the pro se plaintiff.
During my free time, I have attended a few unrelated sessions in my county courthouse... just to see how it's conducted; I also have two attorney brothers (one is an appellate judge) who have expressed that "ProllyInfamous is ranting crazytalk again about LLMs."
It is absolutely incredible to me how little faith I've observed in these situations, e.g. an attorney, unrelated to me, recently responded, "I think you're putting a little bit too much faith in ChatGPT, bruh."
"Everybody, particularly any/all attorneys/judges, should read the SCOTUS end of 2023 report" [0], was my response.
For my own particular case, Perplexity.ai has been absolutely incredible in helping me to formulate my initial complaint, as well as respond and file motions.
tl;dr: LLMs are going to massively help laypeople inundate court proceedings.
[0]: https://www.supremecourt.gov/publicinfo/year-end/2023year-en...
>For those who cannot afford a lawyer, AI can help. It drives new, highly accessible tools that provide answers to basic questions, including where to find templates and court forms, how to fill them out, and where to bring them for presentation to the judge—all without leaving home. These tools have the welcome potential to smooth out any mismatch between available resources and urgent needs in our court system.
>But any use of AI requires caution and humility.
AI discussion starting on page 5 of [0]
Next, on Steve Lehto...