In the rapidly evolving landscape of legal technology, artificial intelligence has emerged as a powerful tool for legal professionals. As with any tool, however, its misuse can lead to serious errors and professional consequences. Recent cases have highlighted the pitfalls of relying too heavily on AI without proper oversight, underscoring the need for caution and diligence in its application.
In the case of Reddy v. Saroya, the appellant’s counsel utilized a large language model to draft the original factum, which included references to seven non-existent cases. This error was uncovered when the respondent’s factum highlighted the absence of hyperlinks or copies of the cited cases. The appellant’s counsel admitted to outsourcing the drafting of the factum to a third party and failing to verify the cases due to time constraints. The legal consequence of this oversight was a significant blow to the credibility of the appellant’s submissions. This case underscores the critical importance of verifying all legal references and the risks associated with outsourcing legal work without adequate oversight. It serves as a cautionary tale about the potential for AI to generate “hallucinated” cases and the necessity for lawyers to maintain rigorous standards of accuracy and diligence in their work.
In R. v. Chand, the defense counsel’s submissions were marred by numerous errors, including fictitious case citations and the inappropriate use of unrelated civil cases as precedents. The court responded by ordering the defense counsel to personally prepare a new set of submissions, explicitly prohibiting the use of generative AI for legal research. This directive highlights the legal consequence of relying on AI-generated content without verification, which can lead to judicial reprimands and the need for re-submission of legal documents. This case emphasizes the necessity of human oversight in legal research and the importance of ensuring the accuracy and relevance of all cited authorities. It serves as a reminder that AI should augment, not replace, the critical judgment and expertise of legal professionals.
In Hussein v. Canada, the applicant’s counsel used generative AI to prepare legal submissions, which resulted in the inclusion of non-existent cases. It took four court directions before the counsel admitted to using AI, leading to a modest cost award against him personally. The legal consequence of this case was a financial penalty and a potential impact on the counsel’s professional reputation. This case illustrates the importance of transparency and accountability in the use of AI in legal practice. It serves as a lesson that failure to disclose the use of AI can lead to sanctions and underscores the need for lawyers to be forthcoming about their methods and tools in legal proceedings.
In Zhang v. Chen, the respondent’s counsel included two non-existent cases in a notice of application, which were later discovered to have been generated by ChatGPT. The court found the conduct to be presumptively negligent but did not impose special costs against the lawyer, as there was no intent to deceive. The legal consequence was a finding of negligence, though without severe financial penalties. This case highlights the necessity for lawyers to be aware of the limitations of AI tools and to verify all legal authorities before submission. It serves as a reminder of the importance of due diligence and the potential professional consequences of failing to thoroughly vet AI-generated content.
In Ko v. Li, the applicant’s counsel relied on an AI-generated factum containing fake cases, resulting in a show cause order for contempt of court. The counsel admitted the error, apologized, and proposed corrective measures, including undertaking professional development courses on the use of AI in legal practice. The legal consequence was a serious reprimand and the potential for contempt charges, which were mitigated by the counsel’s proactive response. This case demonstrates the importance of accountability and the need for legal professionals to understand the risks associated with AI. It underscores the necessity for ongoing education and the implementation of corrective measures to prevent future errors.
One of the most significant errors lawyers make is relying on AI-generated outputs without verification. AI systems, while powerful, are not infallible and may produce outputs that are inaccurate or incomplete. This can lead to the inclusion of non-existent or “hallucinated” case law in legal documents, as seen in several recent cases.
To avoid over-reliance, lawyers should independently verify every AI-generated citation against authoritative sources, confirm that each authority exists and actually supports the proposition for which it is cited, and treat AI output as a starting point rather than a finished product.
Many lawyers are not fully aware of the capabilities and limitations of AI tools. This lack of understanding can result in misuse or over-reliance on AI, leading to errors in legal submissions.
To enhance technological competence, lawyers should learn how generative AI tools actually work, understand their well-documented tendency to fabricate or “hallucinate” authorities, and pursue continuing professional development on the responsible use of AI in legal practice.
Courts are increasingly requiring transparency regarding the use of AI in legal submissions, and failure to disclose AI use could lead to sanctions or adverse inferences. Going forward, non-disclosure could also amount to an ethical violation and undermine the credibility of a lawyer’s work.
To ensure transparency, lawyers should disclose the use of generative AI in preparing submissions where disclosure is required, respond candidly to any court directions about how their materials were prepared, and comply with applicable practice directions governing AI use.
The absence of rigorous review processes can result in the submission of inaccurate or incomplete legal documents. AI is not (yet) infallible.
To review AI-assisted work effectively, lawyers should read every cited authority in full, confirm that quotations and citations are accurate, and check that each case is relevant and actually supports the argument for which it is offered.
Delegating the preparation of legal documents to staff or third parties without proper oversight can lead to errors, especially when AI tools are involved. Lawyers must maintain ultimate responsibility for the content of their submissions and ensure that all delegated work is thoroughly reviewed.
To delegate responsibly, lawyers should supervise any staff or third parties who prepare documents on their behalf, review all delegated work in full before filing, and never allow time constraints to excuse a failure to verify.
The integration of AI into legal practice offers significant benefits, including increased efficiency and the ability to handle large volumes of data. However, the misuse of AI can have serious implications, including the potential for miscarriages of justice and damage to the credibility of the legal profession. Lawyers must exercise caution and diligence when using AI tools, ensuring that they are used as an aid rather than a substitute for professional judgment.
By adopting best practices for AI use, such as verifying AI outputs, enhancing technological competence, and maintaining transparency, lawyers can harness the advantages of AI while minimizing the risks. The legal profession must continue to evolve and adapt to technological advancements, ensuring that the integrity of the justice system is upheld.
The content provided in this blog post is for informational purposes only and does not constitute legal advice. AI was used in the preparation of this article. Readers are advised to consult with a qualified lawyer for advice regarding specific legal issues or concerns. The information herein is not intended to create, and receipt of it does not constitute, a solicitor-client relationship.