3 October 2025

When AI Fails Lawyers (and when lawyers fail with AI)

Artificial intelligence has quickly become a powerful tool for legal professionals. As with any tool, however, its misuse can lead to significant errors and consequences. A string of recent Canadian cases highlights the pitfalls of relying on AI without proper oversight and underscores the need for caution and diligence in its use.

Recent Cases of AI Misuse

Reddy v. Saroya (2025 ABCA 322)

In Reddy v. Saroya, the appellant’s counsel used a large language model to draft the original factum, which included references to seven non-existent cases. The error came to light when the respondent’s factum noted the absence of hyperlinks or copies of the cited cases. The appellant’s counsel admitted to outsourcing the drafting of the factum to a third party and failing to verify the cases due to time constraints, an oversight that dealt a significant blow to the credibility of the appellant’s submissions. The case is a cautionary tale about AI’s potential to generate “hallucinated” cases and the risks of outsourcing legal work without adequate oversight: lawyers must verify every legal reference and maintain rigorous standards of accuracy and diligence in their work.

R. v. Chand (2025 ONCJ 282)

In R. v. Chand, defence counsel’s submissions were marred by numerous errors, including fictitious case citations and the inappropriate use of unrelated civil cases as precedents. The court ordered defence counsel to personally prepare a new set of submissions and explicitly prohibited the use of generative AI for legal research. The directive illustrates a concrete consequence of relying on unverified AI-generated content: judicial reprimand and the re-submission of legal documents. The case emphasizes the necessity of human oversight in legal research and of ensuring the accuracy and relevance of all cited authorities. AI should augment, not replace, the critical judgment and expertise of legal professionals.

Hussein v. Canada (2025 FC 1138)

In Hussein v. Canada, the applicant’s counsel used generative AI to prepare legal submissions, which resulted in the inclusion of non-existent cases. It took four court directions before counsel admitted to using AI, leading to a modest cost award against him personally, along with the attendant damage to his professional reputation. The case illustrates the importance of transparency and accountability in the use of AI in legal practice: failing to disclose the use of AI can itself attract sanctions, and lawyers should be forthcoming about their methods and tools in legal proceedings.

Zhang v. Chen (2024 BCSC 285)

In Zhang v. Chen, the respondent’s counsel included two non-existent cases in a notice of application, which were later discovered to have been generated by ChatGPT. The court found the conduct to be presumptively negligent but did not impose special costs against the lawyer, as there was no intent to deceive. The legal consequence was a finding of negligence, though without severe financial penalties. This case highlights the necessity for lawyers to be aware of the limitations of AI tools and to verify all legal authorities before submission. It serves as a reminder of the importance of due diligence and the potential professional consequences of failing to thoroughly vet AI-generated content.

Ko v. Li (2025 ONSC 2965)

In Ko v. Li, the applicant’s counsel relied on an AI-generated factum containing fake cases, resulting in a show cause order for contempt of court. The counsel admitted the error, apologized, and proposed corrective measures, including undertaking professional development courses on the use of AI in legal practice. The legal consequence was a serious reprimand and the potential for contempt charges, which were mitigated by the counsel’s proactive response. This case demonstrates the importance of accountability and the need for legal professionals to understand the risks associated with AI. It underscores the necessity for ongoing education and the implementation of corrective measures to prevent future errors.

Common Mistakes (How to Avoid Them)

Common Mistake #1: Over-Reliance on AI.

One of the most significant errors lawyers make is relying on AI-generated outputs without verification. AI systems, while powerful, are not infallible and may produce outputs that are inaccurate or incomplete. This can lead to the inclusion of non-existent or “hallucinated” case law in legal documents, as seen in several recent cases.

To avoid over-reliance, lawyers should:

  1. Cross-Verification. Always cross-check AI outputs with primary legal sources such as statutes, case law, and legal commentaries to ensure that the information is not only accurate but also contextually relevant.
  2. Human Judgment. Use AI as a tool to augment, not replace, human judgment, and always apply independent legal expertise to interpret and validate AI-generated material.
  3. Regular Audits. Implement regular audits of AI systems to assess their performance and accuracy, helping to identify any systemic errors or biases in the AI’s outputs.

Common Mistake #2: Lack of Technological Competence.

Many lawyers are not fully aware of the capabilities and limitations of AI tools. This lack of understanding can result in misuse or over-reliance on AI, leading to errors in legal submissions.

To enhance technological competence, lawyers should:

  1. Ongoing Training. Engage in continuous education programs focused on AI and its applications in law through workshops, webinars, and certification courses.
  2. Stay Informed. Keep abreast of the latest developments in AI technology and its implications for the legal field through professional journals, conferences, and online resources.
  3. Collaborate with IT Experts. Work closely with IT professionals to understand the technical aspects of AI tools and how they can be effectively integrated into legal workflows.

Common Mistake #3: Failure to Disclose AI Use.

Courts are increasingly requiring transparency regarding the use of AI in legal submissions. Failure to disclose AI use can lead to sanctions or adverse inferences and, going forward, may amount to an ethical violation that undermines the credibility of a lawyer’s work.

To ensure transparency, lawyers should:

  1. Explicit Disclosure. Clearly disclose the use of AI in legal documents and submissions and specify the extent to which AI was used in research, drafting, or analysis.
  2. Client Communication. Inform clients about the use of AI in their cases and explain how it benefits their legal matters to foster trust and ensure informed consent.
  3. Ethical Guidelines. Adhere to ethical guidelines and standards set by the relevant legal associations regarding the use of AI in practice.

Common Mistake #4: Inadequate Review of AI Outputs.

The absence of rigorous review processes can result in the submission of inaccurate or incomplete legal documents. AI is not (yet) infallible.

To effectively review AI work, lawyers should:

  1. Multi-tiered Review. Implement a multi-tiered review process where AI outputs are reviewed by multiple legal professionals to help identify and correct errors.
  2. Checklists and Templates. Use checklists and templates to standardize the review process and ensure that all necessary elements are considered.
  3. Feedback Loops. Establish feedback mechanisms to continuously improve AI systems based on review findings, enhancing the accuracy and reliability of AI outputs over time.

Common Mistake #5: Delegation Without Oversight.

Delegating the preparation of legal documents to staff or third parties without proper oversight can lead to errors, especially when AI tools are involved. Lawyers must maintain ultimate responsibility for the content of their submissions and ensure that all delegated work is thoroughly reviewed.

To delegate responsibly, lawyers should:

  1. Supervisory Protocols. Develop clear protocols for supervising delegated tasks that involve AI, ensuring that all outputs are reviewed and approved by a qualified lawyer.
  2. Accountability Measures. Establish accountability measures to track the use of AI in delegated tasks and ensure that all legal content meets professional standards.
  3. Regular Updates. Hold regular team meetings to discuss AI-related tasks and to address any issues or concerns that arise.

Final Thoughts

The integration of AI into legal practice offers significant benefits, including increased efficiency and the ability to handle large volumes of data. However, the misuse of AI can have serious implications, including the potential for miscarriages of justice and damage to the credibility of the legal profession. Lawyers must exercise caution and diligence when using AI tools, ensuring that they are used as an aid rather than a substitute for professional judgment.

By adopting best practices for AI use, such as verifying AI outputs, enhancing technological competence, and maintaining transparency, lawyers can harness the advantages of AI while minimizing the risks. The legal profession must continue to evolve and adapt to technological advancements, ensuring that the integrity of the justice system is upheld.

 

Disclaimer.

The content provided in this blog post is for informational purposes only and does not constitute legal advice. AI was used in the preparation of this article. Readers are advised to consult with a qualified lawyer for advice regarding specific legal issues or concerns. The information herein is not intended to create, and receipt of it does not constitute, a solicitor-client relationship.

#AI #ArtificialIntelligence #LegalTechChallenges #LegalEthics #LegalInnovation

Burgess Law offers legal, strategic, and business advice to clients and is often called upon to act as external general counsel to businesses. Our practice focuses on corporate and commercial work for small and medium-sized businesses, entrepreneurs, and start-ups.

#201 - 728 Spadina Cres. E.
Saskatoon, Saskatchewan
S7K 3H2
e. jeff@burgesslawsk.com
p. 306.518.2244
f. 306.500.9941

© Burgess Law Professional Corporation 2025