Published May 14, 2025
The Baldy Center Blog Post 50.
Blog Authors: Danielle Limbaugh and John Beverley
Blog Title: AI's Role in Law: Assistance, Not Replacement
Prior to the widespread use of calculators, “Computer” was an occupation filled by humans. The accuracy and ease of use of calculators made such occupations moot. There were concerns over whether students would understand mathematics as well as they had prior to the use of calculators. There were not, however, concerns over whether calculators could be trusted.
Image provided by blog post authors.
Trust is not so easy with respect to advances in modern artificial intelligence. This should be no surprise given how simple calculating algorithms are compared to those required for human-level natural language processing. While recent headlines tout AI systems passing bar exams and assisting with legal research, the legal community is rightfully skeptical about AI's role in legal practice. Leveraging admittedly sophisticated chatbots seems in many ways foreign to the creative, trusted, and impactful work done by legal professionals. A question of growing importance is:
How do we promote trust among legal professionals with respect to platforms and tools leveraging modern advances in AI?
This is an uphill battle. Off-the-shelf models, such as those developed under the banners of Anthropic’s Claude or OpenAI’s ChatGPT, are prone to hallucination, bias, and related failures when faced with natural language prompts. An early, infamous example from the legal profession illustrated how not to use such models: a lawyer using ChatGPT to write a legal defense submitted – to his chagrin – a filing that cited fabricated precedent throughout. More recently, a study conducted by Apple concluded that large language model (LLM) reasoning capabilities are significantly undermined by the addition of seemingly relevant but ultimately inconsequential information to a prompt.[1] These remarks point to a familiar refrain: there are reasons to scrutinize the outputs of these models. As Rawia Ashraf, Vice President of Thomson Reuters, explains, LLMs, like any form of AI, “are not pure objective engines of reason.”[2] Biases in training data—whether societal, historical, or structural—inevitably influence AI's outputs.
And while 85% of legal professionals believe AI could apply to their work,[3] the applications they cite include legal research, document review, document summarization, and drafting briefs or contracts,[4] rather than making subtle interpretations of abstract principles such as reasonable doubt, fairness, or justice, which are inherently normative. The interpretive nature of law involves reflection on societal values, ethics, and evolving standards that resist straightforward codification and so fall outside the current scope of AI models. Consider a contemporary AI system capable of analyzing a case and determining that a suspect possesses the means, motive, and opportunity to commit a crime, lacks an alibi, and that the circumstances align with existing legal precedents. While such an AI could provide a comprehensive factual analysis, much as a detective hands over evidence to a court, it would still fall short in performing the inherently normative and social task of passing judgment, which requires moral reasoning and human discretion. To pass judgment is not to conclude an investigation but to deliver discernment based on the conclusion of an investigation, a definitively non-empirical, normative-social task.
There is, however, reason to be hopeful: ontology engineering, a discipline focused on creating structured vocabularies and explicit relationships within data. Techniques from ontology engineering have long been leveraged to promote explainability, because they aim to make explicit the implicit formal structure of data. When AI systems are built on well-structured ontologies, their reasoning becomes more transparent and their outputs easier to trace and scrutinize.
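To make the idea concrete, here is a minimal sketch, assuming Python with the rdflib library, of how an ontology makes the implicit structure of legal data explicit. The class and property names (LegalCase, Opinion, citesPrecedent) are hypothetical illustrations for this post, not drawn from SCALES or any published legal ontology.

```python
# Minimal sketch (assumes rdflib; vocabulary names are hypothetical).
from rdflib import Graph, Literal, Namespace, RDF, RDFS

LEX = Namespace("http://example.org/legal-ontology#")

g = Graph()
g.bind("lex", LEX)

# Declare the vocabulary: classes and the relationships between them.
g.add((LEX.LegalCase, RDF.type, RDFS.Class))
g.add((LEX.Opinion, RDF.type, RDFS.Class))
g.add((LEX.citesPrecedent, RDF.type, RDF.Property))
g.add((LEX.citesPrecedent, RDFS.domain, LEX.Opinion))
g.add((LEX.citesPrecedent, RDFS.range, LEX.LegalCase))

# Assert a few facts using that vocabulary.
g.add((LEX.case_123, RDF.type, LEX.LegalCase))
g.add((LEX.case_123, RDFS.label, Literal("Hypothetical v. Example (2024)")))
g.add((LEX.opinion_456, RDF.type, LEX.Opinion))
g.add((LEX.opinion_456, LEX.citesPrecedent, LEX.case_123))

# Because the structure is explicit, a query can show exactly which
# precedents an opinion relies on; a citation that does not exist in the
# graph simply returns nothing rather than being invented.
results = g.query("""
    PREFIX lex: <http://example.org/legal-ontology#>
    SELECT ?opinion ?case WHERE { ?opinion lex:citesPrecedent ?case . }
""")
for opinion, case in results:
    print(f"{opinion} cites {case}")
```

Because every assertion is a triple over a declared vocabulary, a system built this way can show precisely which statements support a given answer, which is the kind of traceability that a generic chatbot cannot offer.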
Importantly, ontology engineering is not about replacing human judgment. Rather, it is about creating tools that augment human capabilities while maintaining transparency and accountability. For example, the SCALES (Systematic Content Analysis of Litigation Events Open Knowledge Network) project is working to make the federal judiciary’s database more accessible by using ontologies.[5] By creating structured frameworks to navigate legal data, projects like SCALES aim to illuminate the “dark matter” of litigation—the vast majority of cases that never result in published opinions.[6]
As we move forward, the legal profession should embrace ontology engineering as a crucial component of AI implementation. It is not just about making AI systems more accurate; it is about making them more trustworthy and better aligned with the fundamental principles of legal practice. In an era where AI capabilities are rapidly expanding, this approach could help ensure that technological advancement enhances rather than compromises the integrity of legal decision-making.
Footnotes:
[1] Mirzadeh, Iman, et al. “GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models.” arXiv preprint arXiv:2410.05229 (2024).