LARGE LANGUAGE MODELS AND ETHICAL PITFALLS: TESTING THE LEGAL LIMITS OF “ROBOLAWYERING”

A Note by Lee Walter


Over the past several months, advanced machine learning models called “large language models” (LLMs) have led to the creation of a variety of AI-powered legal software services.[1] LegalZoom leverages a simple LLM to interpret user responses to online questionnaires and generate boilerplate forms for estate planning and new business registration.[2] EU-based LegalAi uses the technology to provide prelitigation assessments of lawsuit validity to consumers.[3] And Casetext provides document drafting and review for attorneys.[4] But by far the buzziest and highest profile of these large language models is OpenAI’s ChatGPT (“GPT” is short for “generative pre-trained transformer”). Launched in November 2022, ChatGPT has rapidly become synonymous with LLMs, and many legal tech companies have already integrated ChatGPT into their platforms.[5] Most recently, Casetext announced a contract with Am Law 20 firm DLA Piper to provide a ChatGPT-powered legal large language model it calls “CoCounsel,” which can draft a variety of legal documents and which the company claims “has the potential to save up to 60% of attorneys’ time.”[6]

These developments have excited practitioners and unnerved regulators, as the potential use cases for LLMs in the legal field range from glorified document template fetchers to full-on “robo-lawyers.” Proposed use cases for how LLMs like ChatGPT could interact with the legal field, or indeed how artificial intelligence could interact with society generally, fall into three categories.[7] The terminology of each model reflects the role humans would ultimately play in an ideal end state of human-LLM interaction: 1) human-in-the-loop, 2) human-on-the-loop, and 3) human-out-of-the-loop.[8]


[1] See Adam Zewe, Solving a Machine-Learning Mystery, MIT News (Feb. 7, 2023), https://news.mit.edu/2023/large-language-models-in-context-learning-0207.

[2] Hello, We’re LegalZoom, LegalZoom, https://www.legalzoom.com/about-us (last visited Mar. 18, 2023). 

[3] Case Resolution Platform, LegalAI, https://www.legalai.io/ (last visited Mar. 18, 2023).

[4] The Legal AI You’ve Been Waiting For, Casetext, https://casetext.com/cocounsel/?utm_medium=paidsearch&utm_source=google&utm_campaign=brand-research&utm_content=_&utm_term=casetext (last visited Mar. 18, 2023).

[5] Id.

[6] Top Global Law Firm DLA Piper Announces Addition of CoCounsel to Enhance Practice and Client Services, Casetext (Mar. 23, 2023), https://casetext.com/blog/law-firm-dla-piper-announces-casetext-cocounsel/.

[7] Shana Lynch, AI in the Loop: Humans Must Remain in Charge, Stanford University Human-Centered Artificial Intelligence (Oct. 17, 2022), https://hai.stanford.edu/news/ai-loop-humans-must-remain-charge; Sundar Narayanan, Human-in-the-Loop or on-the-Loop is Not a Silver Bullet. Evaluate Their Effectiveness, Medium (Jan. 3, 2022), https://medium.com/mlearning-ai/human-in-the-loop-or-on-the-loop-is-not-a-silver-bullet-evaluate-their-effectiveness-82f37835d765; Arne Wolfewicz, Human-in-the-Loop in Machine Learning: What Is It and How Does It Work?, Levity AI (Nov. 16, 2022), https://levity.ai/blog/human-in-the-loop; Junzhe Zhang and Elias Bareinboim, Can Humans Be Out of the Loop?, Proceedings of Machine Learning Research, vol. 140:1–22, 2022, https://causalai.net/r64.pdf.

[8] This “loop” language is primarily used in the context of the military. For example, Congress recently considered a bipartisan resolution mandating that humans remain “in-the-loop” in decisions to use the nation’s nuclear weapons. See Elizabeth Elkind, AI Banned from Running Nuclear Missile Systems Under Bipartisan Bill, Fox News (Apr. 28, 2023), https://www.foxnews.com/politics/ai-banned-running-nuclear-missile-systems-under-bipartisan-bill. However, the classifications are equally applicable to any use case. See Shana Lynch, AI in the Loop: Humans Must Remain in Charge, Stan. U. Human-Centered Artificial Intelligence (Oct. 17, 2022), https://hai.stanford.edu/news/ai-loop-humans-must-remain-charge; see also Sundar Narayanan, Human-in-the-Loop or on-the-Loop is Not a Silver Bullet. Evaluate Their Effectiveness, Medium (Jan. 3, 2022), https://medium.com/mlearning-ai/human-in-the-loop-or-on-the-loop-is-not-a-silver-bullet-evaluate-their-effectiveness-82f37835d765; Arne Wolfewicz, Human-in-the-Loop in Machine Learning: What Is It and How Does It Work?, Levity AI (Nov. 16, 2022), https://levity.ai/blog/human-in-the-loop; Junzhe Zhang and Elias Bareinboim, Can Humans Be Out of the Loop?, Proceedings of Machine Learning Rsch., vol. 140:1–22, 2022, https://causalai.net/r64.pdf.