Since its launch last month, ChatGPT, the newest chatbot from artificial intelligence research lab OpenAI, has been touted as “sophisticated” and even “magical.” The chatbot is a major stepping stone in generative AI, and it likely has significant practical uses for legal professionals. But what do lawyers need to know about it before they can harness its potential?
Most attorneys who responded to Bloomberg Law’s most recent Legal Ops & Tech Survey feel that harnessing legal tech is important to meet the demands of their clients. However, advanced AI models such as ChatGPT come with challenges that many attorneys likely don’t know about or haven’t thoroughly considered.
Having a basic understanding of how chatbots work and what kind of ethical and security implications they present is crucial for incorporating the technology into legal practice. But because of ChatGPT’s sophistication (as well as some lurking problems), law firms likely won’t be in a position to implement it in the near future.
Chatbots are AI systems that use natural language processing to understand and to respond to human communication.
The first chatbot, ELIZA, was created at MIT in 1966, and since then chatbots have become a common AI application in everyday business operations. Many companies leverage their flexible structure to route and manage customer service queries or technical support.
Chatbots generally fall into two categories, retrieval-based and generative, with ChatGPT being the latter. Retrieval chatbots match a user’s input against known patterns and deliver a prepared response, while generative chatbots compose the output themselves, with the help of an underlying deep learning model. A simplified sketch of the two designs appears below.
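To make that distinction concrete, here is a minimal sketch in Python. The canned patterns and the `model.generate` interface are illustrative assumptions, not a description of how ChatGPT or any commercial chatbot is actually built.

```python
import re

# Retrieval-style: match the input against known patterns and return a
# prepared response. The patterns and replies here are hypothetical examples.
CANNED_RESPONSES = [
    (re.compile(r"\b(hours|open)\b", re.I),
     "Our office is open 9 a.m. to 5 p.m., Monday through Friday."),
    (re.compile(r"\b(billing|invoice)\b", re.I),
     "For billing questions, please contact the accounts department."),
]

def retrieval_reply(user_input: str) -> str:
    for pattern, response in CANNED_RESPONSES:
        if pattern.search(user_input):
            return response
    return "Sorry, I don't have an answer for that."

# Generative-style: hand the raw input to an underlying deep learning model,
# which composes a new response rather than selecting a prepared one.
# `model.generate` is a hypothetical interface standing in for that model.
def generative_reply(user_input: str, model) -> str:
    return model.generate(prompt=user_input)

print(retrieval_reply("What are your hours?"))
```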
Utilizing a powerful language model could empower lawyers to automate essential but tedious functions. An advanced language model could quickly and efficiently generate draft contracts or briefs with minimal oversight, allowing attorneys to spend their valuable time finalizing and polishing rather than drafting. Similarly, such a model could analyze large volumes of text to draw crucial insights about precedents and procedures.
This type of assistance could be a boon to the legal industry: if chatbots quickly perform the more mundane tasks, attorneys can focus their time on more substantive work. Clients would win because attorney fees would likely be lower, and attorneys could take on additional clients with the time they save.
But one characteristic of ChatGPT—low interpretability—will likely prevent its wholesale adoption by the legal industry for the time being.
A key concept in machine learning, interpretability can help elucidate why more advanced forms of AI like ChatGPT may not soon be implemented across the legal profession.
Interpretability measures how readily a user, especially a non-expert, can understand the cause-and-effect relationships within a model. Users with ethical obligations to manage risk and bias require highly interpretable models, ones whose methods can be explained and whose results can be understood. That quality is generally associated with less sophisticated models that deliver lower accuracy. Deep learning models, like the one in ChatGPT’s underlying structure, are difficult to interpret: these “black box” models limit a user’s ability to understand how the model arrives at a prediction.
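The trade-off can be seen even with small, conventional models. The sketch below uses scikit-learn as an illustrative stand-in: the linear model’s coefficients can be read directly, while the small neural network spreads its reasoning across thousands of weights. ChatGPT’s model is vastly larger still, which is what makes the “black box” problem acute.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data; real legal-tech features would differ.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# High interpretability: each coefficient states how one feature pushes
# the prediction up or down.
linear = LogisticRegression().fit(X, y)
print("linear coefficients:", linear.coef_.round(2))

# Low interpretability: the same task learned by a small neural network,
# whose behavior is spread across thousands of weights, with no single
# weight explaining any one prediction.
deep = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                     random_state=0).fit(X, y)
print("hidden-layer weight count:", sum(w.size for w in deep.coefs_))
```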
Measures are being taken to make AI more transparent, and thus more interpretable. But until then, deep learning models will continue to face headwinds with legal professionals.
ChatGPT has shown an impressive ability to generate complex code and advanced technical documents, the latter of which is of potentially huge value to the legal profession. And unlike other advanced AI models in this category, ChatGPT has a simple interface that makes it easy to use for even the least technically advanced user. Communicating with the chatbot is as simple as typing a question into the text prompt.
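ChatGPT itself is used through that web interface, but for readers curious about programmatic access, the sketch below queries a related GPT-3.5-family model through OpenAI’s API using the pre-1.0 `openai` Python package. The prompt is a hypothetical example, and a valid API key is assumed.

```python
import os
import openai  # OpenAI's pre-1.0 Python SDK

# Assumes an API key is set in the OPENAI_API_KEY environment variable.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3.5-family model exposed via the API
    prompt="In plain English, list three key obligations in a mutual NDA.",
    max_tokens=200,
    temperature=0.2,  # a low temperature keeps the output conservative
)
print(response["choices"][0]["text"].strip())
```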
So why won’t the legal industry see it on a large-scale basis soon?
AI technology has historically faced implementation challenges in the legal industry driven by trust, security, and cost concerns, and it will likely be no different with ChatGPT. While generative models present a great deal of opportunity, they also come with a swath of ethical and security concerns, most of them tied to ChatGPT’s complex structure.
For one thing, lawyers have a duty of technical competence, and they would be expected to understand how the chatbot works, and what its benefits and risks are, if they use it in their practice. Lawyers who decide to use it must also be able to intelligibly explain these pros and cons to their clients. An additional concern is the risk of data bias that comes with any AI tool. And issues of confidentiality come into play if client data is stored with third parties.
Deep learning language models like ChatGPT are trained on massive amounts of openly available data. ChatGPT is built on GPT-3.5, an updated version of the GPT-3 language model that leverages the same initial training set but adds a human feedback step known as Reinforcement Learning from Human Feedback (RLHF). The current version, GPT-3.5, has 175 billion parameters and was trained on hundreds of gigabytes of text data; its successor, GPT-4, expected to be released in early 2023, is anticipated in some reports to use as many as 100 trillion parameters, a figure OpenAI has not confirmed.
Most, if not all, law firms lack the hardware to manage data at the quantity needed to achieve results comparable to GPT-like models within their own secure firewall. Law firms that decide to use ChatGPT or a similar program must therefore pay extra attention to any data that crosses their network boundary. The scale of data these AI models require makes the job of security exponentially more difficult, and likely quite daunting, for most firms.
ChatGPT will likely not suddenly appear in legal offices in the new year. The introductory steps for AI and machine learning (ML) in the legal industry will more likely involve lower-complexity ML algorithms, tools that process language, classify data, or model straightforward linear relationships, delivered with the kind of accessibility ChatGPT has demonstrated. A sketch of one such lightweight tool follows.
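As one hypothetical illustration of such a lower-complexity tool, the sketch below trains a small, inspectable text classifier to route documents by type. The documents and labels are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: short snippets labeled by document type.
docs = [
    "The parties agree to arbitrate all disputes arising under this agreement.",
    "Defendant moves to dismiss the complaint under Rule 12(b)(6).",
    "This agreement shall be governed by the laws of the State of Delaware.",
    "Plaintiff respectfully requests oral argument on the pending motion.",
]
labels = ["contract", "motion", "contract", "motion"]

# A TF-IDF vectorizer plus logistic regression: simple, fast, and far more
# interpretable than a deep learning model.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(docs, labels)

print(classifier.predict(["The defendant seeks to dismiss all claims."]))
```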
Additionally, although most law firms are still resisting building full data teams, access to data science expertise, whether from in-house staff or outside consultants, gives firms a great opportunity to break down low-interpretability models and to provide valuable strategic, contextual, and technical guidance to attorneys and their clients.