Since it was released in November last year, artificial intelligence chatbot ChatGPT has taken the Internet by storm. Hailed as the most sophisticated form of generative AI yet, the tool has impressed by competently engaging in complex dialogues, drafting essays, summarising documents, and even writing software.
The release of ChatGPT by OpenAI not only prompted major internet giants to rush out rival AI tools but also sent ripples across the legal industry, as the system has shown it can not only write basic legal agreements but also provide rudimentary answers to some legal questions.
With the advent of ChatGPT and similar AI tools, lawyers believe the legal industry is bracing for unprecedented disruption. No one is expecting wholesale job losses just yet, but lawyers are certainly looking forward to workflow efficiencies and lower costs. However, they also warn of risks that come with the technology’s limitations.
HOW DOES CHATGPT WORK?
Simon Bollans, a data protection partner at Stephenson Harwood, explains the rationale behind ChatGPT’s apparent humanised style and cohesive logic.
“ChatGPT is a language processing tool. It uses a deep learning technique to generate text responses based on huge amounts of data it has been trained on. More specifically, it uses a transformer architecture to sift through terabytes of data – such as books, articles, and other documents – to answer questions based on patterns and correlations,” says Bollans.
But he stresses that the popular AI system is incapable of grasping the meaning of the content it generates, and thus lacks the ability to understand the context or make judgments based on ethical or moral considerations.
“ChatGPT provides coherent responses by selecting the statistically most appropriate words based on patterns in language and how words and phrases relate to each other. Given this lack of real-world intelligence, its confidently delivered output may not always be consistent with human expectations or desirable values,” he says.
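The mechanism Bollans describes – choosing the statistically most likely next word from patterns in training text – can be illustrated with a deliberately simplified sketch. ChatGPT itself uses a large transformer network, not the toy bigram model below; the corpus and function names here are invented purely for demonstration.

```python
from collections import Counter, defaultdict

# Invented mini-corpus standing in for "terabytes of training data".
corpus = (
    "the court held that the contract was void "
    "the court found that the contract was valid"
).split()

# Count which word follows which, mirroring the idea of learning
# statistical patterns in how words relate to each other.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Return the word most frequently observed after `prev`."""
    return follows[prev].most_common(1)[0][0]

def generate(start, length=5):
    """Chain next-word predictions into a fluent-looking phrase."""
    words = [start]
    for _ in range(length):
        words.append(next_word(words[-1]))
    return " ".join(words)

print(generate("the"))
```

Note that such a model produces text that reads coherently without any grasp of what the words mean – which is precisely the gap Bollans highlights: statistically plausible output is not the same as correct or contextually sound output.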
WHAT ARE THE PROS AND CONS OF USING CHATGPT IN LAW?
In February, Magic Circle firm Allen & Overy announced a partnership with Harvey, which helps lawyers to conduct research and due diligence using natural language instructions by leveraging the most up-to-date OpenAI large-language model. Bollans expects the legal industry to increasingly adopt these generative AI tools.
According to Bollans, this development is likely to have a substantial impact on the way legal work is conducted today, of a kind “not last seen since the dawn of the Internet.”
“ChatGPT could help lawyers more quickly and efficiently research legal questions and find relevant case law and legislation,” says Bollans.
“Tools like ChatGPT will also likely be deployed to streamline contract drafting, to conduct document reviews to identify relevant information and potential issues or discrepancies, and to analyse legal data to identify patterns that could be used to make predictions about risks and legal outcomes.”
But Bollans cautions against potential moral quandaries, given the inability of ChatGPT and similar AI models to process the information they collect ethically.
“One of the key risks in using ChatGPT is that its output may appear to be correct, when in fact it is wrong or in some way biased or infringing of other work. Any output is also limited by the data it was trained on, which for ChatGPT had a cut-off in September 2021,” says Bollans.
For lawyers, these risks are even higher because tools such as ChatGPT are less likely to fully understand legal concepts and nuances in legal language, according to Bollans.
“AI models can also be difficult to understand and interpret, which makes it challenging for lawyers to explain to clients how they (or the AI) arrived at a particular recommendation,” he adds.
Furthermore, “unlike a trainee or junior lawyer, ChatGPT doesn’t flag any concerns or queries in its work product, and can’t necessarily be effectively interrogated on its rationale or specific sources of information,” warns Bollans.
HOW DO LAWYERS USE TOOLS LIKE CHATGPT?
As the challenges posed by ChatGPT’s limitations become clearer, Bollans suggests that lawyers deploy extra caution when applying ChatGPT in legal work.
For example, “lawyers need to ensure that they have appropriate rights to share client data (including any personal data) with ChatGPT, noting that input data may be subsumed into the training model,” notes Bollans.
“Lawyers should also consider whether their terms of business adequately cater for the use of AI tools (such as apportioning risk and liability appropriately), and whether such use complies with any regulatory rules or codes of practice,” he adds.
Moreover, in a bid to minimise unnecessary chaos, Bollans believes regulators should play a proactive role in introducing clear guidelines and regulations for the use of AI in legal practice.
“These guidelines should address issues such as bias, transparency, ethical considerations, and liability, and should establish clear expectations for the roles and responsibilities of lawyers and AI tools in the decision-making process. It is also important for legal organisations to develop their own policies and procedures for the use of AI in their business, and to ensure that their lawyers are properly trained to use AI tools effectively and ethically,” he says.
As formidable as ChatGPT may seem, Bollans is convinced that the rise of AI tools should be viewed as an incentive for lawyers to upskill instead of a warning that their jobs are at immediate risk.
“Delivering quality legal advice requires human judgment, based on an individual’s experience, expertise, and understanding of the circumstances of a given situation. Complex and novel legal queries also require a detailed understanding of the wider context and tactical considerations,” says Bollans.
“AI and advanced software currently lack this human judgment and are unlikely to be able to make the same level of nuanced decisions that human lawyers can,” he adds.