FORUM: Tread Carefully

by Sarah Wong

Andrew Wong, Danielle Benecke, Michelle Mahoney

With the emergence of generative artificial intelligence heralding a new era of productivity, law firms are weighing the use of AI to improve efficiencies and cut costs at a time when economic uncertainty is dampening revenue prospects. But as the technology's inherent flaws come to light, not to mention the potential for abuse, firms need to move with caution to deal with the flip side of AI.

What measures is your firm taking to address the risks associated with AI as you embed these specialist tools in your operations to complete legal tasks and drive business outcomes?

ANDREW WONG, innovation and knowledge management solutions product manager, Dentons Rodyk: As a global law firm committed to upholding professional and ethical obligations, we approach disruptive technologies with curiosity as well as caution, mindful of our duties to clients around data privacy and confidentiality. We take a measured approach, prioritising education and an understanding of how the technology operates. This allows us to make informed policy decisions about its acceptable use within the legal domain.

Generative AI holds significant transformative potential and has already gained widespread adoption. However, it also comes with limitations and security implications that need to be addressed. For example, tools such as ChatGPT cannot guarantee the veracity of their output, so users must exercise caution and independently verify the results.

To tackle these challenges, we have adopted a two-pronged approach:

  • We have adopted and communicated Dentons' global policy on the acceptable use of generative AI, ensuring that all firm members are aware of the guidelines. Additionally, we have circulated an internal white paper providing extensive education and background information on the topic while highlighting specific risks and considerations relevant to the legal industry.
  • We have organised in-house sessions that serve as both educational tools and forums for vital discussions on generative AI. These enable our firm members to stay informed on the latest developments, share insights, and collectively address the associated challenges.

By taking these proactive measures, we stay ahead of the curve in leveraging generative AI while mitigating the risks it may pose, enabling us to navigate this evolving technology landscape responsibly and effectively.

DANIELLE BENECKE, founder and global head of machine learning practice, Baker McKenzie: Baker McKenzie has been piloting OpenAI's GPT series and other large language, or foundation, models since well before the release of ChatGPT, when other firms and industry players started to take notice. Client and industry feedback is that our application and understanding of the impact of AI, including more recent advances in generative AI, is market-leading.

There's an important distinction between a firm or other industry player's use of an application like ChatGPT, on the one hand, and the use of the underlying large language models on the other. For instance, we do not allow our people to use an external "consumer" app like ChatGPT in connection with confidential client data or information. It goes without saying that such use would raise significant legal privilege, data governance, IT security and other risks.

For other types of use cases, we have adopted what we call a "permission and support" governance framework. We don't have a blanket prohibition but ask teams who want to work with such apps or models to centrally coordinate so we can: (1) manage the risks; (2) understand use cases and where demand is coming from; and (3) provide guidance and support.

We also have a growing set of internal tools and capabilities that enable the use of models such as the GPT series with appropriate safeguards, including around IT security, professional responsibility and client permission. Legal industry players need to think about their governance and investment strategy across multiple layers of the commercial and technology stack. The tech and market conversation has already evolved to a point where this is not about any one app; it's about using foundation models, and the apps powered by them, more broadly.

MICHELLE MAHONEY, executive director of innovation, King & Wood Mallesons: King & Wood Mallesons has been using AI models on client engagements since 2015. However, as with any new technology, there are caveats: large language model (LLM) tools like ChatGPT are not deterministic and carry risks around reliability, accuracy, provenance, output quality and privacy. To mitigate these risks, the firm has implemented a number of guardrails.

While the firm encourages staff to learn, test and explore embedding, prompt engineering and elaboration skills with tools like ChatGPT, no client data (in any format), firm data or information about individuals is allowed to be used during experimentation and learning.

As the generative AI model landscape continues to evolve at pace, we are developing a robust AI risk management framework and an AI ethical use policy, as well as training for employees to support implementation. We are also watching key developments worldwide, including the European Parliament's recent vote to advance the EU AI Act, discussion surrounding AI inputs (data used when training an AI system) and outputs (data generated by an AI system), and the application of copyright law. Earlier this month, Japan's government indicated that using datasets for training AI models does not violate copyright law.

King & Wood Mallesons remains committed to supporting innovation and building the digital literacy of our people. We are excited by the role AI tools will play in how we deliver value for our clients and in creating new growth opportunities.

TO CONTACT THE EDITORIAL TEAM, PLEASE EMAIL ALBEDITOR@THOMSONREUTERS.COM
