ALB JUNE 2023 (ASIA EDITION)

BRIEFS / FORUM

TREAD CAREFULLY

With the emergence of generative artificial intelligence heralding a new era of productivity, law firms are weighing the use of AI to improve efficiency and cut costs at a time when economic uncertainty is dampening revenue prospects. But as the technology’s inherent flaws come to light, not to mention its potential for abuse, firms need to move carefully to deal with the flip side of AI.

HOW IS YOUR FIRM TAKING MEASURES TO ADDRESS THE RISKS ASSOCIATED WITH USING AI WHEN EMBEDDING THESE SPECIALIST TOOLS IN YOUR OPERATING SYSTEM TO COMPLETE LEGAL TASKS AND DRIVE BUSINESS OUTCOMES?

ANDREW WONG, innovation and knowledge management solutions product manager, Dentons Rodyk

As a global law firm committed to upholding professional and ethical obligations, we approach disruptive technologies with curiosity and, at the same time, caution, remembering our duties to clients on data privacy and confidentiality. We take a measured approach, prioritising education and an understanding of how the technology operates. This allows us to make informed policy decisions regarding its acceptable use within the legal domain.

Generative AI holds significant transformative potential and has already gained widespread adoption. However, it also comes with limitations and security implications that need to be addressed. For example, tools such as ChatGPT cannot guarantee the veracity of their output, so users must exercise caution and independently verify the results.

To tackle these challenges, we have adopted a two-pronged approach:

- We have adopted and communicated Dentons’ global policy on the acceptable use of generative AI, ensuring that all firm members are aware of the guidelines. Additionally, we have circulated an internal white paper providing extensive education and background information on the topic while highlighting specific risks and considerations relevant to the legal industry.
- We have organised in-house sessions that serve as both educational tools and forums for vital discussions on generative AI. These enable our firm members to stay informed on the latest developments, share insights, and collectively address the associated challenges.

By taking these proactive measures, we stay ahead of the curve in leveraging generative AI while mitigating the potential risks it may pose, enabling us to navigate this evolving technology landscape responsibly and effectively.

DANIELLE BENECKE, founder and global head of machine learning practice, Baker McKenzie

Baker McKenzie has been piloting OpenAI’s GPT series and other large language or foundation models since well before the release of ChatGPT, when other firms and industry players started to take notice. Client and industry feedback is that our application and understanding of the impact of AI, including more recent advances in generative AI, is market-leading.

There’s an important distinction between a firm’s or other industry player’s use of an application like ChatGPT, on the one hand, and the use of the underlying large language models on the other. For instance, we do not allow our people to use an external “consumer” app like ChatGPT in connection with confidential client data or information. It goes without saying that such use would raise significant legal privilege, data governance, IT security and other risks.

For other types of use cases, we have adopted what we call a “permission and support” governance framework. We don’t have a blanket prohibition but ask teams who want to work with such apps or models to coordinate centrally so we can: 1. manage the risks; 2. understand use cases and where demand is coming from; and 3. provide guidance and support.

We also have a growing set of internal tools and capabilities that enable the use of models such as the GPT series with appropriate safeguards, including around IT security, professional responsibility and client permission. Legal industry players need to think about their governance and investment strategy across multiple commercial and technology-stack layers. The tech and market conversation has already evolved to a point where this is not about any one app - it’s about using foundation models and apps powered by them more broadly.
