Q&A: “LLMs have the potential to change the way we practise”

By Sarah Wong

Yeong Zee Kin became the chief executive of the Singapore Academy of Law in April, following a career geared towards technology regulation and data protection in both the public and private sectors. The former deputy personal data protection commissioner shares his thoughts on AI, and how Singapore’s legal profession can evolve to make the best use of it.

 

ALB: How will your experience in the Singapore government contribute to your new role?

Yeong: Each job prepares you for the next. The Singapore Academy of Law plays important roles in the legal sector, which may broadly be categorised as professional development, development of Singapore law and the provision of trust services. These are not dissimilar to the functions of government: capability and capacity building, industry development and the provision of public services. My experience in the government encompassed these areas and will, therefore, be directly relevant to my new role as Chief Executive of the Academy.

In my previous role at the Infocomm Media Development Authority and the Personal Data Protection Commission, I focused on data innovation, data protection and AI ethics. My responsibilities included developing governance frameworks for responsible AI, shaping data-driven technology policies and positioning Singapore as a thought leader in these areas. Having implemented AI projects and been intimately involved in internal proof-of-concept projects involving large language models, I am also able to bring these experiences into my current role.

ALB: With your experience in AI ethics and data privacy issues, what are your predictions for the direction of Singapore’s legal market, and what role is SAL planning to play along the way?

Yeong: We already know that generative AI, such as large language models (LLMs), can produce legal documents. Given the right training dataset, the quality of the draft that LLMs can produce will improve. We can expect that as the market starts discovering how to make use of LLMs, there will be fine-tuned versions for the legal domain. When these appear, the average person on the street will have access to a tool that can help generate legal documents.

This will have an impact on lawyers in at least two areas. First, lawyers must become familiar with such tools and start using them. Only by doing so will we know their limits and discover how lawyers, too, can benefit from using them appropriately and responsibly. For example, when we use services like ChatGPT, we learn how they can provide a draft to get us past writer’s block and give us some inspiration. We also discover that the service operates by sending the content of our prompts to the service provider, who may choose to use it for further training of their models. Equipped with this knowledge, we are better able to safeguard confidential client information (don’t include it in your prompts) and advise our clients.

Second, we are cognisant that the proliferation of AI will affect how we plan the professional development of junior lawyers. If LLMs make obtaining a non-disclosure agreement much easier for clients, we can expect clients’ expectations of their legal service providers to be elevated. In a commercial world where such tools are accessible to clients, we need to review how we train our junior lawyers and help them move up the value chain, and to do so in a much shorter period than before. The impact is not limited to junior lawyers. The availability of such tools will put pressure on senior members of the profession to rethink how legal services ought to be delivered.

ALB: What are the opportunities generative AI offers the legal profession?

Yeong: There is currently a frisson of innovative energy, as the market discovers how to make use of LLMs. I am supporting a research project using LLMs to interrogate a repository of documents and am also aware of legal tech startups in Singapore developing this as a feature for their products. This can help lawyers looking for quick answers but does not replace the need for lawyers to be familiar with their case files and documentary evidence.
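To make that workflow concrete: retrieve the passages most relevant to a question from the repository, then ask the model to answer only from those passages. The snippet below is a minimal, hypothetical sketch of that pattern; the sample clauses, the TF-IDF retriever and the ask_llm() stub are illustrative assumptions, not the actual research project or startup products mentioned above.

```python
# Hypothetical sketch: answer questions over a small document repository by first
# retrieving the most relevant passages, then passing them to an LLM as context.
# The repository, the TF-IDF retrieval step and ask_llm() are illustrative stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

repository = [
    "Clause 4.2: The supplier shall indemnify the customer against third-party claims.",
    "Clause 7.1: Either party may terminate with 30 days' written notice.",
    "Clause 9.3: Confidential information must not be disclosed without consent.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by TF-IDF cosine similarity to the question."""
    vectorizer = TfidfVectorizer().fit(docs + [question])
    doc_vecs = vectorizer.transform(docs)
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vecs)[0]
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:top_k]]

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to whichever LLM a given project actually uses."""
    return f"[model answer based on a prompt of {len(prompt)} characters]"

question = "How much notice is needed to terminate?"
context = "\n".join(retrieve(question, repository))
print(ask_llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```

The point of the retrieval step is the same one Yeong makes: the model helps surface quick answers, but the lawyer still has to know, and check, the underlying documents.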

We expect the question-and-answer user experience to replace the keyword searches that the current generation of lawyers has grown accustomed to. But just as search engines are only tools to help with legal research, innovative solutions that provide a Q&A approach to legal research cannot replace the fundamental need for lawyers to know their law, to be familiar with the authorities they intend to rely on and (equally) with those cases that are not favourable to their position.

I was also involved in research that used an LLM in conjunction with a trusted repository to check draft contracts. So, instead of just using LLMs to generate, we can use them to verify.
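As a rough illustration of verification rather than generation, one could check each clause of a draft against a trusted clause library and ask a model to flag substantive deviations. The clause names and the check_with_llm() helper below are hypothetical stand-ins for whatever repository and model such a project would actually use.

```python
# Hypothetical sketch of "verify rather than generate": compare each clause of a
# draft contract against a trusted clause library and flag substantive differences.
# check_with_llm() is an illustrative stub for a real model call.
trusted_clauses = {
    "termination": "Either party may terminate with 30 days' written notice.",
    "confidentiality": "Confidential information must not be disclosed without consent.",
}

draft_clauses = {
    "termination": "Either party may terminate with 7 days' verbal notice.",
    "confidentiality": "Confidential information must not be disclosed without consent.",
}

def check_with_llm(reference: str, draft: str) -> str:
    """Placeholder for an LLM call that would compare the two clauses in substance."""
    return "matches reference" if reference == draft else "flag for review: differs from reference"

for name, draft_text in draft_clauses.items():
    print(f"{name}: {check_with_llm(trusted_clauses[name], draft_text)}")
```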

As you can see, there is a green field ahead of us. LLMs have the potential to change the way we practise. Lawyers should, therefore, try to understand these tools, make efforts to experiment with them, and discover how we might benefit from this new technology.

 
