ALB JANUARY FEBRUARY 2024 (ASIA EDITION)

BRIEFS

SINGAPORE LOOKS TO TAKE LEAD IN ETHICAL AI GOVERNANCE

Ever since generative artificial intelligence (Gen AI) took large parts of the world by storm last year, policymakers and regulators globally have been playing catch-up with a rapidly evolving technology poised to reshape existing ways of work and life. Singapore, an aspiring regional tech hub that has always walked a tightrope between innovation and regulation when it comes to emerging technologies, has been taking the lead in crafting guidelines governing the use of AI, with a focus on personal data protection and ethical application.

“Singapore’s regulators take a measured and pragmatic approach towards addressing AI-related issues,” say Lim Chong Kin, head of the telecommunications, media and technology practice at Drew & Napier, and Cheryl Seah, a director in the same practice group at the Singapore Big Four firm.

While noting that the government has not yet made legislative amendments, the duo point out that different governmental agencies and departments have crafted a series of guidelines. For example, the Infocomm Media Development Authority introduced the Model AI Governance Framework as far back as January 2019 to guide the responsible development and use of AI by organisations. The Ministry of Health, too, has introduced the Artificial Intelligence in Healthcare Guidelines.

Lim and Seah note that Singapore’s regulators have enjoyed a close partnership with the industry, as they believe that no single entity (government, industry or research institute) holds all the answers on how best to regulate the use of AI.

“Many of Singapore’s key AI documents – e.g. the Model AI Governance Framework, as well as AI Verify (an AI Governance Testing Framework and Toolkit) – were developed in consultation with the industry,” say Lim and Seah, adding that a series of public consultations was also conducted to seek feedback on the use of AI in biomedical research, and on how personal data may be used to develop and deploy AI systems.

However, given the varying nature and requirements of different industries, it is challenging to build an AI testing framework that can factor in the full spectrum of risks and accommodate an exhaustive range of applications. And because the technology is still nascent, even defining what AI is, and hence what constitutes an AI system, is no easy feat.

Other challenges in regulating AI include ensuring that “it is not prohibitive for businesses (especially small businesses) to comply with the testing processes (especially if testing is mandated before the AI system can be put on the market),” say Lim and Seah. “And if external auditors are to have a role in AI testing processes, to ensure that they are qualified/accredited. Regulators will thus need to develop deep expertise in this area too.”

One reason regulators are charting AI governance frameworks with such an acute sense of urgency is the set of key risks stemming from AI applications, whose use has been rising exponentially. Lim and Seah highlight intellectual property (IP) as one of the key areas where the risks associated with generative AI are drawing scrutiny and sparking controversy. Take, for instance, copyrighted material used to train an AI model without the consent of the copyright holders.
“Singapore’s Copyright Act 2021 has provisions concerning fair use (section 190) as well as for computational data analysis (section 244), although some academics have taken the view that section 244 will not apply to AI that has a generative rather than analytical function,” Lim and Seah note. No court

Image: Ivan Kurmyshov/Shutterstock.com
