Ever since generative artificial intelligence (Gen AI) took large parts of the world by storm last year, policymakers and regulators around the globe have been playing catch-up with a rapidly evolving technology poised to reshape how we work and live.
In Singapore, an aspiring regional tech hub that has long walked a tightrope between innovation and regulation when it comes to emerging technologies, the government has taken the lead in crafting guidelines governing the use of AI, with a focus on personal data protection and ethical application.
“Singapore’s regulators take a measured and pragmatic approach towards addressing AI-related issues,” say Lim Chong Kin, head of the telecommunications, media and technology practice at Drew & Napier, and Cheryl Seah, a director in the same practice group at the Singapore Big Four firm.
While noting that the government has yet to make legislative amendments, the duo point out that various government agencies and departments have crafted a series of guidelines.
For example, the Infocomm Media Development Authority introduced the Model AI Governance Framework as far back as January 2019 to guide organisations in developing and using AI responsibly. The Ministry of Health, too, has introduced the Artificial Intelligence in Healthcare Guidelines.
Lim and Seah note that Singapore’s regulators have enjoyed a close partnership with the industry, as they believe that no single entity (government, industry or research institute) holds all the answers on how best to regulate the use of AI.
“Many of Singapore’s key AI documents – e.g. the Model AI Governance Framework, as well as AI Verify (an AI Governance Testing Framework and Toolkit) – were developed in consultation with the industry,” say Lim and Seah, adding that a series of public consultations were also conducted to seek public feedback on the use of AI in biomedical research, and how personal data may be used to develop and deploy AI systems.
However, the varying nature and requirements of different industries make it challenging to build an AI testing framework that factors in the full spectrum of risks and accommodates an exhaustive range of applications. And because the technology is so nascent, even defining what AI is, and hence what constitutes an AI system, is no easy feat.
Other challenges in regulating AI include ensuring that “it is not prohibitive for businesses (especially small businesses) to comply with the testing processes (especially if testing is mandated before the AI system can be put on the market),” say Lim and Seah. “And if external auditors are to have a role in AI testing processes, to ensure that they are qualified/accredited. Regulators will thus need to develop deep expertise in this area too.”
One reason regulators are charting AI governance frameworks with such urgency is the set of key risks stemming from AI applications, whose use has been rising exponentially.
Lim and Seah highlight intellectual property (IP) as one of the key areas where the risks associated with generative AI are drawing scrutiny and sparking controversy. Take, for instance, copyrighted material used to train an AI model without the consent of the copyright holders.
“Singapore’s Copyright Act 2021 has provisions concerning fair use (section 190) as well as for computational data analysis (section 244), although some academics have taken the view that section 244 will not apply to AI that has a generative rather than analytical function,” Lim and Seah note. No local court decisions have yet addressed the question, nor have Singapore’s regulators laid out a definitive position on the matter.
The pair also cite the Personal Data Protection Act (PDPA) as an important page in Singapore’s AI regulatory playbook. The PDPA imposes obligations on organisations concerning the collection, use and disclosure of personal data, regardless of the technology involved.
In addition, when questions of liability arise because an AI application does not perform as expected, causing physical harm, financial loss or intangible harms such as discrimination, Lim and Seah believe existing tort and contract law principles have the answers.
“The unique features of AI (it is a black box and can learn from experience without being explicitly programmed) may pose some challenges to these principles, but the common law develops incrementally and flexibly, and we are confident that our courts will be able to deal with it,” they say.
“Singapore already had a case (Quoine Pte Ltd v B2C2 Ltd [2020] SGCA(I) 02) which dealt with algorithms executing contracts without involving humans, although the program in that case was deterministic (i.e. it will always produce the same output from the same input and not develop its own responses to varying conditions). It would be interesting to see how the principles apply to AI that is non-deterministic,” add Lim and Seah.
At the end of the day, the pair are confident that regulators can stay flexible and amend legislation as the situation evolves. “Ultimately, the responsible use of AI is what is important, more so than how the use is regulated,” they say.