ALB OCTOBER 2023 (CHINA EDITION)

BIG STORY

In late August of this year, eight companies and institutions, including Baidu, ByteDance, and SenseTime, officially launched their large model services to the Chinese public. The launch marked the initial rollout of large models approved for registration under China's Interim Administrative Measures for Generative Artificial Intelligence Services, commonly known as the "Generative AI Measures."

At the time of the launch, the Generative AI Measures had been in effect for just two weeks, and less than six months had passed since the generative AI frenzy first captivated the world. China's rapid regulatory response to this burgeoning field is remarkable: it is the first country to implement rules governing generative AI. By contrast, the European Union is still negotiating key issues in the draft rules implementing its Artificial Intelligence Act, while the United States relies primarily on self-regulation by leading generative AI players, with preliminary regulations still at the planning stage.

BALANCING SECURITY AND DEVELOPMENT

Discussing China's swift implementation of generative AI regulations, Hilda Li, a partner at Shihui Partners, highlights two primary factors.

"Generative AI has a greater impact on human society than many earlier AI technologies, owing to its technical features, notably its strong interactivity and versatility. Consequently, the associated risks are more acute and pressing," Li says. "Moreover, China already possesses tools for regulating generative AI. For instance, there are existing rules, designs, and concrete practices, such as registration and security assessment, in the domains of algorithm recommendation and deep synthesis management."

However, in this nascent field, Chinese legislation is seeking a balance: ensuring the safety of content generated by large models without stifling innovation. The Generative AI Measures reflect legislative wisdom in this regard.

Li explains, "During the initial public consultation on the Generative AI Measures in April this year, the draft was notably more stringent. We assisted numerous leading AI companies in submitting suggested revisions, many of which were ultimately adopted, underscoring the regulators' appreciation for and respect of industry insights."

Li adds, "On closer examination of the provisions, one can see that the Generative AI Measures introduce very few new obligations. The majority of the provisions refine existing legal frameworks within the generative AI domain. Particularly intriguing is the word 'interim' in the title, which is relatively rare in recent science and technology legislation adopted by the national cyberspace administration. It signals the regulators' stance of vigilant monitoring of emerging technologies like generative AI, and suggests the measures may be adjusted as the technology evolves."

IDENTIFYING AND RESPONDING TO RISK

Regarding market discussion of the initial set of large models that have obtained "licenses" under the Generative AI Measures, Li offers her insights.
She notes that while the measures have established a robust regulatory framework, "strictly speaking, the term 'examination and approval' is not explicitly used in the text. Instead, a 'renvoi' approach is taken. Essentially, the security assessment and algorithm registration for generative AI primarily follow the Provisions on Security Assessment of Internet Information Services with Public Opinion Attributes or Social Mobilization Capabilities and the Provisions on Algorithms, among others."

Indeed, when the aforementioned eight companies and institutions made their products publicly accessible, none explicitly stated that their large models had "successfully completed registration."

Li also notes that, beyond the regulatory provisions, "there are templates and essential assessment points. Businesses need guidance on the relevant filings, which cover a multitude of specified elements. Companies must transparently and comprehensively explain the fundamental information about their algorithm models to regulatory authorities while assessing and managing risks."

Because the registration and risk assessment processes are conducted primarily by the enterprises themselves, legal experts well versed in the regulators' approach in this domain act as significant gatekeepers throughout the procedure.

Li explains that Shihui Partners currently offers clients in the generative AI field two versions of its advice. "One is a comprehensive version featuring detailed lists and regulations. This version needs to be customized to specific circumstances." The other, "simplified" version involves enterprises first understanding how regulatory authorities evaluate risks, then managing those risks through internal rules and technical measures.

Li emphasizes, "Having risks is not alarming. What is concerning is having no mechanism in place to isolate risks and respond swiftly. Compliance efforts should therefore focus on risk isolation and prompt response."

KEY ROLE PLAYED BY LAWYERS

Within the framework of the Generative AI Measures, beyond helping companies enter the market swiftly and securely and begin commercializing their large models, external lawyers can contribute in a versatile manner.