What signals does the first generative AI regulatory document, jointly issued by seven departments, send?

  The booming AI industry has officially received its first regulatory document.

  Following the public consultation on the Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment) launched by the Cyberspace Administration of China (CAC) in April this year, on July 13th seven departments including the CAC officially issued the Interim Measures for the Management of Generative Artificial Intelligence Services (hereinafter the "Measures"), which took effect on August 15th, 2023.

  An official of the Cyberspace Administration of China said that the Measures aim to promote the healthy development and standardized application of generative artificial intelligence, safeguard national security and the public interest, and protect the legitimate rights and interests of citizens, legal persons and other organizations.

  Generative artificial intelligence refers to technology that generates text, images, audio, video, code and other content based on algorithms, models and rules. Generative AI, as represented by OpenAI's ChatGPT, has set off a new "AI arms race" among domestic and foreign technology giants and entrepreneurs, including Microsoft, Google, Meta, Baidu and Alibaba.

  The newly issued Measures contain 24 articles, which set out requirements for generative AI service providers ranging from algorithm design and filing, training data and models, to user privacy, trade secret protection, supervision and inspection, and legal liability. At the same time, the Measures make clear an attitude of support and encouragement toward the generative AI industry.

  Wu Shenkuo, a doctoral supervisor at Beijing Normal University's School of Law and deputy director of the Internet Society of China Research Center, told CBN that the regulatory document on generative artificial intelligence was issued quickly, reflecting regulation that keeps pace with the development and application of the technology, and showing that China's internet and digital regulation is evolving to become increasingly mature, agile and efficient.

  Many practitioners told CBN that the Measures emphasize practical enforceability and embody the basic ideas of risk prevention, risk response and risk management. Their implementation is of great significance for promoting industrial development and creating a sound innovation ecosystem for generative AI.

  What regulatory signals does it send?

  Wu Shenkuo told CBN that the newly promulgated Measures stand out in three respects. First, they emphasize a classified and graded management mechanism for AIGC, highlighting the idea of introducing different regulatory mechanisms for different risk types. Second, they pay attention to cultivating the AIGC industrial ecosystem, especially the sharing of computing power resources and the construction of public pre-training data platforms. Third, they emphasize domestic and international exchange and cooperation. On the scope of application, they distinguish between services provided to the domestic public and business types that are not, clarifying where the Measures apply.

  Comparing it with the earlier Draft for Comment, CBN found that the Measures released today add many provisions encouraging generative AI services.

  For example, Articles 5 and 6 in Chapter II, "Technology Development and Governance", mention encouraging the innovative application of generative AI technology across industries and fields, generating positive, healthy, high-quality content, exploring and optimizing application scenarios, and building an application ecosystem. They also encourage independent innovation in basic technologies such as algorithms, frameworks, chips and supporting software platforms for generative AI, international exchange and cooperation on an equal and mutually beneficial basis, and participation in formulating international rules for generative AI.

  The Measures also state that effective measures should be taken to encourage the innovative development of generative AI, and that inclusive, prudent, classified and graded regulation should be applied to generative AI services.

  On computing power, a particular concern of the industry, the Measures mention promoting the construction of generative AI infrastructure and public training data resource platforms; promoting the collaborative sharing of computing power resources and improving their utilization efficiency; promoting the orderly, classified opening of public data and expanding high-quality public training data resources; and encouraging the use of safe and reliable chips, software, tools, computing power and data resources.

  On service provider access, the Measures also state that foreign investment in generative AI services must comply with relevant laws and administrative regulations on foreign investment.

  Compared with the Draft for Comment, the wording of the Measures has been adjusted in places.

  "Talking nonsense in earnest", or hallucination, is one of the industry's chief criticisms of generative AI. The Draft for Comment required that content generated by generative AI be true and accurate, and that measures be taken to prevent the generation of false information. The Measures update this: providers should, based on the characteristics of the service type, take effective measures to improve the transparency of generative AI services and the accuracy and reliability of generated content.

  In addition, the Measures state that departments for cyberspace administration, development and reform, education, science and technology, industry and information technology, public security, radio and television, and press and publication should strengthen the management of generative AI services according to their respective responsibilities.

  Industry figures pointed out to reporters that the Measures embody a degree of fault tolerance, which better matches reality and makes implementation more feasible.

  "Many application fields can tolerate imperfect large models. For example, whether a game character's beard comes out a bit longer or shorter; an occasional mistake may be harmless. But some fields are critical and cannot tolerate mistakes, such as news search, government websites, or medicine and education. In those fields the problem of large-model errors will need to be solved," one large-model practitioner said.

  The Measures also place greater emphasis on the protection of minors.

  The Draft for Comment said appropriate measures should be taken to prevent users from over-relying on or becoming addicted to generated content; the Measures update this to require effective measures to prevent underage users from over-relying on or becoming addicted to generative AI services.

  On supervision, the Measures state that the relevant competent departments should supervise and inspect generative AI services according to their duties, and that providers should cooperate in accordance with the law, explaining the source, scale, type, labeling rules and algorithmic mechanisms of training data as required, and providing necessary technical and data support and assistance.

  The Measures also contain many provisions protecting personal privacy and trade secrets. For example, institutions and personnel involved in the security assessment, supervision and inspection of generative AI services must keep confidential any state secrets, trade secrets, personal privacy and personal information they learn in performing their duties, and must not disclose them or illegally provide them to others.

  How will the industry be affected?

  Li Shuzhen, vice president of China Electronic Cloud, said his first reaction on seeing the Measures was: "The timing is right!" He was particularly struck by Article 6, which encourages "independent innovation in basic technologies such as algorithms, frameworks, chips and supporting software platforms for generative AI". "NVIDIA cards are now hard to come by, and we can no longer let large models be held back by a computing system that is being reshaped," he told CBN.

  He also said the Measures will standardize applications and scenarios in the large-model industry, so that AI technology can better serve the economy and high-quality industrial development.

  Tian Feng, president of SenseTime's Intelligent Industry Research Institute, commented to CBN that the Measures play a leading role in global "AI 2.0" governance, are enforceable, and are of great significance for promoting industrial development.

  Chen Yunwen, CEO of Daguan Data, told CBN: "The development of the industry will be gradually standardized. The introduction of the Measures plays a guiding role for generative large-model services. Generative AI technology is very new and very hot, and its next stage of development depends on this system."

  Another practitioner said the Measures not only attach importance to risk prevention but also embody a degree of fault tolerance and error correction, striving for a dynamic balance between regulation and development.

  Topics mentioned in the Measures, such as safety, trustworthiness and supervision, have also drawn attention and discussion among many large-model practitioners.

  Li Yanhong, chairman of Baidu, said recently that only by establishing and improving laws, regulations, institutional systems and ethics to ensure the healthy development of artificial intelligence can a sound innovation ecosystem be created.

  Zhou Hongyi, founder of 360, said it is necessary to build proprietary large models that are "safe, trustworthy, controllable and easy to use". The key to making a model "safe and controllable", he argued, is to adhere to an "assistant mode": position the large model as an assistant to enterprises and employees, a "co-pilot" that provides help, so that human will plays the decisive role throughout the decision-making loop.

  Zhang Yong, chairman and CEO of Alibaba Cloud Intelligence Group, also said that "building safe and trustworthy artificial intelligence" has gradually become an industry consensus, and that relevant laws and regulations are being improved, cultivating good soil and a good environment for the sustainable development of the technology and the industry. "There is a lot of uncertainty in innovation. Some problems can be predicted in advance and nipped in the bud; others arise in the course of development and need to be solved while developing, and through development."

  Regulatory measures brewing worldwide

  Not only in China: ChatGPT-style generative AI (AIGC) models have triggered a race for capital, and countries' attention to AIGC compliance is driving the introduction of corresponding regulatory measures.

  Europe has long been at the forefront of AI regulation. In May this year, the European Parliament approved the first comprehensive artificial intelligence bill. "We want artificial intelligence systems to be accurate, reliable, safe and non-discriminatory, regardless of their source," European Commission President Ursula von der Leyen said on May 19th.

  At the Group of Seven (G7) leaders' summit held in Japan in May this year, the leaders acknowledged the need to govern artificial intelligence and immersive technologies, and proposed creating a ministerial forum dedicated to AI development by the end of this year to discuss issues surrounding generative AI, such as copyright and combating disinformation.

  The UK's competition regulator also said in May that it would begin examining the impact of artificial intelligence on consumers, businesses and the economy, and whether new regulatory measures are needed.

  Ireland's data protection agency said in April that generative AI needs to be regulated, but that regulators must first figure out how to regulate it properly rather than rushing into bans that "really won't stand up".

  The National Institute of Standards and Technology, under the U.S. Department of Commerce, said in June that it would set up a public working group of generative AI experts and volunteers to help seize the industrial opportunities AI brings and to formulate guidance for addressing its risks. The U.S. Federal Trade Commission said in May that it is committed to using existing laws to rein in the risks of artificial intelligence.

  Japan's privacy watchdog said in June that it had warned OpenAI not to collect sensitive data without the public's permission. Japan is expected to introduce regulatory measures by the end of 2023 that may be closer to the U.S. approach than to the strict rules planned by the European Union, because Japan hopes the technology will boost economic growth and make it a leader in advanced chips.

  On June 12th, UN Secretary-General António Guterres backed a proposal from some AI executives to establish an AI watchdog modeled on the International Atomic Energy Agency. Guterres also announced plans to launch a high-level AI advisory body by the end of this year to regularly review AI governance arrangements and make recommendations.

  Facing the global trend toward regulating generative AI, Wu Shenkuo told CBN that, in the face of new technologies and applications, it is necessary to keep exploring agile and efficient regulatory mechanisms and methods, so as to respond to the associated risks promptly and to the greatest extent possible. In addition, a set of economical, convenient and feasible compliance guidelines needs to be continuously built and improved, so that all parties have clear compliance standards and direction.

  "Good ecological governance also requires all parties to reach consensus to the greatest extent possible, forming, on a larger scale, shared values and commonly recognized codes of conduct for governing new technologies and new applications," he said.

  It can be said that every technological revolution brings both great opportunities and great risks. For generative AI, only by establishing a flywheel between real user calls and model iteration can AI models become ever smarter; and how to strike a balance between regulation and technological development is a test for regulators worldwide. (Reporter Fan Xuehan also contributed to this article.)