Editor's note: The Cyberspace Administration of China is seeking the public's views, from April 11 to May 10, on draft measures aimed at managing emerging foreign and domestic artificial intelligence (AI) services. These requirements cover a wide range of issues that are frequently debated in relation to the governance of generative AI, such as data protection and security. Three experts share their views on the issue with China Daily.
Time to specify AI service management norms
By Liu Feng
The era of generative artificial intelligence is here. ChatGPT attracted about 100 million users in just two months, becoming a phenomenon in no time and prompting major internet companies to accelerate the development of their own generative AI service platforms.
While Google has begun public testing of its “Bard” chatbot, Anthropic, a company founded by a former vice-president of OpenAI, the developer of ChatGPT, has released “Claude+”. In China, Baidu has launched “ERNIE Bot” and Alibaba has released “Tongyi Qianwen”, while Tencent, 360, Huawei and other internet giants have announced their own plans for generative AI services.
Generative AI technology not only generates high-quality and diverse content such as texts, images and music, but can also be used in automatic programming, machine learning, data generation, game development, virtual reality and other fields. It has already started changing the way we live and work, creating new opportunities for different industries. In terms of technology, generative AI is driving development in fields such as natural language processing, computer vision and audio processing, offering new methods and ideas for solving complex scientific problems.
On the economic front, generative AI has provided new growth points and business models for industries such as media, education, entertainment and advertising, helping boost economic growth. On the social front, it has given people new ways to communicate and new sources of entertainment, including chatting, gaming and reading, enriching their lives.
However, as generative AI technology continues to advance and spread to different fields, it poses significant potential risks and challenges. First, it can deepen the problem of information distortion and abuse. A recent Europol report, for example, said that the likelihood of tools like ChatGPT being used for crime is increasing. Such tools can be used for phishing and online fraud, as well as for impersonating the speaking style of specific individuals or groups, persuading potential victims to trust criminals.
Second, there is the serious problem of algorithmic bias. Unbalanced data sets, biased feature selection or subjective human labeling can all lead to algorithmic bias, deepening social and economic inequality and influencing political decision-making.
Third, deep fakes are a serious problem. Generative AI technology can create extremely realistic fake texts, images, audio and video, fabricating false news that misleads the public and could trigger political unrest.
Fourth, infringement of privacy is a major issue that requires greater attention. Generative AI systems collect, store and use personal information, which could be misused or leaked, infringing on personal privacy. More important, generative AI may affect human creativity and thinking ability, making people overly reliant on machine-generated content. It could change social relationships and values, reduce human communication (including emotional communication) and interaction, and cause people to lose their independent judgment and critical thinking.
We need to fully recognize these risks and challenges, and strive to establish targeted, comprehensive standards for managing generative AI services at four levels, balancing technological development against social interests, in order to ensure the technology is safe, ethical and serves humanity.
At the technical level, we need to strengthen the development of trustworthy AI, ensuring it is accurate, interpretable and controllable, and preventing it from generating erroneous or harmful content. We also need to strengthen security protections and improve tools for detecting fake content produced by generative AI, to prevent the technology from being tampered with or abused.
At the legal level, it is necessary to formulate and improve laws, regulations and standards related to generative AI, clearly define the scope of its use and the attendant responsibilities, protect the legal rights and interests of all parties, and punish illegal acts. An effective judicial relief and dispute resolution mechanism also needs to be established to promptly handle disputes caused by generative AI.
At the ethical level, it is important to establish and abide by ethics and codes of conduct related to generative AI, respect human dignity, autonomy and diversity, and maintain social fairness, justice and stability. It is also necessary to spread ethical education on generative AI and raise the ethical awareness and literacy of its users and audiences.
At the social level, there is a need to promote communication and cooperation between the generative AI sector and other sectors of society, enhance mutual understanding and trust, and explore ways of realizing the technology's potential and creating value. Society should also pay attention to the impact of generative AI on the socioeconomic, cultural and educational fields, and proactively respond to the challenges and opportunities it gives rise to.
The author is a researcher at the Institutes of Science and Development, Chinese Academy of Sciences.
Striking a balance between security and tech innovation
By Chen Zhi
Generative artificial intelligence, represented by ChatGPT, has become popular worldwide in a very short time. Though disruptive, the technology has been welcomed by the global technology and industrial communities for the value it brings, with internet giants around the world rushing to embrace it and many industries already using generative AI products as productivity tools.
However, increasingly powerful AI systems have raised many concerns. On March 29, in a white paper on regulating the AI industry, the United Kingdom government proposed a comprehensive regulatory approach to AI technology. Countries such as Italy and Canada, too, have highlighted the data security risks posed by ChatGPT and its developer, OpenAI.
As for China, the Cyberspace Administration of China issued the “Administrative Measures for Generative Artificial Intelligence Services (Draft for Comment)” to solicit public opinions from April 11 to May 10, which shows that China is striving to strike a balance between AI security and innovation through legislation.
When it comes to regulating disruptive innovations, governments often lack sufficient information on the security, ethical and other risks in the early stages, and struggle to decide whether to intervene immediately or wait. Both extremes have their flaws: leaving the technology unregulated creates significant risks, while excessive regulation can stifle innovation. International experience shows that governments generally adopt informal or guiding policies in the early stages. For example, the US Federal Communications Commission adopted a laissez-faire attitude toward the early development of the internet.
However, AI is vastly different from previous technological innovations, especially given the rapid and widespread development and application of large models. Governments and the public have not had enough time to prepare for its potential risks, leaving them exposed to real and imminent harm.
More important, the unpredictability of the internet data used to train content generation systems, of the questions users pose to those systems, and of the output the systems generate makes the risks generative AI poses to society even harder to anticipate. The key concerns are falsehood, discrimination, privacy infringement, security, rights and ethics.
First, ChatGPT and other generative AI models can quickly produce large amounts of misleading information such as fake videos, images, voices and texts. Because people cannot easily tell whether this content is real, more and more false information is spread, giving rise to online fraud and cybercrime and having a huge impact on society.
Second, due to biased training data, large models may propagate violent, discriminatory, pornographic, drug-related and criminal content, and even provide suggestions on how to commit dangerous acts. This will have serious consequences for society, and could even destabilize it.
Third, the information users input when using ChatGPT may be used as training data for its further iterations, creating a risk of data leakage. There have already been cases of models and their services infringing on users' privacy, with companies such as Microsoft and Amazon warning employees not to share confidential information with ChatGPT.
Fourth, the rapid development and application of large-scale generative models like ChatGPT have raised concerns about their impact on existing intellectual property rights systems. Is ChatGPT a mere tool or a content producer? And can users be seen as participating in the creation process?
There have already been many cases of academic misconduct, plagiarism and IPR infringement. These questions require careful consideration, and regulations are needed to ensure the fair and proper use of generative AI and to protect IPR.
Given the above risks and their potential impacts, it is inevitable that generative AI, especially ChatGPT, will be brought under regulation. Security and trust are the core requirements of AI regulation, so large AI models and systems should be designed with ethics and safety in mind.
While the European Union has already decided to implement strict regulations to manage generative AI, the United States appears more inclined to support innovation. China's draft “Administrative Measures for Generative Artificial Intelligence Services (Draft for Comment)” reflects the concept of balancing safety and development, and provides detailed provisions on content legality, data compliance, information protection, and risk prevention and control.
But due to the complexity of and difficulty in tracing data sources, there are doubts about the enforceability of certain provisions, such as the requirement to obtain consent from data subjects before training models on their personal information. Some experts and industry insiders also say early and excessive regulation may increase industry costs and hinder the pace of innovation.
Therefore, regulations should appropriately accommodate generative AI technologies and products, and be continuously adjusted and finely calibrated to strike a balance between the development of AI and the need for safety and order.
The author is director of and a researcher with the Institute of Innovation and Development, Chinese Academy of Science and Technology for Development.
Draft aims to promote healthy development of artificial intelligence
By Kang Tianxiong
The Cyberspace Administration of China is seeking the public's views, from April 11 to May 10, on draft measures aimed at managing emerging foreign and domestic artificial intelligence (AI) services. The 21-article draft on managing generative AI services aims to clarify the definitions, regulations, supervision, legal basis and scope of AI application. The highlight of the draft is its focus on the responsible parties and their obligations.
The draft lists several responsibilities and obligations covering the research and development, training, output platforms and content control of AI products, allowing government departments to supervise and regulate the AI industry, though the industry is likely to bear heavy transaction and social costs as a result.
As a prospective piece of China's public law, the draft represents the views of the regulatory authorities. Its goal is to require AI service providers to collect data and generate content strictly in line with related laws, especially the Copyright Law, and to shoulder administrative responsibilities and bear civil or criminal liability if they fail to do so.
First, AI product developers and service providers cannot be identified as data creators during the “input” process, because they only engage in data collection, reading and training, so they should not bear the obligation to review the legality of the input data. Since such a review would rest on precautionary and intervention principles, and the outcomes of AI are inherently uncertain, it would negatively affect innovation in new texts, images, voices, videos and code powered by AI algorithms, models and rules.
Second, a provider of generative AI services plays the dual role of platform and content co-creator, which makes the legal nature of its data output hard to define. If the provider only copies, pastes and spreads original content, it can be treated as an internet service platform that, thanks to safe harbor provisions, does not bear the obligation to review the legality of that content.
Third, data output means AI service providers creating content independently or co-creating content with users. For example, in 2020 a court in Shenzhen, Guangdong province, ruled that a defendant had infringed copyright by disseminating an AI-written article without the plaintiff's authorization, and should therefore bear civil liability.
However, users of generative AI products have not received the attention they deserve. According to the Copyright Law, any two entities engaged in intentional collaboration aimed at combining their contributions into a unified work are considered joint copyright owners of that work.
Although a generative AI product creates a typical collaborative model involving interaction between the user and the AI, the draft does not take the user's behavior into consideration, probably because the user's data output can be regarded as reasonable use under the Copyright Law, which does not need to be supervised by the authorities.
To promote the AI industry and create a healthy business environment, the government may require such entities to bear civil liability only when they use copyrighted works for commercial purposes.
The draft emphasizes the obligations and responsibilities of AI service providers, such as how to handle complaints, provide data and deal with misdeeds. As for the regulatory authorities, they will be allowed to impose administrative punishments, including issuing warnings and criticism and levying penalties.
Although AI service providers should shoulder more obligations and social costs, because cyber and data security and personal information need to be safeguarded, the draft should also spell out how much responsibility these market entities need to bear and the scope of the administrative supervisors' powers. Such adjustments would be conducive to the development of the generative AI market, and the government appears to want to ensure AI market openness, since it mentions “boosting the healthy development of generative AI” at the very beginning of the draft.
We hope the final version of the draft will lower the legal and operating costs for the industry, and help create a healthy business environment and an open market through supervision.
The author is an associate professor at the Civil and Commercial Law School, Intellectual Property School at Southwest University of Political Science and Law.
The views don’t necessarily represent those of China Daily.
If you have specific expertise and would like to contribute to China Daily, please contact us at opinion@chinadaily.com.cn and comment@chinadaily.com.cn.