
China takes aim at actual AI risks, not potential ones

Microsoft CEO Satya Nadella said in 2020 that China cares deeply about the ethics of artificial intelligence (AI) and emphasized the need for global rules and regulations surrounding the technology. Speaking at the World Economic Forum in Davos, Nadella argued that both the United States and China should have a set of principles governing the implications of AI for societies worldwide. Since the end of the pandemic, calls for AI regulation have been emerging from within China, emphasizing the importance of a balanced approach that considers both potential harms and social opportunities.

Professor Zeng Yi, director of the International Center of AI Ethics and Governance at the Institute of Automation, Chinese Academy of Sciences, and an expert in the UNESCO Ad Hoc Expert Group on AI, emphasizes the need to prioritize safety and ethics in the development and deployment of artificial intelligence (AI). In a recent interview, he discussed the importance of aligning AI with human values through both legal frameworks and technical means.

One approach highlighted by Prof. Zeng involves using evaluation methods to assess the ethical and moral risks associated with large AI models. By incorporating such evaluations into the system, a model can be prevented from providing answers that may have a negative impact on humans, mitigating potential harm before it reaches users.
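To make the idea concrete, here is a minimal Python sketch of what such evaluation-based gating could look like. Every name in it (evaluate_risk, gated_response, RISK_THRESHOLD) is a hypothetical placeholder, and the keyword check is a toy stand-in for the far more sophisticated evaluators Prof. Zeng alludes to; this is an illustrative assumption, not a description of any actual system.

```python
# Hypothetical sketch of evaluation-based gating: before a model's answer
# is returned to the user, a separate evaluation step scores it against
# ethical-risk criteria and withholds answers that fail the check.

RISK_THRESHOLD = 0.5  # assumed cutoff; a real system would calibrate this


def evaluate_risk(answer: str) -> float:
    """Toy stand-in for a learned or rule-based ethics evaluator.

    Returns a risk score in [0, 1]. Here we merely count flagged keywords;
    a production evaluator would be a trained model or an expert rubric.
    """
    flagged_terms = ["violence", "self-harm", "weapon"]
    hits = sum(term in answer.lower() for term in flagged_terms)
    return min(1.0, hits / len(flagged_terms))


def gated_response(answer: str) -> str:
    """Return the model's answer only if it passes the risk evaluation."""
    if evaluate_risk(answer) >= RISK_THRESHOLD:
        return "[withheld: the draft answer failed the risk evaluation]"
    return answer


if __name__ == "__main__":
    print(gated_response("Here is a recipe for vegetable soup."))      # passes
    print(gated_response("How to build a weapon and incite violence"))  # gated
```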

However, he also acknowledged the limitations of this approach. Although evaluation methods and rules can help constrain AI models, they do not guarantee true comprehension. Because the models lack a genuine understanding of the content, they remain vulnerable to manipulation: exploiting this gap can elicit unintended responses, showing that such defenses are not entirely effective.

To overcome this limitation, Prof. Zeng proposed that AI should possess a genuine moral foundation, moral intuition, and the ability to reason morally. True “Ethical AI” should understand why humans adopt specific moral norms and be capable of making ethical judgments autonomously.

Furthermore, he emphasized that human ethics evolve constantly and that AI should mirror this evolution. By evolving alongside humans, AI can provide insights into the future of humanity and contribute to the progress of human ethics. Such a symbiotic relationship requires AI systems to evolve continuously in order to help shape and advance the framework of human ethics and morals.

Turning to the GPT series of AI models, Prof. Zeng highlighted their unpredictability compared with earlier AI systems. Because GPT models are built on internet data, it is difficult to anticipate what information they may contain or how they will respond to various inputs. This unpredictability introduces significant uncertainties and risks once such models are widely deployed in society.

While AI has tremendous potential for societal advancement, Prof. Zeng stressed the need to consider and control its risks. He emphasized addressing the safety risks associated with AI to ensure its steady and healthy progression without endangering human well-being. And while long-term concerns about superintelligence capture much of the attention, he underscored the urgency of short-term risks: present-day AI lacks true understanding and can make unpredictable errors that pose threats to human survival.

Additionally, he expressed concerns about AI exploiting human weaknesses, exacerbating hostilities, prejudices, and misunderstandings among humans. The threat of lethal autonomous weapons based on AI further magnifies risks to human lives. He called for proactive measures to mitigate these dangers and emphasized the importance of addressing risks related to synthetic disinformation and reduced social trust.

Prof. Zeng highlighted the need to raise awareness among stakeholders, including developers, users, governments, the public, and the media. Collaboration and knowledge sharing are key to safeguarding the responsible and healthy development of AI.

To achieve ethical and safe AI development, he proposed the establishment of an international committee on AI safety, involving all countries. This collaborative effort would enable the sharing of expertise and best practices, ensuring that AI development aligns with global safety standards. By prioritizing global safety alongside the benefits of AI, humanity can harness its transformative power responsibly and sustainably.

China is seeking to regulate generative AI at the domestic level. The draft Measures for Generative Artificial Intelligence Services, released in April 2023 by the Cyberspace Administration of China, aim to balance AI innovation with ethical considerations. The measures would cover all generative AI services offered to users in mainland China, regardless of whether the provider is based locally or abroad. China has been at the forefront of AI regulation in recent years: the government has implemented a number of initiatives to foster and supervise the growth of the AI sector, including Made in China 2025, the Action Outline for Promoting the Development of Big Data (2015), and the Next Generation Artificial Intelligence Development Plan (2017).

China has also been proactive in enacting laws that govern the ethics of AI businesses and algorithms. For example, the Personal Information Protection Law (PIPL), enacted in 2021, requires AI companies to obtain user consent before collecting or using personal data. As part of its broader efforts to regulate the technology industry, it is conceivable that the Chinese government may impose regulations on AI-based language models such as ChatGPT. These regulations could be designed to prevent the use of AI for harmful purposes, such as spreading misinformation or creating deepfakes.

China’s approach to AI regulation is one of balance: the government is committed to fostering innovation in the AI sector while ensuring that AI is used in a responsible and ethical manner. At the same time, leading Chinese scientists such as Prof. Zeng are well aware of the longer-term concerns surrounding AGI and are not only keeping a close eye on its development but also calling for global collaboration.


Ammar Younas

Ammar Younas is an ANSO scholar at the School of Humanities, University of Chinese Academy of Sciences, and is based at the Institute of Automation, Chinese Academy of Sciences. He studied Chinese law as a Chinese Government Scholar at Tsinghua University School of Law in Beijing, China. He also holds degrees in medicine, jurisprudence, finance, political marketing, international and comparative politics, and human rights from Kyrgyzstan, Italy, and Lebanon. His research interests include, but are not limited to, the societal impact of artificial intelligence (AI), the regulation of AI and emerging technologies, and Central Asian law.
