
Global AI Governance: China's Step Towards International Cooperation

by Diana Kersus, October 19th, 2023

Too Long; Didn't Read

Amidst divergent AI governance strategies, global powers navigate cooperation and competition, shaping a future of technological solidarity.

In an era of technological innovation, artificial intelligence (AI) has become a cornerstone of progress. However, as AI's capabilities grow, so does the need for its global governance. Chinese President Xi Jinping contributed to this area by introducing the Global AI Governance Initiative (GAIGI) at the third Belt and Road Forum for International Cooperation.


Promoting Responsible Development of AI

The central pillar of GAIGI is promoting the responsible development of AI alongside international cooperation. Through this initiative, China aims to create favorable conditions for developing AI technologies and, in doing so, to help address many global challenges.


People at the Center of Attention

GAIGI emphasizes the importance of a people-centered approach. The development and application of AI should aim to improve living conditions and respect the interests of different countries, ensuring the fair and inclusive use of AI.


Bridging the Gap in the AI Sphere

China considers it important to narrow the digital divide in AI between developed and developing countries. GAIGI aims to support developing countries in building AI capabilities, promoting global development and innovation.


Three-component Foundation: Development, Safety, and AI Governance

GAIGI is structured around three fundamental aspects: development, safety, and AI governance. Together, these components form a comprehensive framework for the responsible development of AI technologies.


Striving to Improve the Quality of Life

China emphasizes that AI should be used to stimulate human progress and improve the quality of life for people worldwide.


GAIGI is, in effect, a statement of how world powers can work together to ensure the responsible and effective use of AI on a global scale. The initiative underscores China's role as a global leader in the field of artificial intelligence, as well as its aspiration for international cooperation and common progress.

The G7 is working on its own AI governance guidelines


Although GAIGI is one of the largest multilateral AI initiatives, with more than 150 countries participating, its impact on AI regulation in the Group of Seven (G7) countries may not be immediately apparent, given the differences in regulatory approaches among states and regions.


Japan: As the host of the 2023 G7 summit, Japan took the initiative in discussing AI regulation. The Japanese approach aimed to maximize the positive impact of AI on society rather than suppress it over overestimated risks. This approach can offer important lessons for global AI regulation, including for the G7 countries [1].


Common Principles and Standards of the Group of Seven: G7 leaders called for the development and adoption of technical standards to ensure "trust" in AI and also discussed opportunities for harmonizing norms and terminology for AI regulation [2][3].


Risk-oriented Approach: G7 leaders agreed that AI regulation should be based on risk assessment, reflecting a broader trend in international AI regulation [4].


International Cooperation: The G7 emphasized the importance of international cooperation in protecting intellectual property rights and promoting transparency in the field of generative AI, highlighting the need for international technical standards to ensure compatibility of AI governance systems [5].


Despite their differences in regulatory approach, the G7 countries and GAIGI pursue similar goals in ensuring the responsible and effective use of AI. GAIGI may provide a platform for exchanging knowledge and practices with the G7, especially in advancing global standards and principles of AI governance.

Could this divide the world into two camps on AI governance?

A division of the world into two camps on AI governance could result from differences in regulatory approaches and fundamental principles among countries and regions.

Here are several potential consequences of such a division:


Competition and Incompatibility: If major players in the international arena, such as the G7 countries and GAIGI participants, adopt different approaches to AI regulation, this may lead to competition and incompatibility between technological systems and standards. This, in turn, may hinder international cooperation and knowledge exchange in the field of AI.


Hindrance to Global Cooperation: Differences in approaches to AI regulation can create obstacles to global cooperation and standardization in the AI field. This may impede the creation of universal initiatives and international standards.


Risks to Innovation: The division into two camps can be a barrier to innovation, as companies and researchers may face obstacles when attempting to cooperate or exchange technologies and data across borders.


Impact on International Relations: Differences in approaches to AI governance can strain international relations, intensifying tension and misunderstanding between countries with differing regulatory approaches.


Political and Economic Risks: The division may also entail political and economic risks, including the possibility of trade wars or sanctions against technology companies.


Despite these challenges, there is potential for creating international forums and initiatives aimed at bringing together different sides and developing common principles and standards for AI governance. This may include exchanging best practices, jointly developing technical standards, and cooperating in the field of research and development in AI.

Southeast Asia has made joint efforts to regulate AI

The Southeast Asian region has been making strides toward a collaborative approach to regulating AI. Here's a breakdown based on the information gathered:

1. ASEAN's Approach to AI Regulation

  • Southeast Asian countries, under the umbrella of the Association of Southeast Asian Nations (ASEAN), are crafting a business-friendly approach to AI regulation, which appears to be in contrast to the European Union's (EU) more stringent regulatory framework.
  • A draft titled "Guide to AI Ethics and Governance" has been circulated among technology companies for feedback and is expected to be finalized by the end of January 2024 during the ASEAN Digital Ministers Meeting. Companies like Meta, IBM, and Google have received this draft.
  • Unlike the EU's AI Act, the ASEAN "AI guide" emphasizes understanding countries' cultural differences and doesn't prescribe unacceptable risk categories. It is designed to be voluntary and aims to guide domestic regulations in the respective countries.

2. Promotion of Innovation and Business-Friendly Policies

  • The guide encourages governments to support companies through research and development funding. It also sets up an ASEAN digital ministers working group on AI implementation.
  • Technology executives in the region appreciate ASEAN's relatively hands-off approach, as it is seen to limit the compliance burden in a region where existing local laws are already complex. This approach is also perceived to allow for more innovation.

3. Alignment with Other Global AI Frameworks

The draft aligns closely with other leading AI frameworks, such as the United States’ NIST AI Risk Management Framework, according to Stephen Braim, IBM Asia's vice president of government affairs.

4. Periodic Review and Adjustments

The guide is meant to be periodically reviewed to ensure that it remains relevant and effective in guiding AI governance and ethics.

5. Risks and Guardrails

While it advises companies to put in place an AI risk assessment structure and AI governance training, the specifics are left to companies and local regulators. It also acknowledges the risks of AI being used for misinformation, "deepfakes," and impersonation but leaves it to individual countries to work out the best way to respond.

6. Regional Diversity and Autonomy

With almost 700 million people and over a thousand ethnic groups and cultures, Southeast Asian countries have widely divergent rules governing censorship, misinformation, public content, and hate speech that could potentially affect AI regulation. The autonomy granted to member states to make their own policy determinations puts them on a distinctly different track compared to the EU.

7. Engagement with Broader Global AI Regulatory Discussions

Although the EU has struggled to build a global consensus on AI regulation, it continues to hold talks with Southeast Asian states to align on broader principles, recognizing the value of shared principles even if full harmonization is not achieved.


This synthesis illustrates that Southeast Asia, through ASEAN, is embarking on a collective yet flexible approach to AI regulation that appreciates the region's cultural diversity and promotes a conducive environment for innovation while also engaging in broader global discussions on AI governance.

Conclusion

The landscape of global AI governance reveals a tableau of diverse regulatory approaches, each echoing the geopolitical, economic, and cultural contours of different regions. The divergence in AI governance strategies among major global players such as the G7, GAIGI, and ASEAN could lead to competition and incompatibility, impeding the flow of knowledge and technology across borders and fostering a climate of political and economic uncertainty.


Yet amidst this complex narrative lies the potential to forge international forums and initiatives that bring together diverse perspectives, develop common principles, and foster a spirit of global cooperation in AI governance. This quest for a collaborative ethos not only underscores the significance of international cooperation but also holds out the promise of a harmonized global framework for AI ethics, safety, and governance, steering the world toward a future where AI serves as a beacon of progress, innovation, and global solidarity.