One Technology, Three Regulatory Paths: How the EU, the US, and China Govern AI (2025–Early 2026) – Part II
This post builds on Part I, which examined the regulatory approaches to AI in the EU, the US, and China. Part II turns to the underlying drivers of these divergences and their broader implications.
I. Underlying Drivers of Regulatory Divergence
The distinct AI regulatory paths of the EU, the US, and China are not random. They are shaped by a set of core, interrelated drivers.
Value Priorities
The EU’s rights-centric framework is rooted in its commitment to fundamental human rights. Hogan and Lasek-Markey (2024) note that its rights-embedded design ties AI risk mitigation directly to protecting dignity, equality, and privacy. The US’s market-driven approach reflects its long-standing emphasis on private-sector innovation and limited government intervention. The March 2026 Framework’s explicit opposition to creating a new federal AI regulator epitomizes this orientation. This reluctance to intervene is further evidenced by Stanford’s AI Index Report (2025), which highlights the ongoing lack of comprehensive federal legislation, a regulatory posture that analysts interpret as a deliberate choice to preserve industry autonomy. China’s focus on balancing development and security aligns with its national modernization goals, with the core objective of maintaining social stability while advancing technological progress.
Tech Ecosystems
The EU’s Digital Omnibus reforms respond directly to an AI ecosystem structurally reliant on SMEs and startups, where overly rigid compliance obligations risk disproportionately harming global competitiveness. In the US, the escalating state-federal tension stems partly from Silicon Valley’s outsized influence. As Rangel and Hettinga (2024) note, tech lobbying has long constrained meaningful federal action and driven industry-friendly agendas at the national level. Building on this trajectory, the National AI Policy Framework reflects a continued, close alignment with major AI companies’ policy preferences. China’s approach fits its dynamic ecosystem, which mixes leading enterprises and startups. Xiao (2025) observes that incrementalism allows adaptation to rapid technological change without stifling innovation. At the same time, the enforcement actions against counterfeit AI services demonstrate a willingness to protect leading domestic AI companies like DeepSeek from market confusion.
Governance Institutions
The EU’s supranational structure enables unified rule-making across member states, with the AI Office providing a dedicated institutional anchor. However, the Digital Omnibus legislative process has revealed tensions between the Commission’s simplification ambitions and the Parliament’s commitment to maintaining robust protections. The US’s federalist system inherently creates fragmentation, and EO 14365’s assertion of federal preemption represents an attempt to resolve this tension through executive action, whose constitutional durability remains uncertain. China’s joint rulemaking helps align overarching policies, while the practical involvement of multiple departments in regular oversight can still introduce administrative friction, occasionally increasing coordination costs and compliance complexities for enterprises.
Overall, there is no “one-size-fits-all” AI governance playbook. Regulatory divergence reflects adaptation to each jurisdiction’s unique context. It does not preclude future convergence on core risk-mitigation principles, though it increasingly shapes a landscape in which regulatory models also serve as instruments of geopolitical competition.
II. Shared Characteristics, Global Trends, and Emerging Tensions
Despite divergent approaches, the three regulatory frameworks converge on several key trends, while also revealing new tensions that merit attention.
Risk-Based Regulation as a Foundation
All three jurisdictions have adopted risk stratification as a core governance tool, focusing regulatory resources on high-risk applications while avoiding overburdening low-risk innovations. The EU’s AIA provides the most granular implementation, with legally defined risk tiers that serve as a reference for global regulatory design. In the US, state and federal initiatives diverge in their focus but share a use-case-driven philosophy. States prominently target high-risk societal impacts, such as algorithmic discrimination in employment and housing. At the federal level, meanwhile, the White House explicitly endorses a risk-proportionate approach that addresses concrete harms, including AI-enabled fraud, non-consensual deepfakes, and child safety risks. China’s “small steps” approach similarly focuses on high-risk scenarios such as deepfakes, human-like AI, and generative content, with each regulatory instrument calibrated to the specific risks of its target application.
Transparency as a Core Safeguard
Transparency obligations are a common thread across all three jurisdictions, whether through the EU’s GPAI Code of Practice requirements for model documentation and training data summaries, the US’s emerging state-level transparency requirements, or China’s algorithmic filing system and dual-labeling regime. The convergence is particularly notable in content labeling. The EU’s forthcoming Article 50 transparency rules, China’s Measures for Labeling AI-Generated Synthetic Content, and various US state proposals all mandate disclosure when content is AI-generated. This reflects a shared recognition that transparency is a prerequisite for informed public engagement with AI systems, even as the scope and execution of these mandates vary significantly across jurisdictions.
From Idealism to Pragmatic Realism, and Its Limits
A striking shared trend is the shift toward pragmatic adjustment. The EU’s adaptive implementation timeline for the AIA, the US’s ongoing federal-state negotiations, and China’s scenario-specific oversight all reflect a move from theoretical frameworks toward practical, implementable rules. However, this pragmatic turn raises legitimate questions about regulatory capture and dilution. The EU’s extended timelines and reduced burdens for industry may be sensible responses to implementation bottlenecks, but they also carry the risk of leaving citizens less protected during a critical period of AI deployment. The US Framework’s pro-industry orientation, particularly on developer liability and copyright, reflects close alignment with tech sector preferences, raising concerns about the weakening of corporate accountability. Similarly, China’s approach, while effective at sustaining domestic innovation, may also create regulatory blind spots as novel AI applications emerge ahead of scenario-specific rules. Ultimately, all three jurisdictions face the same test: ensuring that being practical does not become an excuse for watering down essential protections.
Cross-Border Coordination amid Geopolitical Competition
International coordination efforts highlight distinct institutional preferences that reflect each jurisdiction’s strategic priorities. The US and the EU frequently align within advanced-economy forums, operationalizing the G7 Hiroshima AI Process through the OECD and expanding the global network of AI Safety Institutes. The EU also works through the Council of Europe framework, advancing the legally binding Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law in ways that reinforce the external reach of its rights-oriented approach. Meanwhile, China places a distinct emphasis on UN-led inclusive multilateralism, focusing on bridging the digital divide and ensuring technological equity. By advancing the “Group of Friends on AI Capacity Building” at the UN, China strategically aligns with the Global South, offering a narrative that prioritizes developmental needs alongside safety.
Yet beneath these cooperative frameworks, the three regulatory paths are increasingly functioning as instruments of geopolitical influence. The EU continues to leverage the “Brussels Effect” to export its regulatory standards, prompting multinational companies to adopt its baseline rules. Simultaneously, China is broadening its international engagement, not only through UN initiatives but also by promoting digital connectivity and technical cooperation with developing nations under the “Digital Silk Road” framework. The US’s March 2026 Framework frames regulatory competition as a strategic imperative for “ensuring American AI dominance,” positioning federal preemption and light-touch oversight as essential tools for maintaining American competitiveness. Concurrently, the US views both the EU’s perceived over-regulation and China’s state-led model as challenges to its technological leadership.
Conclusion
AI governance across the EU, the US, and China demonstrates that regulatory models are increasingly moving toward pragmatic adaptation. During 2025 and early 2026, each jurisdiction has refined its framework in distinct ways: the EU has focused on legislative simplification, the US has advanced its national policy framework amid internal debates, and China has continued to deploy scenario-specific rules. Despite these structural divergences, a shared reliance on risk-based governance and transparency unites these regimes. However, the pragmatic turn across all three jurisdictions warrants continued scrutiny. Policymakers should remain vigilant to prevent regulatory adaptation from becoming a euphemism for regulatory retreat or industry capture. Ultimately, balancing innovation with fundamental rights remains a fragile, moving target that requires constant recalibration to ensure that public interests are not compromised by rapid commercial deployment.
Looking ahead, the next phase of global AI governance faces the dual challenges of rapid technological acceleration and geopolitical fragmentation. Regulators must evolve their frameworks to address the emergence of agentic AI systems, resolve domestic and cross-border copyright discrepancies, and mitigate the escalating environmental footprint of AI computing. Furthermore, while AI regulation is increasingly intertwined with global strategic interests and geopolitical competition, policymakers recognize that cross-border cooperation remains an ongoing necessity. To prevent a fractured global ecosystem that could disrupt transnational innovation and trade, the international community is focusing its efforts on fostering regulatory interoperability. Such interoperability can be actively cultivated through the alignment of technical standards, shared conceptual frameworks, and coordinated compliance methodologies.
About the Author

Xiaotong Sun is a PhD researcher in AI governance and law at the University of Turku and a Marie Skłodowska-Curie Fellow under the EU Horizon Europe programme. Her research focuses on the legal and governance challenges of AI, particularly regarding human oversight. Prior to her doctoral studies, she gained practical experience working for three years as a policy researcher at a Beijing-based think tank, specializing in China’s AI regulatory landscape.
Heading image: AI-generated image (created with ChatGPT image generator, 2026).
