One Technology, Three Regulatory Paths: How the EU, the US, and China Govern AI (2025–Early 2026) – Part I
As artificial intelligence (AI) reshapes economies, societies, and individual rights worldwide, the question of how to govern it has moved to the centre of policy agendas. Different jurisdictions face different technological realities, political traditions, and societal expectations, and no universal model has emerged. Instead, three distinct regulatory paths have taken shape. This post examines those paths by focusing on the European Union (EU), the United States (US), and China. All three share the goal of balancing AI’s innovation potential with its risks, but their regulatory philosophies, institutional designs, and priority areas diverge in ways that reflect deep differences in political values, economic structures, and societal priorities.
Anu Bradford (2024) offers a useful starting point, framing the US as a market-driven regime, the EU as a rights-oriented regime, and China as a state-led regime. Wenlong Li (2026) argues that these typologies, while valuable, do not fully capture the internal complexity and dynamic evolution of each regime, highlighting the tension between competing domestic stakeholders, the gap between formal texts and on-the-ground enforcement, and the reciprocal interplay between regulatory design and industrial development. The period from 2025 through early 2026 illustrates both insights: each regime has continued along the broad lines Bradford identifies, while exhibiting the adaptive dynamics Li emphasizes.
I. Three Distinct Regulatory Paths
1. The EU: Rights-Centered Fundamentals with Adaptive Adjustments
The EU’s commitment to fundamental rights remains firmly anchored in the Charter of Fundamental Rights of the European Union (CFREU) and the Artificial Intelligence Act (AIA). What has evolved dramatically during 2025 and into early 2026 is the EU’s approach to operationalizing these rights, marked by implementation milestones, institutional capacity-building, and a significant simplification initiative.
Phased Implementation and the GPAI Code of Practice. The AIA’s phased rollout has begun to give concrete shape to the EU’s regulatory ambitions. Prohibited AI practices became applicable on February 2, 2025, and governance rules and obligations for general-purpose AI (GPAI) models followed on August 2, 2025. The final GPAI Code of Practice was published on July 10, 2025, developed through a multistakeholder process involving nearly 1,000 participants and covering transparency, copyright, and safety and security. The Code provides a structured pathway for providers of the most advanced models to demonstrate compliance, with enforcement beginning on August 2, 2026.
The Digital Omnibus and Simplification. In November 2025, the Commission launched the Digital Omnibus reform package, proposing targeted amendments to the AIA alongside broader adjustments to the GDPR, NIS2 Directive, and other digital rules, projected to cut administrative costs for EU businesses by up to €5 billion by 2029. Most notably, the Omnibus proposes deferring the application of high-risk AI system requirements, originally set for August 2, 2026, to December 2, 2027 for Annex III systems and to August 2, 2028 for Annex I product-embedded systems, responding to the reality that CEN-CENELEC’s harmonised standards are unlikely to be completed before late 2026.
The legislative process has moved rapidly. The Council adopted its general approach on March 13, 2026, and the Parliament approved its amendments in plenary on March 26, 2026, opening trilogue negotiations. Importantly, both co-legislators pushed back on some simplifications, reinstating certain obligations and adding a new prohibition targeting AI-generated non-consensual intimate imagery. This illustrates that even as the framework adapts, the co-legislators guard against erosion of fundamental rights protections.
Harmonised Standards. The development of harmonised standards is a critical dimension of the AIA’s implementation. CEN-CENELEC working groups are developing standards in ten key areas, with prEN 18286 on quality management entering public enquiry in October 2025. Yet the process has faced critique for opacity, potential capture by well-resourced technology companies, and questions about whether it is fit for purpose in fast-moving digital sectors. The Commission retains the option under Article 41 of the AIA to adopt “Common Specifications” as interim guidance should standards development face significant delays.
2. The US: Market-Driven Core with Escalating Federal–State Tension
Bradford’s (2024) classification of the US as a market-driven AI regulatory regime remains valid at the federal level. No comprehensive AI legislation has been enacted. However, the period from late 2025 through early 2026 has seen a significant escalation of the federal–state regulatory dynamic.
Executive Order 14365 and the Push for Federal Preemption. In December 2025, President Trump signed Executive Order 14365, directing the Secretary of Commerce to identify “onerous” state AI laws and the Attorney General to establish an AI Litigation Task Force to challenge them. The order came after the Senate had voted 99–1 to strip a proposed ten-year moratorium on state AI regulations from the Republican budget reconciliation bill, demonstrating strong bipartisan resistance. Senator Markey countered with the States’ Right to Regulate AI Act, highlighting a fundamental tension in the US federal system.
National Policy Framework for Artificial Intelligence. On March 20, 2026, the White House released the National Policy Framework for Artificial Intelligence, outlining seven policy pillars and calling on Congress to legislate. The Framework places federal preemption at its centre: states would retain authority over consumer protection and procurement, but would not be permitted to regulate AI model development or to impose liability on developers for third-party misuse. This represents a significant encroachment, given that nearly 40 states pursued AI-related legislation in 2025. The Framework also discourages the creation of any new federal AI regulatory body and, on copyright, asserts that training AI models on copyrighted material does not violate copyright law. That position leaves fundamental questions unresolved: more than 70 infringement lawsuits had been filed by the end of 2025.
The “New California Effect.” Despite federal preemption efforts, state-level innovation continues. California’s regulatory influence, which Li (2026) terms the “New California Effect,” stems from the state’s pioneering role in AI and data governance, exemplified most notably by the CCPA and CPRA, alongside a growing body of targeted AI legislation, such as SB 53 on risk-mitigation duties for large AI developers, AB 2602 on AI-generated digital replicas, and rules applying antidiscrimination law to AI-assisted employment decisions. Whether this experimentation survives federal preemption remains a consequential open question, especially given Democrats’ concerns about accountability and the thin Republican majorities in a deeply divided Congress.
3. China: State-Led Coordination with Multilayered Adaptive Governance
China’s AI regulatory trajectory follows a pragmatic, incremental approach, characterized by “small steps and targeted cuts.” From 2025 into early 2026, this approach has matured considerably through legislative upgrades, evolving enforcement, and standardization.
Legislative Foundation: The Cybersecurity Law Amendment. The October 2025 amendment to the Cybersecurity Law, effective January 1, 2026, brings AI into national law for the first time, affirming state support for AI R&D while mandating AI ethical guidelines and strengthened risk assessment.
Incremental Regulatory Expansion through Multi-Departmental Coordination. China leverages a collaborative approach led by the Cyberspace Administration of China (CAC). The core foundation was established between 2022 and 2023 with the Algorithm Recommendation Provisions, the Deep Synthesis Provisions, and the Interim Measures for Generative AI Services (jointly formulated by seven departments). This inter-agency synergy has continued into 2025–2026 with several targeted rules. The Measures for the Labeling of AI-Generated Synthetic Content (September 2025), jointly issued by four departments (the CAC, MIIT, MPS, and NRTA), establish a dual-labeling system: explicit labels delivered as clear text-, sound-, or image-based prompts, alongside implicit, machine-readable labels, together supporting transparency and traceability. The draft Provisional Measures on Human-like Interactive AI Services (December 2025) address emerging ethical and safety risks associated with AI companions and virtual idols, establishing strict requirements on emotional manipulation, the protection of minors, and independent user consent for the use of interaction data. The Trial Guideline on the Ethics Review and Service of AI (April 2026), jointly issued by ten departments, requires AI-related entities to establish internal ethics review committees and links ethical evaluation to algorithm filing obligations. High-risk AI activities, such as systems capable of influencing public opinion, affecting human emotions, or making autonomous decisions in safety-critical areas, are subject to mandatory expert-level review organized by government departments.
Enforcement and Standardization. AI-related enforcement has intensified meaningfully during this period. The CAC’s “Qinglang” (Clean Cyberspace) campaigns have expanded from algorithmic transparency to cover AI misuse, deep synthesis, and data security. Local regulators have also taken action against unfiled AI services. In February 2026, the State Administration for Market Regulation (SAMR) released five typical cases of unfair competition in the AI sector, including enforcement against entities that created counterfeit DeepSeek websites and mini-programs. These cases show that traditional unfair competition concerns are being actively transposed into the AI domain. The 2025 revision of the Anti-Unfair Competition Law further reinforces this direction, explicitly prohibiting the use of data, algorithms, or platform rules for unfair competitive practices and introducing extraterritorial jurisdiction provisions. On standardization, China had formulated over 50 standards for the AI sector by early 2026. MIIT established a dedicated AI Standardization Technical Committee, facilitating synergy between national safety baselines and agile industry standards, including a recent standard for embodied AI that provides timely governance guidance ahead of formal legislation.
Part II of this post will examine the underlying drivers of this regulatory divergence, identify shared characteristics and emerging tensions across the three jurisdictions, and consider the prospects for cross-border coordination amid intensifying geopolitical competition.
About the Author

Xiaotong Sun is a PhD researcher in AI governance and law at the University of Turku and a Marie Skłodowska-Curie Fellow under the EU Horizon Europe programme. Her research focuses on the legal and governance challenges of AI, particularly regarding human oversight. Prior to her doctoral studies, she gained practical experience working for three years as a policy researcher at a Beijing-based think tank, specializing in China’s AI regulatory landscape.
Heading image: AI-generated image (created with ChatGPT image generator, 2026).
