Regulating AI Without Strangling Innovation

Global AI regulation is diverging, with the U.S. prioritizing innovation, the EU enforcing rights-based safeguards, and China accelerating strategic control, writes Adriana Hoyos.

Artificial intelligence has moved from an emerging technology to a transformative general-purpose capability driving global economic and geopolitical competition. Governments are racing not only to develop and deploy advanced AI systems but also to regulate them to mitigate harm, while enabling innovation and incentivizing economic growth.

In 2025, the landscape of AI regulation has entered a new phase, marked by a divergence of models, shifting political dynamics, and accelerating technological convergence – with groundbreaking developments in quantum computing and robotics adding new layers of complexity.

Nowhere is this more evident than in the United States, where the return of President Trump to the White House has brought a significant realignment in federal AI policy. In contrast to previous approaches emphasizing precautionary principles, the new administration has championed a bold vision centered on accelerating technological leadership and boosting domestic innovation. Federal AI strategy now focuses on minimizing regulatory friction while enhancing America’s global competitiveness in advanced technologies like AI, quantum computing, and robotics. The White House recently released its AI Action Plan, marking a pivot toward pro-innovation policy and deregulation.

The newly formed Office of Strategic AI Innovation, housed within the Commerce Department, leads national coordination across the public and private sectors. Its mission is clear: maximize U.S. global AI leadership, foster a fertile ecosystem for technological advancement, and streamline federal involvement in AI governance. Regulatory agencies like the Federal Trade Commission and the National Institute of Standards and Technology have been directed to prioritize enforcement in clear-cut cases of harm, such as fraud or bias, while allowing room for responsible experimentation and deployment.

Federal investment in AI research and development has surged, particularly in high-impact areas like health, defense, and energy. According to the Office of Management and Budget memorandum M-25-21 issued in April 2025, all federal agencies are now required to designate Chief AI Officers, establish AI Governance Boards, and complete annual AI use case inventories.

The memorandum also mandates that agencies prioritize the secure use of AI to improve services and operational efficiency, provided public trust, safety, and civil rights protections are maintained. The federal government currently lists more than 1,200 active AI use cases across departments, ranging from fraud detection at the Internal Revenue Service to AI-enabled veterans’ services at the Department of Veterans Affairs (VA).

Most states, meanwhile, continue to pursue their own regulatory agendas. In 2025, over 500 AI-related bills were introduced across 42 states, addressing concerns around facial recognition, deepfakes, hiring algorithms, and biometric privacy. This growing patchwork of state laws has sparked renewed calls for federal preemption, as companies seek clarity and consistency to scale new technologies nationwide.

While the U.S. promotes an innovation-first model, the European Union continues to build out its rights-based framework. The EU AI Act, enacted in 2024 and entering full effect by 2026, classifies AI systems by risk level and applies stringent requirements to high-risk applications. In 2025, new technical standards and transparency obligations were introduced in Europe for foundation models and general-purpose AI, requiring detailed documentation of training data, performance metrics, and deployment contexts. Last month, the Commission published the GPAI Code of Practice and, crucially, a mandatory template for public summaries of training content, plus guidelines clarifying obligations ahead of the August 2025 application of the GPAI rules. These documents shape expectations on transparency (copyright and data sourcing), safety and security, and model documentation.

Although the EU’s approach prioritizes safety and human rights, concerns persist around compliance costs – particularly for startups and SMEs. To mitigate these effects, the European Innovation Council launched a series of regulatory sandboxes, allowing developers to experiment with reduced oversight. Still, some stakeholders worry that excessive front-loading of regulation could reduce the bloc’s global influence in AI innovation.

On the other hand, China continues to ramp up investment in AI infrastructure and quantum technologies through major public funding initiatives and sovereign investment vehicles, while recording a sharp rise in related patent filings.

China’s Ministry of Science and Technology unveiled a $15 billion expansion of its National Quantum Initiative, aiming to achieve post-quantum encryption and AI integration by 2027. In this model, regulation functions not only as a protective mechanism but as a lever for national strategic advancement, enabling rapid deployment in critical sectors such as infrastructure, surveillance, and education.

On April 25, 2025, the State Administration for Market Regulation and the Standardization Administration of China jointly released three national standards aimed at enhancing the security and governance of generative AI. Scheduled to take effect on November 1, 2025, they represent a significant development in China’s approach to regulating emerging AI technologies. In addition, China finalized its Measures for Labelling AI-Generated Synthetic Content, effective September 2025.

The first of these standards, Cybersecurity Technology – Security Specification for Generative Artificial Intelligence Data Annotation, sets forth specific security requirements for the data labeling processes that are integral to training generative AI models. It emphasizes the need for secure handling and accurate annotation of training data to mitigate potential vulnerabilities that may arise during model development.

The second standard, Cybersecurity Technology – Security Specification for Generative Artificial Intelligence Pre-training and Fine-tuning Data, outlines the requirements and evaluation criteria for ensuring the integrity and security of datasets used during the pre-training and fine-tuning stages of AI development. This is crucial for maintaining the reliability and trustworthiness of generative models, particularly as they are scaled for broader deployment.

The third standard, Cybersecurity Technology – Basic Security Requirements for Generative Artificial Intelligence Service, focuses on the operational phase of generative AI. It establishes comprehensive security measures for AI services, including protocols for user data protection, detailed assessments of data security practices, and mechanisms to safeguard the underlying training models and datasets from misuse or unauthorized access.

Together, these standards underscore China’s intent to integrate stringent cybersecurity oversight into every stage of generative AI development and deployment, aligning technological advancement with national security and ethical governance objectives.

These varying regulatory models reflect deeper philosophical and institutional priorities. The U.S. now emphasizes technological dynamism, cross-sector collaboration, and adaptive oversight. Europe advances legal certainty and ethical safeguards. China integrates control and acceleration to serve state-centric goals.

Meanwhile, South Korea’s AI Basic Act, effective January 2026, is Asia’s first comprehensive AI law: risk-based, focused on high-impact AI, and backed by new institutional structures. Spain’s national data protection authority, the AEPD, clarified in July 2025 that it can already act against prohibited AI systems that process personal data, as the supervisory and sanctioning regime for the Act’s Article 5 prohibitions took effect in August 2025.

Quantum computing, once an abstract theoretical pursuit, now poses immediate regulatory challenges. Global momentum behind quantum science and technology continues to accelerate, with total investments worldwide now surpassing $55.7 billion, and McKinsey projects significant expansion of the global quantum technology market.

The three main areas of quantum technology (quantum computing, quantum communication, and quantum sensing) are projected to collectively generate over $100 billion in global revenue by 2035. Quantum computing is expected to dominate this growth, expanding from $4 billion in revenue in 2024 to an estimated $72 billion by 2035, an implied compound annual growth rate of roughly 30 percent. While quantum technology will have broad industry impact, the chemicals, life sciences, finance, and mobility sectors are anticipated to see the most significant advances.

The U.S. National Institute of Standards and Technology finalized its first post-quantum cryptography standards in 2024 and selected a fourth algorithm for standardization in March 2025, with federal agencies now directed to plan their transition to quantum-resistant encryption. These efforts underscore growing concerns about cybersecurity and digital resilience in a quantum era.
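
To make that transition concrete, the sketch below shows what post-quantum key establishment looks like in practice. It uses ML-KEM, the lattice-based key-encapsulation mechanism NIST standardized as FIPS 203, via the Open Quantum Safe project’s liboqs-python bindings; the library choice, the ML-KEM-768 parameter set, and the variable names are illustrative assumptions, not a reference to any mandated federal toolchain.

```python
# A minimal sketch of post-quantum key establishment with ML-KEM (FIPS 203),
# via the Open Quantum Safe project's liboqs-python bindings.
# The library, parameter set, and surrounding setup are illustrative assumptions.
import oqs

ALG = "ML-KEM-768"  # mid-level parameter set of the standardized scheme

with oqs.KeyEncapsulation(ALG) as receiver:
    # Receiver generates a key pair and publishes the public key.
    public_key = receiver.generate_keypair()

    with oqs.KeyEncapsulation(ALG) as sender:
        # Sender derives a shared secret plus a ciphertext that only
        # the receiver's secret key can open.
        ciphertext, sender_secret = sender.encap_secret(public_key)

    # Receiver recovers the same shared secret from the ciphertext.
    receiver_secret = receiver.decap_secret(ciphertext)

assert sender_secret == receiver_secret  # both sides now hold a symmetric key
```

The encapsulate-then-decapsulate pattern mirrors the key exchanges underpinning today’s secure connections, which is why migration is largely a matter of swapping algorithms rather than redesigning whole protocols.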

Internationally, there is no unified framework governing quantum technology or its convergence with AI; however, cooperation is increasing. As quantum systems begin to intersect with AI in domains like materials science and advanced modeling, governance structures will need to evolve rapidly.

The U.S. and EU recently launched a joint initiative on quantum security standards, particularly post-quantum cryptography (PQC) for critical infrastructure, aiming to transition to encryption methods that can resist attacks from future quantum computers. Japan, Canada, and South Korea are forming similar alliances.

Robotics, particularly when paired with AI, continues to reshape key industries, from agriculture to logistics and healthcare. In 2025, autonomous machines are increasingly common in warehouses, hospitals, and farms. Yet regulatory frameworks remain fragmented. The EU has moved to incorporate robotics into its AI Act’s high-risk categories, while the U.S. has prioritized voluntary standards and industry-led best practices.

This legal uncertainty underscores the urgent need for liability and insurance frameworks designed specifically for intelligent autonomous systems. Looking forward, a more dynamic regulatory approach is taking shape in the United States: policies that evolve alongside technological advancements, integrating technical standards, risk assessments, and feedback mechanisms. It also includes regulatory sandboxes, which enable controlled real-world testing of emerging AI systems to better understand their potential impacts before wider deployment.

Furthermore, promoting international interoperability is crucial to ensure rules are compatible across jurisdictions, fostering cross-border innovation and governance. Collaboration through public-private partnerships is another important aspect, bringing together governments, industry, academia, and civil society to co-create regulations that strike a balance between safety and progress.

These strategies are especially vital in high-stakes sectors like defense, healthcare, and finance, where both risks and opportunities are substantial. Reflecting this trend, earlier this year, the Basel Committee on Banking Supervision introduced new principles for AI model governance in financial institutions, focusing on auditability, explainability, and ethical use.

Global collaboration remains crucial. The G7’s Hiroshima AI Process continues to provide a forum for dialogue on trustworthy AI, while the Global Partnership on AI (GPAI) has expanded its membership to include more nations from Africa, Asia, and Latin America. The focus is shifting toward shared principles, voluntary codes of conduct, and joint efforts to ensure alignment across systems.

As global regulatory models continue to evolve, clear philosophical differences emerge. Europe favors a rights-based regime with strict compliance obligations. China treats regulation as a tool for state-led acceleration. The United States, under the Trump Administration’s AI Action Plan, has pursued a lighter-touch model that emphasizes accountability only in cases of clear harm. Though distinct, all three face a common challenge: avoiding regulatory overreach that stifles innovation and weakens long-term competitiveness.

Ultimately, the regulation of artificial intelligence – and adjacent technologies like quantum computing and robotics – will shape not only the pace of innovation but the character of future societies. The challenge is not choosing between regulation and progress but designing frameworks that deliver both. In this new era, regulation should not hold back progress. Rather, it should serve as a strategic accelerator of safe, responsible, and globally competitive innovation.

© IE Insights.