{"id":1425261,"date":"2025-09-03T10:40:07","date_gmt":"2025-09-03T08:40:07","guid":{"rendered":"https:\/\/www.ie.edu\/insights\/?post_type=articles&#038;p=1425261"},"modified":"2025-09-03T10:40:07","modified_gmt":"2025-09-03T08:40:07","slug":"regulating-ai-without-strangling-innovation","status":"publish","type":"articles","link":"https:\/\/www.ie.edu\/insights\/articles\/regulating-ai-without-strangling-innovation\/","title":{"rendered":"Regulating AI Without Strangling Innovation"},"featured_media":1425263,"template":"","meta":{"_has_post_settings":[]},"schools":[],"areas":[508],"subjects":[422],"class_list":["post-1425261","articles","type-articles","status-publish","has-post-thumbnail","hentry","areas-artificial-intelligence","subjects-innovation-and-technology"],"custom-fields":{"wpcf-article-leadin":["Global AI regulation is diverging, with the U.S. prioritizing innovation, the EU enforcing rights-based safeguards, and China accelerating strategic control, writes Adriana Hoyos."],"wpcf-article-body":["Artificial intelligence has moved from an emerging technology to a transformative general-purpose capability driving global economic and geopolitical competition. Governments are racing not only to develop and deploy advanced AI systems but also to regulate them to mitigate harm, while enabling innovation and incentivizing economic growth.\r\n\r\nIn 2025, the landscape of AI regulation has entered a new phase, marked by a divergence of models, shifting political dynamics, and accelerating technological convergence \u2013 with groundbreaking developments in quantum computing and robotics adding new layers of complexity.\r\n\r\nNowhere is this more evident than in the United States, where the return of President Trump to the White House has brought a significant realignment in federal AI policy. 
In contrast to previous approaches emphasizing precautionary principles, the new administration has championed a bold vision centered on accelerating technological leadership and boosting domestic innovation. Federal AI strategy now focuses on minimizing regulatory friction while enhancing America\u2019s global competitiveness in advanced technologies like AI, quantum computing, and robotics. The White House recently released its <a href=\"https:\/\/www.whitehouse.gov\/wp-content\/uploads\/2025\/07\/Americas-AI-Action-Plan.pdf\" target=\"_blank\" rel=\"noopener\">AI Action Plan<\/a> in a pivot to pro-innovation policy and deregulation.\r\n\r\nThe newly formed White House Office of Strategic AI Innovation, under the Commerce Department, leads national coordination across private and public sectors. Its mission is clear: maximize U.S. global AI leadership, foster a fertile ecosystem for technological advancement, and streamline federal involvement in AI governance. Regulatory agencies like the Federal Trade Commission and the National Institute of Standards and Technology have been directed to prioritize enforcement in clear-cut cases of harm \u2013 such as fraud or bias \u2013 while allowing room for responsible experimentation and deployment.\r\n\r\nFederal investment in AI research and development has surged, particularly in high-impact areas like health, defense, and energy. 
According to the Office of Management and Budget <a href=\"https:\/\/www.whitehouse.gov\/wp-content\/uploads\/2025\/02\/M-25-21-Accelerating-Federal-Use-of-AI-through-Innovation-Governance-and-Public-Trust.pdf\" target=\"_blank\" rel=\"noopener\">memorandum M-25-21<\/a>, issued in April 2025, all federal agencies are now required to designate Chief AI Officers, establish AI Governance Boards, and complete annual AI use case inventories.\r\n\r\nThe memorandum also mandates that agencies prioritize the secure use of AI to improve services and operational efficiency, as long as public trust, safety, and civil rights protections are maintained. The federal government currently lists more than 1,200 active AI use cases across departments, ranging from fraud detection at the Internal Revenue Service to <a href=\"https:\/\/department.va.gov\/ai\/ai-use-case-inventory\/\" target=\"_blank\" rel=\"noopener\">AI-enabled veterans services at the Veterans Affairs Department (VA)<\/a>.\r\n\r\nMost states, meanwhile, continue to pursue their own regulatory agendas. In 2025, over 500 AI-related bills were introduced across 42 states, addressing concerns around facial recognition, deepfakes, hiring algorithms, and biometric privacy. This growing patchwork of state laws has sparked renewed calls for federal preemption, as companies seek clarity and consistency to scale new technologies nationwide.\r\n\r\nWhile the U.S. promotes an innovation-first model, the European Union continues to build out its rights-based framework. The <a href=\"https:\/\/artificialintelligenceact.eu\/\" target=\"_blank\" rel=\"noopener\">EU AI Act<\/a> \u2013 enacted in 2024 and entering full effect by 2026 \u2013 classifies AI systems by risk level and applies stringent requirements for high-risk applications. 
In 2025, new technical standards and transparency obligations were introduced in Europe for foundation models and general-purpose AI, requiring detailed documentation of training data, performance metrics, and deployment contexts. Last month, the Commission published the <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/policies\/contents-code-gpai\" target=\"_blank\" rel=\"noopener\">GPAI Code of Practice<\/a> and, crucially, a mandatory template for public summaries of training content plus guidelines clarifying obligations ahead of the August 2025 application of GPAI rules. These documents shape expectations around transparency (copyright &amp; data sourcing), safety and security, and model documentation.\r\n\r\nAlthough the EU\u2019s approach prioritizes safety and human rights, concerns persist around compliance costs \u2013 particularly for startups and SMEs. To mitigate these effects, the European Innovation Council launched a series of regulatory sandboxes, allowing developers to experiment with reduced oversight. Still, some stakeholders worry that excessive front-loading of regulation could reduce the bloc\u2019s global influence in AI innovation.\r\n\r\nChina, by contrast, continues to ramp up investment in AI infrastructure and quantum technologies through major public funding initiatives and sovereign investment vehicles, while recording a significant increase in related patents.\r\n<blockquote>These varying regulatory models reflect deeper philosophical and institutional priorities.<\/blockquote>\r\nChina\u2019s Ministry of Science and Technology unveiled a $15 billion expansion of its National Quantum Initiative, aiming to achieve post-quantum encryption and AI integration by 2027. 
In this model, regulation functions not only as a protective mechanism but as a lever for national strategic advancement, enabling rapid deployment in critical sectors such as infrastructure, surveillance, and education.\r\n\r\nOn April 25, 2025, the State Administration for Market Regulation and the Standardization Administration of China jointly <a href=\"https:\/\/www.cambridge.org\/core\/journals\/cambridge-forum-on-ai-law-and-governance\/article\/navigating-chinas-regulatory-approach-to-generative-artificial-intelligence-and-large-language-models\/969B2055997BF42DE693B7A1A1B4E8BA\" target=\"_blank\" rel=\"noopener\">released three national standards aimed at enhancing the security and governance of generative AI<\/a>. These standards are scheduled to take effect on November 1 this year and represent a significant development in China\u2019s approach to regulating emerging AI technologies. In addition, China finalized the <a href=\"https:\/\/www.technologyslegaledge.com\/2025\/03\/china-released-new-measures-for-labelling-ai-generated-and-synthetic-content\/\" target=\"_blank\" rel=\"noopener\">Final Measures for Labelling AI-Generated (synthetic) content<\/a>, effective September 2025, complementing the generative-AI standards taking effect that November.\r\n\r\nThe first of these standards, titled <em>Cybersecurity Technology and Generative Artificial Intelligence Data Annotation Security Specification<\/em>, sets forth specific security requirements for the data labeling processes that are integral to training generative AI models. 
It emphasizes the need for secure handling and accurate annotation of training data to mitigate potential vulnerabilities that may arise during model development.\r\n\r\nThe second standard, <em>Cybersecurity Technology, Security Specification for Generative Artificial Intelligence Pre-training and Fine-tuning Data<\/em>, outlines both the necessary requirements and the evaluation criteria for ensuring the integrity and security of datasets used during the pre-training and fine-tuning stages of AI development. This is crucial for maintaining the reliability and trustworthiness of generative models, particularly as they are scaled for broader deployment.\r\n\r\nThe third standard, <em>Cybersecurity Technology, Basic Security Requirements for Generative Artificial Intelligence Service<\/em>, focuses on the operational phase of generative AI. It establishes comprehensive security measures for AI services, including protocols for user data protection, detailed assessments of data security practices, and mechanisms to safeguard the underlying training models and datasets from misuse or unauthorized access.\r\n\r\nTogether, these standards underscore China's intent to integrate stringent cybersecurity oversight into every stage of generative AI development and deployment, aligning technological advancement with national security and ethical governance objectives.\r\n\r\nThese varying regulatory models reflect deeper philosophical and institutional priorities. The U.S. now emphasizes technological dynamism, cross-sector collaboration, and adaptive oversight. Europe advances legal certainty and ethical safeguards. 
China integrates control and acceleration to serve state-centric goals.\r\n\r\nMeanwhile, South Korea\u2019s <a href=\"https:\/\/www.trade.gov\/market-intelligence\/south-korea-artificial-intelligence-ai-basic-act\" target=\"_blank\" rel=\"noopener\">AI Basic Act<\/a>, effective January 2026, is Asia\u2019s first comprehensive AI law \u2013 risk-based, with a high-impact AI focus and institutional build-out. Spain\u2019s national data protection authority, AEPD, clarified in July 2025 that it can already <a href=\"https:\/\/www.aepd.es\/prensa-y-comunicacion\/notas-de-prensa\/la-aepd-recuerda-que-ya-puede-actuar-ante-sistemas-de-ia?\" target=\"_blank\" rel=\"noopener\">act against prohibited AI systems processing personal data<\/a>, as the supervisory\/sanctioning regime for Article 5 took effect in August 2025.\r\n\r\nQuantum computing, once an abstract theoretical pursuit, now poses immediate regulatory challenges. The global momentum behind quantum science and technology continues to accelerate, with total investments worldwide now surpassing $55.7 billion. McKinsey projects that the global quantum technology market <a href=\"https:\/\/www.mckinsey.com\/capabilities\/mckinsey-digital\/our-insights\/the-year-of-quantum-from-concept-to-reality-in-2025\" target=\"_blank\" rel=\"noopener\">will expand significantly<\/a>.\r\n\r\nThe three main areas of quantum technology \u2013 quantum computing, quantum communication, and quantum sensing \u2013 are projected to collectively generate over $100 billion in global revenue by 2035. Quantum computing is expected to dominate this growth, expanding from $4 billion in revenue in 2024 to an estimated $72 billion by 2035. While quantum technology will have broad industry impact, the chemicals, life sciences, finance, and mobility sectors are anticipated to experience the most significant advancements and growth.\r\n\r\nThe U.S. 
National Institute of Standards and Technology <a href=\"https:\/\/www.nist.gov\/news-events\/news\/2025\/03\/nist-selects-hqc-fifth-algorithm-post-quantum-encryption\" target=\"_blank\" rel=\"noopener\">selected HQC as its fifth post-quantum cryptography algorithm<\/a> in March 2025, with federal agencies mandated to transition by 2027. These efforts underscore growing concerns about cybersecurity and digital resilience in a quantum era.\r\n\r\nInternationally, there is no unified framework governing quantum technology or its convergence with AI; however, cooperation is increasing. As quantum systems begin to intersect with AI in domains like materials science and advanced modeling, governance structures will need to evolve rapidly.\r\n\r\nThe U.S. and EU recently launched a joint initiative on quantum security standards, particularly post-quantum cryptography (PQC) for critical infrastructure, with the aim of transitioning to encryption methods resistant to attack by future quantum computers. Japan, Canada, and South Korea are forming similar alliances.\r\n\r\nRobotics, particularly when paired with AI, continues to reshape key industries, from <a href=\"https:\/\/www.weforum.org\/stories\/2025\/06\/robots-medical-industry-healthcare\/\" target=\"_blank\" rel=\"noopener\">agriculture to logistics and healthcare<\/a>. In 2025, autonomous machines are increasingly common in warehouses, hospitals, and farms. Yet regulatory frameworks remain fragmented. The EU has moved to incorporate robotics into its AI Act\u2019s high-risk categories, while the U.S. has prioritized voluntary standards and industry-led best practices.\r\n\r\nThe legal uncertainty surrounding robotics underscores the urgent need for liability and insurance frameworks designed specifically for intelligent autonomous systems. 
Looking forward, <a href=\"https:\/\/www.congress.gov\/crs_external_products\/R\/PDF\/R48555\/R48555.2.pdf\" target=\"_blank\" rel=\"noopener\">a more dynamic regulatory approach is taking shape in the United States<\/a>. This approach involves policies that evolve alongside technological advancements, integrating technical standards, risk assessments, and feedback mechanisms. It also includes the use of regulatory sandboxes, which enable controlled real-world testing of emerging AI systems to better understand their potential impacts before wider deployment.\r\n\r\nFurthermore, promoting international interoperability is crucial to ensure rules are compatible across jurisdictions, fostering cross-border innovation and governance. Collaboration through public-private partnerships is another important aspect, bringing together governments, industry, academia, and civil society to co-create regulations that strike a balance between safety and progress.\r\n\r\nThese strategies are especially vital in high-stakes sectors like defense, healthcare, and finance, where both risks and opportunities are substantial. Reflecting this trend, earlier this year, <a href=\"https:\/\/www.bis.org\/publ\/othp90.pdf\" target=\"_blank\" rel=\"noopener\">the Basel Committee on Banking Supervision introduced new principles for AI model governance<\/a> in financial institutions, focusing on auditability, explainability, and ethical use.\r\n\r\nGlobal collaboration remains crucial. The <a href=\"https:\/\/www.soumu.go.jp\/hiroshimaaiprocess\/en\/index.html\" target=\"_blank\" rel=\"noopener\">G7\u2019s Hiroshima AI Process<\/a> continues to provide a forum for dialogue on trustworthy AI, while the Global Partnership on AI (GPAI) has expanded its membership to include more nations from Africa, Asia, and Latin America. 
The focus is shifting toward shared principles, voluntary codes of conduct, and joint efforts to ensure alignment across systems.\r\n\r\nAs global regulatory models continue to evolve, clear philosophical differences emerge. Europe favors a rights-based regime with strict compliance obligations. China treats regulation as a tool for state-led acceleration. The United States, under the Trump Administration\u2019s AI Action Plan, has pursued a lighter-touch model that emphasizes accountability only in cases of clear harm. Though distinct, all three face a common challenge: avoiding regulatory overreach that stifles innovation and weakens long-term competitiveness.\r\n\r\nUltimately, the regulation of artificial intelligence \u2013 and adjacent technologies like quantum computing and robotics \u2013 will shape not only the pace of innovation but the character of future societies. The challenge is not choosing between regulation and progress but designing frameworks that deliver both. In this new era, regulation should not hold back progress. Rather, it should serve as a strategic accelerator of safe, responsible, and globally competitive innovation.\r\n\r\n&nbsp;\r\n\r\n\u00a9 IE Insights."],"wpcf-audio-article":["https:\/\/www.ie.edu\/insights\/wp-content\/uploads\/2025\/09\/Regulating_AI_Without_Strangling_Innovation_1756888525713.mp3"],"wpcf-article-extract":["Global AI regulation is diverging, with the U.S. 
prioritizing innovation, the EU enforcing rights-based safeguards, and China accelerating strategic control, writes Adriana Hoyos."],"wpcf-article-extract-enable":["1"]},"_links":{"self":[{"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/articles\/1425261","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/articles"}],"about":[{"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/types\/articles"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/media\/1425263"}],"wp:attachment":[{"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/media?parent=1425261"}],"wp:term":[{"taxonomy":"schools","embeddable":true,"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/schools?post=1425261"},{"taxonomy":"areas","embeddable":true,"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/areas?post=1425261"},{"taxonomy":"subjects","embeddable":true,"href":"https:\/\/www.ie.edu\/insights\/wp-json\/wp\/v2\/subjects?post=1425261"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}