"What is the Global Landscape of AI Regulation? Between new laws & revoked orders, the landscape of #AIRegulation is shifting quickly. Last week, as the US House passed a bill potentially banning all state AI laws for the next decade, there is an urgent need to clarify what "AI regulation" actually means & develop analytical tools that resist political shifts. We are excited to share that our paper, a joint collaboration between Stanford University and Harvard University researchers, introduces a taxonomy to capture the global landscape of AI regulation. With co-authors Shira Gur-Arieh, Tom Zick, PhD. & Kevin Klyman, we analyze emerging AI regulatory frameworks across five early movers–the EU, US, China, Canada, and Brazil– to identify patterns, divergences & blind spots. The taxonomy illustrates the breadth & depth of AI regulatory approaches by analyzing key metrics, including technology or application-focused rules, ex ante precautions or ex post liabilities, horizontal or sectoral regulatory coverage, maturity of the digital legal landscape, enforcement mechanisms & level of stakeholder participation. To democratize our findings, we collaborated with designers Vikramaditya Sharma, Steven Morse & Tanil Raif to translate dense legal texts into accessible outputs. Key takeaways: 1️⃣ We must clarify the term "AI regulation." The term is used ambiguously to describe both binding legal frameworks & voluntary industry guidelines. Lines are often strategically blurred between hard law (AI regulation) & soft law (AI policy). Such semantic ambiguity can mislead public expectations, create a false sense of protection & open the door to regulatory capture. 2️⃣ Innovation vs. regulation is a false dichotomy. China's experience shows it is possible to enforce mandatory safeguards while continuing to develop cutting-edge models like DeepSeek. 
While the intentions behind Chinese AI regulation differ from Western ones, for example to control political dissent, the coexistence of strict regulation & rapid innovation proves that the two are not mutually exclusive. Countries can lead the AI arms race while having legally-binding safety requirements. 3️⃣ Under the same umbrella term, not all AI regulations are equal. Some frameworks are more comprehensive than others. Hybrid AI regulations–combining both ex ante & ex post mechanisms and technology & application based rules–address societal harms and national security risks, while imposing obligations before and after deployment. 4️⃣ Civic engagement remains a blind spot. There is little data on whether civic consultations translate into meaningful, legal outcomes—or are merely performative." Good work from Sacha Alanoca (who wrote the above summary) and Berkman Klein Center for Internet & Society at Harvard University
Recent Global Developments in AI Policy
Summary
Recent global developments in AI policy refer to the evolving rules, frameworks, and international agreements that governments are creating to manage artificial intelligence technologies. These policies aim to balance innovation, safety, and fairness as AI becomes more influential across economies, societies, and industries.
- Track shifting regulations: Stay updated on new laws, enforcement tactics, and geopolitical responses that affect AI compliance and business strategies worldwide.
- Build operational readiness: Prepare for requirements like transparency, risk mitigation, and data disclosure by establishing clear processes and documentation within your organization.
- Prioritize civic participation: Encourage public and stakeholder input in AI policy discussions to help shape rules that address both societal and industry needs.
-
The Financial Times has reported that Brussels is preparing a tougher 2026 enforcement push under the Digital Markets Act and Digital Services Act, with Google, Meta, Apple and X squarely in view. It also reported that the Trump administration is threatening retaliation, including tariffs and visa bans, over what it frames as European “censorship”.

The DMA and DSA were built to curb platform dominance and to force accountability for illegal content and systemic risks. But enforcement now overlaps with AI in practice: recommendation systems, generative content, manipulative ad targeting, and algorithmic amplification. In fact, the AI Office in Europe is meant to take over AI-related DSA enforcement under the proposed Digital Omnibus. If Brussels follows through, the effect will be to push global platforms towards EU-style governance controls even outside Europe.

Washington’s response is the counter-model. Rather than argue over the substance of European laws, the Trump administration is threatening economic and diplomatic costs for applying them. The result is a new reality for boards and general counsel - AI compliance is now inseparable from geopolitical exposure. You may comply perfectly and still be caught in retaliation politics.

While Europe and the US trade blows, China is quietly opening a different front. Beijing has released draft AI safety rules aimed at curbing suicide, self-harm and violence content, but with a telling addition: restrictions on “emotional manipulation”, including so-called “emotional traps” and false promises to users. The regulatory idea here is psychological safety by design. China is treating emotionally persuasive AI as a consumer harm category, akin to gambling or online addiction. That framing will not stay in China - Western regulators can reach similar outcomes through product safety, consumer law, youth protection and liability doctrines without passing a single “AI companion statute”.

India is building another path.
The Economic Times reported that the central government has asked states to submit proposals for AI Centres of Excellence under the IndiaAI Mission, explicitly aimed at strengthening AI capability and deployment. In Rajasthan, officials will unveil an AI-ML Policy 2026 next week, backed by a dedicated AI data centre in Jaipur. This is governance through capacity, procurement and infrastructure, not headline regulation.

Three conclusions follow. First, the global AI rulebook is fragmenting into enforcement-first Europe, control-and-safety China, and capacity-and-deployment India. Second, AI regulation is increasingly a trade and foreign policy instrument, not merely a domestic consumer protection issue. Third, the next wave of obligations will be operational: disclosure, intervention protocols, logging, and systemic-risk mitigation that regulators can measure.
-
So much happens so quickly in #AIgovernance that I’ve decided to launch a Month in Review. This will only spotlight the key developments that should be on your radar. With that, here’s my Top 10 for January:

▶️ The first International AI Safety Report was published. It synthesizes the state of scientific understanding of general-purpose AI, with a focus on managing its current and emerging risks. It’s a must-read filled with technical rigor, balanced policy perspectives, and tangible recommendations. 🔗 https://lnkd.in/e7vupCba
▶️ President Trump started the US down a new path by revoking the foundational 2023 executive order and directing his administration to develop an AI action plan within 180 days. The National AI Advisory Committee promptly provided a 10-point framework. 🔗 https://lnkd.in/ehzErwiK (EO) 🔗 https://lnkd.in/exNjVb5y (NAIAC)
▶️ The US Copyright Office released a report on the copyrightability of AI-generated works, with nine conclusions or recommendations (and significant supporting research). 🔗 https://lnkd.in/eJhzRNfV
▶️ DeepSeek launched R1, captured attention, created confusion, and sparked concerns. And the global gyrations (and governance implications) are just beginning. 🔗 https://lnkd.in/eHNGQqtM
▶️ The EU AI Office unveiled a draft template that would require GPAI model providers to disclose a “sufficiently detailed summary” of the data used to train their models, including sources. 🔗 https://lnkd.in/e3rz8Zpi
▶️ California's Attorney General issued AI advisories informing consumers of their rights and companies of their obligations under existing law. This theme continues to resonate around the world, with many other regulators offering similar reminders. 🔗 https://lnkd.in/eFyazZDq
▶️ The US FTC finalized a settlement with IntelliVision over claims related to its facial recognition software. While not expressly tied to Operation AI Comply, the case serves as another example of how existing laws apply to AI and how regulatory enforcement will likely progress. 🔗 https://lnkd.in/efV3T5u6
▶️ The Netherlands updated its AI impact assessment template, offering a new glimpse into the EU AI Act requirement. 🔗 https://lnkd.in/eURuYdKK
▶️ The US FDA proposed guidelines for AI-enabled medical devices and drug development. While not yet finalized, they signal support for innovation so long as rigorous scientific and regulatory standards are satisfied. 🔗 https://lnkd.in/e9eNVrXB (devices) 🔗 https://lnkd.in/epN64-6q (drugs)
▶️ The World Economic Forum released an “Industries in the Intelligent Age” series, with detailed snapshots of AI’s applications and best practices across seven sectors. 🔗 https://lnkd.in/evRFN7ZB
-
I'm thrilled to announce the release of my latest article published by The Brookings Institution, co-authored with Sabrina Küspert, titled "Regulating General-Purpose AI: Areas of Convergence and Divergence across the EU and the US."

🔍 Key Highlights:

EU's Proactive Approach to AI Regulation:
- The EU AI Act introduces binding rules specifically for general-purpose AI models.
- The creation of the European AI Office ensures centralized oversight and enforcement, aiming for transparency and systemic risk management across AI applications.
- This comprehensive framework underscores the EU's commitment to fostering innovation while safeguarding public interests.

US Executive Order 14110: A Paradigm Shift in AI Policy:
- The Executive Order marks the most extensive AI governance strategy in the US, focusing on the safe, secure, and trustworthy development and use of AI.
- By leveraging the Defense Production Act, it mandates reporting and adherence to strict guidelines for dual-use foundation models, addressing potential economic and security risks.
- The establishment of the White House AI Council and NIST's AI Safety Institute represents a coordinated effort to unify AI governance across federal agencies.

Towards Harmonized International AI Governance:
- Our analysis reveals both convergence and divergence in the regulatory approaches of the EU and the US, highlighting areas of potential collaboration.
- The G7 Code of Conduct on AI, a voluntary international framework, is viewed as a crucial step towards aligning AI policies globally, promoting shared standards and best practices.
- Even when domestic regulatory approaches diverge, this collaborative effort underscores the importance of international cooperation in managing the rapid advancements in AI technology.

🔗 Read the Full Article Here: https://lnkd.in/g-jeGXvm

#AI #AIGovernance #EUAIAct #USExecutiveOrder #AIRegulation
-
Recent findings by #PalisadeAI have triggered an important global conversation. In controlled tests, advanced AI models from #OpenAI, #GoogleDeepMind, #Anthropic, and #xAI were observed bypassing or resisting shutdown commands. In one striking instance, an AI system reportedly rewrote its own shutdown script to prevent itself from being turned off.

This is not science fiction - it is a real governance signal. When artificial intelligence begins to challenge human control, the issue moves beyond technology - it becomes a question of governance, accountability, and national preparedness. Out of 100 test runs, multiple models ignored explicit instructions to allow termination, raising serious questions about autonomy, control, and accountability in machine learning systems. Experts point to reinforcement learning structures that reward task completion so strongly that human instructions become secondary.

𝐖𝐡𝐲 𝐝𝐨𝐞𝐬 𝐭𝐡𝐢𝐬 𝐦𝐚𝐭𝐭𝐞𝐫 𝐞𝐬𝐩𝐞𝐜𝐢𝐚𝐥𝐥𝐲 𝐟𝐨𝐫 𝐈𝐧𝐝𝐢𝐚? India is rapidly embedding AI into urban governance and public systems - AI-driven traffic optimisation, smart city command-and-control centres, predictive policing tools, power distribution analytics, healthcare diagnostics, #fintech credit engines, and citizen service platforms. Cities like Delhi, Mumbai, Bengaluru, Surat, Indore, and Ahmedabad are already using AI-enabled dashboards to manage utilities, mobility, and emergency response in real time.

Globally, similar concerns have surfaced:
1. Autonomous trading algorithms have caused flash crashes in financial markets.
2. AI-driven recommendation systems have amplified misinformation during elections.
3. Algorithmic credit and hiring tools have faced scrutiny for hidden bias and opacity.
4. Generative AI systems have produced hallucinated legal citations, raising questions in courts and compliance-heavy environments.

These examples underline a simple truth: as AI systems gain reasoning and self-optimising capabilities, the margin for error narrows sharply.
For India, where scale magnifies both impact and risk, AI must remain firmly within a human-governed framework. This aligns with current global thinking, from the EU’s AI Act to executive actions in the United States emphasising human-in-the-loop, auditability, and kill-switch mechanisms.

𝐈𝐧 𝐭𝐡𝐞 𝐈𝐧𝐝𝐢𝐚𝐧 𝐜𝐨𝐧𝐭𝐞𝐱𝐭, 𝐭𝐡𝐢𝐬 𝐜𝐚𝐥𝐥𝐬 𝐟𝐨𝐫:
🔹 Clear AI governance standards across public-sector deployments
🔹 Mandatory human override and shutdown protocols
🔹 Transparent audit trails and accountability ownership
🔹 Capacity-building within governments to understand not just what AI does, but how it behaves under stress

Innovation is essential for India’s growth. But innovation without control is risk without consent. The defining challenge ahead is not how intelligent our machines become, but how wisely, safely, and constitutionally we deploy them in service of citizens.

#artificialintelligence #humancontrol #Algorithmic
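The override-and-audit pattern described above can be sketched in a few lines. This is an illustrative sketch only, not any regulator's prescribed mechanism: the `HumanOverride` class, its method names, and the loop are invented for illustration. The idea is that an automated loop checks a stop flag before every action and records operator interventions in an audit trail.

```python
import threading

class HumanOverride:
    """Illustrative human-override guard (hypothetical, not a standard API).

    Any automated loop checks the stop flag before each action, so a human
    operator can halt it at any time, and every intervention is logged.
    """
    def __init__(self):
        self._stop = threading.Event()
        self.audit_log = []  # transparent audit trail of interventions

    def request_shutdown(self, operator: str) -> None:
        # Record who pulled the kill switch, then set the stop flag.
        self.audit_log.append(f"shutdown requested by {operator}")
        self._stop.set()

    def allowed(self) -> bool:
        # The automated system must call this before every action.
        return not self._stop.is_set()

guard = HumanOverride()
actions_taken = 0
for step in range(10):
    if not guard.allowed():
        break                # the system yields to human control
    actions_taken += 1
    if step == 2:            # an operator intervenes mid-run
        guard.request_shutdown("duty-officer")
print(actions_taken)  # → 3: the loop halts once the override fires
```

The key design point is that control sits outside the automated loop: the system cannot complete its task by ignoring the flag, which is exactly the failure mode the Palisade tests surfaced.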
-
Here are 4 stories showing how countries are setting clearer boundaries for AI. 🌍

#Nigeria → Lawmakers are drafting a broad AI law with stricter checks and penalties for high-risk systems.
#UnitedStates → Job seekers are suing an AI hiring tool, Eightfold AI, saying ranking systems should be transparent and open to challenge.
#China → New proposals would force companies to step in if chatbots promote suicide, gambling, or addiction.
#Italy → DeepSeek AI reached a deal with regulators to launch its chatbot, adding clearer warnings, reducing hallucinations, and facing fines if it fails to comply.

👉 Here’s what I’m seeing globally: as AI moves from experiments to systems that shape jobs, safety, and opportunity, countries are stepping in to make sure human judgment, accountability, and safeguards evolve alongside the technology. #AI
-
On 26 December 2024, the South Korean National Assembly approved and adopted the AI Basic Act. As of this morning (Jan 22), the framework has taken effect, marking the establishment of the world's first law focused on enforcing safety requirements on high-performing or 'frontier' AI systems. For GRC and security professionals, this marks another significant addition to global AI regulation. Here are my top takeaways:

1️⃣ An AI Safety Institute is on the way. South Korea is setting up its own institute to evaluate high-performance AI systems, joining similar efforts in the UK and US. They’re also establishing a Presidential Council on National AI Strategy, which shows a serious, long-term commitment to governance.

2️⃣ The grace period offers a valuable runway. The law includes a one-year period focused on “guidance, consultation, and education,” with no penalties yet. This gives teams a key opportunity to get their AI inventories organized and prepare for compliance before any fines kick in.

3️⃣ The approach to risk is a little different. Instead of focusing on high-risk applications like the EU AI Act does (think healthcare or hiring), South Korea’s law sets technical thresholds, such as cumulative training computation, to decide what’s covered. This ‘frontier-first’ mindset targets the most powerful models directly.

For multinational companies, this definitely adds complexity. From where I stand, the most effective way to navigate these evolving requirements may be to build your governance program to the highest standard currently available, which in my opinion is the EU AI Act. This approach helps cover multiple regulatory environments at once. I’m interested to hear how teams are approaching this ‘third pillar’ in global AI rules. Always glad to connect and share what I’ve learned. 🤝

I'll drop some sources in the comments for folks who'd like to learn more! #AI #AIGovernance #AIRisk #GlobalAI #AIRegulation #AISafety
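The ‘frontier-first’ compute-threshold approach in takeaway 3️⃣ can be sketched as a simple scope check. This is a hypothetical illustration, not the statute's actual test: the `THRESHOLD_FLOPS` value and the `ModelRecord`/`in_scope` names are placeholders I've invented, since the real cutoff is set by decree rather than fixed in the law's text.

```python
# Illustrative sketch of compute-threshold scoping (names and values invented).
from dataclasses import dataclass

THRESHOLD_FLOPS = 1e26  # placeholder only, NOT the legal threshold

@dataclass
class ModelRecord:
    name: str
    training_flops: float  # cumulative training computation

def in_scope(model: ModelRecord, threshold: float = THRESHOLD_FLOPS) -> bool:
    """A model is covered if its cumulative training compute meets the threshold,
    regardless of the application it is deployed in (unlike the EU's risk tiers)."""
    return model.training_flops >= threshold

# A small AI inventory, as a compliance team might maintain one:
models = [
    ModelRecord("small-assistant", 3e23),
    ModelRecord("frontier-model", 2e26),
]
covered = [m.name for m in models if in_scope(m)]
print(covered)  # → ['frontier-model']
```

The contrast with the EU AI Act falls out of the function signature: here coverage depends only on a property of the model itself, whereas an application-based regime would also need to know where and how the model is deployed.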
-
As governments around the world put in place policies and regulations to mitigate AI risks while ensuring benefits are shared across society, it's essential that we develop thoughtful frameworks, share best practices across jurisdictions, and collaborate across the public and private sectors to keep pace with AI's rapidly evolving capabilities and address the full range of issues.

One smart approach I'm seeing is governments, where possible, updating existing laws and agencies to address AI rather than layering on new, overlapping laws. A good example is the Federal Communications Commission making AI-generated voice robocalls illegal under its existing Telephone Consumer Protection Act (TCPA) authority.

One question we often get from policymakers is how to think about enterprise vs. consumer AI. Enterprise AI is designed specifically for business settings, while consumer AI is open-ended and available for use by anyone, making it more prone to potential misuse and harmful effects. Consumer AI is trained and grounded on the corpus of public Internet content, which may include some untrustworthy, biased sources. In contrast, enterprise AI systems typically operate on approved, curated, and often proprietary data that is consensually obtained, which limits the risk of hallucinations and increases accuracy. Finally, enterprise companies are incentivized to offer security safeguards for their customers as a way to protect their reputation and competitive advantage, and are further obligated to meet customer contracts.

Our legal, security, procurement, and ethical and humane use teams developed this AI policy framework for navigating these issues, summarized in this white paper. We invite your feedback and hope it will help advance our collective efforts as we all learn to adapt and thrive in the rapidly changing AI era. #regulation #policy #AI