In 1942, Soviet physicist Georgy Flerov noticed something odd: the scientific world had gone quiet. For years, papers on nuclear fission and neutron behaviour had been appearing regularly. Then, suddenly, silence. No new publications, no data, no discussion. Flerov took this absence as the loudest signal of all. He warned Stalin that the Americans were almost certainly working on a secret bomb. Stalin believed him, and the USSR’s atomic programme began. The pattern was clear: when scientists stop talking, something serious is happening.

There is growing, documented evidence of rising secrecy in frontier AI development. The signals are not speculative; they are measurable and accumulating. According to Stanford’s 2025 AI Index, the performance gap between leading open-weight and closed models narrowed sharply, from 8% to just 1.7% on some benchmarks within a single year. Yet access to frontier systems remains increasingly restricted, offered only via APIs rather than as open-weight models.

Google DeepMind now enforces a six-month embargo on strategic generative AI research, as reported by the Financial Times, with internal vetting procedures requiring multiple levels of approval for each paper. OpenAI’s GPT-4 technical report explicitly states that it contains no details about the model’s architecture, size, training compute, dataset construction, or training methods. This contrasts with earlier models such as GPT-2 and GPT-3, which disclosed substantially more technical information.

The United States has enacted export controls on high-end AI chips, including Nvidia’s A100 and H100 processors, restricting their sale to China and other countries. These controls were first introduced in October 2022 and expanded in October 2023. China, in turn, requires state approval through security assessments and algorithm filings before companies can release generative AI services to the public, with these rules taking effect in August 2023.
This pattern of increasing secrecy across models, publications, and state controls marks a clear shift away from open science towards strategic competition in AI development. The combination of restricted model access, publication embargoes, extreme opacity around flagship systems, and government-led export controls mirrors historical indicators of strategic technology mobilisation: the same warning signs Flerov observed in 1942, a sharp decline in open discourse in the face of strategic sensitivity. The difference is that this time there is no single global project or government programme. The secrecy is decentralised: managed by corporate labs, enforced through commercial NDAs, guided by national security agendas, and shaped by geopolitical tension. Yet taken together, the actions of corporate labs and states form a picture of geopolitical competition with classified dimensions.
Export Controls in AI Security
Summary
Export controls in AI security refer to government policies that restrict the transfer of advanced AI technologies, equipment, and knowledge to certain countries and entities for reasons of national security and global competition. These controls play a crucial role in regulating access to powerful AI hardware and intellectual property, aiming to prevent adversarial use and safeguard strategic advantages.
- Monitor global restrictions: Keep track of evolving export control rules as governments tighten access to advanced AI chips and model details, impacting who can build and use cutting-edge systems.
- Strengthen IP safeguards: Take proactive steps to secure your AI models and data from theft or illicit replication, as large-scale distillation campaigns attempt to copy valuable capabilities.
- Adapt to new workarounds: Recognize that restricted access often leads to creative solutions, such as synthetic data pipelines and offshore leasing, which can circumvent hardware limitations and influence global AI competition.
-
Why did NVIDIA, the darling of the AI market, drop 2.5% today? The Biden administration dropped the mic (and some weighty export controls) on AI chips and models—arguably the most aggressive attempt yet to regulate the flow of transformational tech. Let’s break it down: 🧩 A Three-Tier System of Access ⏩ 🥇 Top Tier: AI flows freely for 19 nations (G7 + allies like Japan, South Korea, and Taiwan). 🥈 Middle Tier: Most of the world faces caps but can negotiate for more chips by aligning with US policy interests 🥉 Bottom Tier: China and Russia? Completely locked out—no chips, no dice, no exceptions. 🔐 Locks on AI’s Crown Jewels ⏩ Firms must keep 75% of their AI computing power in the U.S. or allied nations, with no more than 7% in any other country. Data center operators like Microsoft and Google will need accreditation to trade AI tech freely, tightly aligning with U.S. security goals. 🤖 New AI Model Parameters ⏩ For the first time, restrictions extend to the very DNA of AI: model weights. Overseas data centers must implement strict safeguards to protect this intellectual property. Officially, it’s about national security: keeping AI away from adversaries like China and Russia. But unofficially? It’s about locking in dominance. It’s a strategic move to control the future of AI innovation and adoption. Pushback is already fierce. Nvidia has called the rules “misguided,” warning that global buyers will pivot to non-U.S. suppliers. Restricting friendly nations like Israel, Mexico, and Switzerland could also strain diplomatic ties. And let’s not forget the unintended consequence: Balkanization of the AI ecosystem. Countries and companies excluded from the U.S.-led framework may double down on domestic R&D or turn to less-restricted alternatives (hello, China). That could erode America’s soft power over time. This is the tech Cold War. Chips are the new oil. Code is the new currency. If these controls stick, the big question is whether they will cement U.S. 
dominance—or just fuel the competition.
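The 75%/7% allocation rule described in the post can be sketched as a simple check. The thresholds come from the post; the function, data layout, and country sets are illustrative assumptions, not part of the actual regulation's text:

```python
# Hedged sketch of the geographic compute-allocation rule described above:
# at least 75% of a firm's AI computing power in the US or allied nations,
# and no more than 7% in any single other country. Thresholds are from the
# post; everything else here is an illustrative simplification.
US_ALLIED_SHARE_MIN = 0.75
PER_COUNTRY_MAX = 0.07

def allocation_compliant(compute_by_country, allied):
    """compute_by_country: {country: share of the firm's global capacity}."""
    total = sum(compute_by_country.values())
    allied_total = sum(v for c, v in compute_by_country.items() if c in allied)
    if allied_total / total < US_ALLIED_SHARE_MIN:
        return False  # too little capacity kept in the US / allied nations
    return all(v / total <= PER_COUNTRY_MAX
               for c, v in compute_by_country.items() if c not in allied)

allied = {"US", "Japan", "South Korea", "Taiwan"}  # subset, for illustration
fleet = {"US": 80.0, "Japan": 10.0, "Malaysia": 6.0, "India": 4.0}
print(allocation_compliant(fleet, allied))  # True: 90% allied, <=7% elsewhere
```

A fleet with only 70% of capacity in allied nations, or 8% in a single non-allied country, would fail the same check.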
-
As a former chip designer, this one was especially interesting to write. My latest piece for The Economist investigates the shadow supply chains keeping China in the AI race—despite increasingly strict U.S. export controls. Key takeaways: 👉 Chinese firms lease restricted chips through offshore data centres, especially in Malaysia 👉 Singapore, with few actual chip end-users, is now Nvidia’s second-biggest market 👉 Smugglers route chips via third countries using doctored paperwork and front companies 👉 U.S. enforcement is stretched: one officer covers all of South-East Asia 👉 Ideas like a “kill switch” are really bad 👉 I am sympathetic to Nvidia's view that these controls are not the right way to beat China, innovation is. Read the full story here: https://lnkd.in/eZmfaGNb #AI #Semiconductors #Nvidia #China #Geopolitics #Chips #TechPolicy #TheEconomist
-
A new paper from Tsinghua University, Microsoft Research Asia, and Wuhan University demonstrates why export controls matter and why they're not sufficient. The research team developed SynthSmith, a synthetic data pipeline that trained a 7 billion parameter coding model to outperform models twice its size—using zero real-world data. All of it running on 128 NVIDIA H20 chips and 32 H200s. Three observations: First, the study's lead author explicitly stated that computational constraints—not limitations of the pipeline itself—are what's preventing them from scaling to 100B+ parameter models. This is quiet validation that compute restrictions have real effect. When Chinese researchers cite GPU access as their binding constraint, it suggests the export control framework is creating meaningful friction. Second, the synthetic data trend deserves serious attention as a potential offset to that friction. Techniques that allow smaller models to match larger ones, or that reduce dependence on scarce real-world training data, represent exactly the kind of algorithmic efficiency gains that can partially compensate for hardware limitations. Expect more research in this direction. Third, and most directly: why is Microsoft Research Asia co-authoring papers that advance Chinese AI capabilities? The chips question gets the headlines, but corporate research collaboration may be the more significant channel for technology transfer. Export controls on hardware mean little if American companies are helping develop the methodologies that make the most of constrained resources. The AI competition isn't just about who has the most GPUs. It's increasingly about who can do more with less—and whether US policy addresses all the vectors that matter. Link to the article in the comments. #AICompetition #ExportControls #USChinaTech #TechPolicy
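SynthSmith's actual pipeline is not described in the post above; as a toy illustration of the general idea, synthetic (prompt, solution) training pairs for a coding model can be generated and verified entirely programmatically, with zero real-world data:

```python
import random

# Toy illustration of a synthetic-data pipeline for code models: training
# pairs are generated and filtered programmatically rather than scraped.
# This is NOT SynthSmith's method (the post gives no details); it only
# illustrates the "zero real-world data" idea.
def make_task(rng):
    """Emit one (prompt, reference solution, test case) triple."""
    a, b = rng.randint(1, 99), rng.randint(1, 99)
    prompt = f"Write add(x, y) returning the sum; e.g. add({a}, {b})."
    solution = "def add(x, y):\n    return x + y"
    test = (a, b, a + b)
    return prompt, solution, test

def verified_pairs(n, seed=0):
    """Keep only pairs whose solution passes its own generated test."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n):
        prompt, solution, (a, b, want) = make_task(rng)
        scope = {}
        exec(solution, scope)           # run the candidate solution
        if scope["add"](a, b) == want:  # filter on the synthetic test
            kept.append((prompt, solution))
    return kept

print(len(verified_pairs(1000)))  # 1000 verified synthetic training pairs
```

The verify-then-keep loop is the important part: synthetic pipelines can trade scarce real data for cheap compute-checked examples, which is exactly why they can partially offset hardware restrictions.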
-
Anthropic just published something that deserves more attention than it will probably get. The company disclosed this week that three Chinese AI labs—DeepSeek, Moonshot AI, and MiniMax—ran coordinated, industrial-scale operations to secretly extract capabilities from Claude. The technique is called "distillation": generate massive volumes of outputs from a stronger model, then train your own weaker model on those outputs. Legitimate when you do it with your own models. Theft when you do it with someone else's. The numbers: across ~24,000 fraudulent accounts, the three labs generated over 16 million exchanges with Claude. They used commercial proxy services running "hydra cluster" architectures—sprawling account networks mixing distillation traffic with legitimate requests to defeat detection. When Anthropic banned accounts, new ones took their place. When Anthropic released a new model mid-campaign, MiniMax pivoted within 24 hours. That's not opportunism; that's a disciplined program. Attribution is unusually specific. Anthropic traced campaigns to individual researchers at DeepSeek through IP correlation, request metadata, and infrastructure indicators—cross-checked with industry partners who observed the same actors on their own platforms. The national security dimension matters most. Frontier labs build safeguards into their models to prevent assistance with bioweapons development, offensive cyber, and mass surveillance. Distilled models are unlikely to retain those safeguards. The dangerous capabilities propagate; the guardrails don't. This also lands squarely in the export control debate. Critics argue chip restrictions are futile because Chinese labs are advancing anyway—DeepSeek's January breakthrough being the prominent exhibit. Anthropic's counter: those advances depend significantly on illicitly extracted American capabilities. The rapid progress isn't purely indigenous innovation. 
And distillation at scale still requires the advanced chips the controls are designed to restrict. The strategic logic is familiar to anyone who has watched China's long history of technology acquisition by other means. When legitimate channels close, Beijing finds different ones. The mechanism is new. The pattern is not. The full disclosure is worth reading—the operational detail is unusually specific for a public document. https://lnkd.in/eqa_6ZaS
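Distillation, as described above, is mechanically simple, which is part of why it scales. A toy sketch of the sequence-level variant (the teacher, prompts, and student here are all stand-ins I invented, not Claude or any real model or API):

```python
# Toy sketch of sequence-level distillation as described above: query a
# stronger model at scale, then train a weaker one on the (prompt, output)
# pairs. The "teacher" is a lookup table standing in for a frontier model.
TEACHER = {
    "capital of France?": "Paris",
    "2 + 2?": "4",
    "largest planet?": "Jupiter",
}

def query_teacher(prompt):
    """One exchange with the strong model (16M of these, in the disclosure)."""
    return TEACHER.get(prompt, "I don't know.")

class Student:
    """Weaker model 'trained' by imitating harvested teacher outputs."""
    def __init__(self):
        self.memory = {}
    def fit(self, pairs):
        for prompt, output in pairs:
            self.memory[prompt] = output
    def answer(self, prompt):
        return self.memory.get(prompt, "I don't know.")

# Harvest teacher behaviour, then imitate it. Note that the teacher's
# safety training appears nowhere in this loop, which is why the post's
# point stands: capabilities propagate, guardrails don't.
pairs = [(p, query_teacher(p)) for p in TEACHER]
student = Student()
student.fit(pairs)
print(student.answer("capital of France?"))  # Paris
```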
-
The US introduced a new AI Export Control Regulation today. 1. It limits the export of closed-weight models 2. It limits the export of most data center GPUs (A100, H100, etc.) Below is a quick analysis. Official "fact sheet" is here and (as you would expect) content-free: https://lnkd.in/gQ7bMDQD The regulation covers models trained with > 10^26 operations. Quick math: - 70b model: 70 billion × 6 × 15 trillion ≈ 6.3×10^24 ✅ - 405b model: 405 billion × 6 × 15 trillion ≈ 3.6×10^25 ✅ So the cutoff is around 1T weights trained on 15T tokens for one epoch. There are no restrictions for open-weight models. This will certainly help open-source models, as they are now much easier to handle. GPUs are covered if either their "TPP" exceeds 4,800, or TPP is above 1,600 and they exceed a specific performance density. TPP is defined as TOPS × bit length × 2 w/ sparsity. So for example: - H100: 1,000 TOPS × 16 bit = 16,000 TPP 🚫 - A100: 312 TOPS × 16 bit = 4,992 TPP 🚫 Essentially all modern cards are covered. No license is required to export to: Australia, Belgium, Canada, Denmark, Finland, France, Germany, Ireland, Italy, Japan, the Netherlands, New Zealand, Norway, Republic of Korea, Spain, Sweden, Taiwan, the United Kingdom. Singapore, Switzerland, Israel and part of the EU are missing. Full text is at the link below. I highly recommend loading it into an LLM and using it to find stuff. Just don't trust the LLM to do the math correctly. GPT-4o failed miserably. Full text: https://lnkd.in/gP3CWA5p
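The quick math above can be checked in a few lines. The formulas follow the post's simplified definitions (the actual regulatory definitions are more involved); constant and function names are mine:

```python
# Sketch of the two thresholds discussed above, using the post's
# simplified formulas. Real regulatory definitions are more detailed.
MODEL_FLOP_CUTOFF = 1e26   # training-operations threshold for models
TPP_CUTOFF = 4800          # total processing performance threshold for GPUs

def training_flops(params, tokens):
    """Rough training compute: 6 * parameters * tokens."""
    return 6 * params * tokens

def tpp(tops, bit_length):
    """TPP as used in the post's examples: TOPS * bit length."""
    return tops * bit_length

print(training_flops(70e9, 15e12) < MODEL_FLOP_CUTOFF)   # True: ~6.3e24, under
print(training_flops(405e9, 15e12) < MODEL_FLOP_CUTOFF)  # True: ~3.6e25, under
print(tpp(1000, 16) > TPP_CUTOFF)  # True: H100 at 16,000 TPP is covered
print(tpp(312, 16) > TPP_CUTOFF)   # True: A100 at 4,992 TPP is covered
```

Plugging in 1T parameters and 15T tokens gives 9×10^25, just under the cutoff, which is where the post's "around 1T weights" estimate comes from.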
-
🧵 Major US AI export control updates just dropped: The Framework for AI Diffusion aims to preserve US leadership while enabling the global diffusion of AI technology, supported by higher security standards. The new controls target three areas: AI Chips, AI models, and cloud computing. Key takeaways: 1. This is serious unilateral action from the US to shape the global AI ecosystem to align with its strategic interests. It will raise concerns – even from close partners – of US overreach and use of its market dominance. 2. However, this rule streamlines processes that were likely already happening case-by-case behind the scenes. Exporting advanced chips to regions like the Middle East already required a license, with black-box decisions causing delays and frustration from the private sector and international partners. (e.g. https://lnkd.in/dbAb5uZ4) 3. The rule introduces serious security requirements for foreign data centers who get streamlined AI chip access through the Validated End User program. Such security requirements make sense for data centers training or hosting leading AI models: there’s no point investing billions in training the most advanced models if China can just steal them for free. But these requirements are burdensome – so should be limited to only data centers training/hosting the most advanced models. 4. Chip smuggling is still a concern: without uplift in BIS capability and resourcing, it’s hard to see how this will be adequately addressed. 5. This is a major policy move, weeks before a change in Administration. While Trump may agree with the aims of promoting US AI leadership and curtailing China’s capabilities, we should expect to see the new Administration make its own mark in this space. For those interested in the finer details, I'll include an overview in the comments.
-
The Biden team’s parting shot on chips export controls (the “AI diffusion rule”) is a big one. A few thoughts: The AI diffusion rule is as much of a paradigm shift as the Oct. 2022 controls. The Oct. 2022 rules were about keeping China behind. Today’s rules are about keeping America ahead and using export controls as an instrument of industrial policy to keep as much AI infrastructure in America as possible. So, what is the rule? Basically, the United States is establishing a global regime to allow it to determine who gets advanced AI chips and under what terms. There are a lot of good things in the rule: facilitating exports to allied countries and trusted companies, setting security standards to prevent theft of AI models. The overall goal is to double down on U.S. leadership in AI infrastructure before there is a real challenger – this is an attempt at preemptive policy. But the big bet – and the most controversial part – is setting specific quotas for how much AI compute power has to stay in the United States. Rather than using export controls to prevent adversaries from gaining advanced tech and letting the market sort out distribution otherwise, export controls are now being used to dictate market outcomes and ensure that future AI infrastructure is built predominantly in the United States. That’s a huge change. The AI diffusion rule will be very complicated to implement. The USG will have to constantly monitor market conditions and adjust the rules to strike the right balance between permitting exports to a broad range of companies/countries and leaving a gap in the market that Chinese competitors can fill. We should always make sure that the incentives favor companies around the world wanting to buy U.S. tech. The AI diffusion rule makes this harder both explicitly (via the quotas) and implicitly (by making other countries wary of building out infrastructure dependent on U.S. tech that could be yanked away at any moment). 
Plus, Commerce will need a substantial enforcement effort to prevent smuggling and evasion. The outgoing Biden team’s chips control policy has been marked by two things: trying to get ahead of problems before they become problems and taking big swings. The AI diffusion rule hits the mark on both counts. It's now up to the Trump team to make sure the big bet pays off. Stay tuned for more analysis from Center for a New American Security (CNAS).
-
Yesterday the Biden Administration released the Final Rule on Artificial Intelligence Diffusion to strengthen U.S. security and economic strength in the age of AI. This rule aims to streamline licensing for chip orders, bolster U.S. AI leadership, and provide clarity to allies and partners about AI benefits while addressing potential national security risks. Key Mechanisms 1. Unrestricted chip sales to key allies: 18 key allies and partners can make large-scale purchases without restrictions. 2. License-free small orders: Chip orders up to 1,700 advanced GPUs don't require a license or count against national caps. 3. Universal Verified End User (UVEU) status: Entities meeting high security standards in allied countries can place up to 7% of their global AI computational capacity worldwide. 4. National Verified End User status: Entities in non-concern countries can purchase computational power equivalent to 320,000 advanced GPUs over two years. 5. Non-VEU entity purchases: Entities outside close allies can purchase up to 50,000 advanced GPUs per country. 6. Government-to-government arrangements: Countries aligning with U.S. export control and technology security efforts can double their chip caps. Measures Against Countries of Concern: - Preventing advanced semiconductor use for training advanced AI systems. - Restricting transfer of advanced closed-weight model weights to non-trusted actors. - Setting security standards for advanced closed-weight AI model weights. This rule builds on previous regulations from October 2022 and 2023, aiming to protect U.S. national security while promoting responsible AI technology diffusion globally. Link to the document: https://lnkd.in/eMhenvQX
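The quantitative mechanisms above can be sketched as a rough decision helper. The caps and tier labels are those quoted in the post; the function, country sets, and simplifications are mine, and real eligibility depends on far more than order size:

```python
# Hedged sketch of the licensing tiers described above. Numeric caps are
# from the post; everything else is an illustrative simplification.
KEY_ALLIES = {"UK", "Japan", "Australia"}           # 18 in the actual rule
COUNTRIES_OF_CONCERN = {"China", "Russia", "North Korea"}
SMALL_ORDER_CAP = 1_700       # GPUs: license-free, doesn't count to caps
NON_VEU_COUNTRY_CAP = 50_000  # GPUs per country for non-VEU entities
NVEU_CAP = 320_000            # GPUs over two years for NVEU entities

def order_status(country, gpus, nveu=False, gov_agreement=False):
    """Very rough classification of an advanced-GPU order."""
    if country in COUNTRIES_OF_CONCERN:
        return "restricted"
    if country in KEY_ALLIES:
        return "unrestricted"
    if gpus <= SMALL_ORDER_CAP:
        return "license-free small order"
    cap = NVEU_CAP if nveu else NON_VEU_COUNTRY_CAP
    if gov_agreement:
        cap *= 2  # government-to-government arrangements double the cap
    return "within cap" if gpus <= cap else "license required"

print(order_status("Japan", 100_000))                     # unrestricted
print(order_status("India", 1_500))                       # license-free small order
print(order_status("India", 60_000))                      # license required
print(order_status("India", 60_000, gov_agreement=True))  # within cap
print(order_status("China", 10))                          # restricted
```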
-
The US just tightened AI chip export rules, dividing countries into three “tiers.” Why it matters: AI chips like NVIDIA’s A100 or H100 are the “brains” of AI. They enable faster processing of massive amounts of data, allowing AI to perform complex tasks like image recognition, language translation, and decision-making at incredible speeds. These restrictions are meant to preserve US technological leadership in AI and prevent advanced AI capabilities from being used for military purposes by rival nations. The tiers: ► Unrestricted access: 18 close US allies, including the UK, Japan, and Australia. ► Limited access: Over 100 countries like India and Israel, subject to export caps and licensing requirements. ► Restricted access: China, Russia, and North Korea face severe restrictions. So what does it mean to be in Tier 2? Let’s take India as an example: While short-term effects may be minimal, long-term challenges could arise. AI initiatives like India’s National AI Mission and large-scale data centers may face delays or scaling issues due to limited GPU availability. India’s response will likely involve boosting domestic semiconductor and GPU development, diversifying AI chip sources, and engaging in “AI chip diplomacy” to negotiate better access. Remember this term: AI Chip Diplomacy - it's shaping up to be one of the most crucial elements of modern international relations. 😎 #artificialintelligence #regulation