TL;DR: The US Commerce Department has drafted rules that would extend government licensing requirements to AI chip exports destined for any country in the world, not just the roughly 40 nations already subject to curbs. Under the tiered framework, orders exceeding 200,000 Nvidia GB300 or equivalent AI accelerators in a single country would trigger host-government involvement and security commitments. The rules are not yet final and could still be revised or shelved, but they represent the most aggressive assertion of American control over global AI infrastructure to date.
What you will learn
- Why the new proposal goes far beyond existing China-centric export controls
- How the three-tier licensing framework would classify AI chip orders
- What the 200,000 GB300 GPU threshold means in practice for hyperscalers
- Which countries would receive expedited approval and which would face blanket restrictions
- How the Commerce Department's Bureau of Industry and Security (BIS) plans to administer the regime
- What Nvidia, AMD, and data center operators stand to lose under the new rules
- The legal and diplomatic complications of extending US jurisdiction to allies in Europe, Asia, and the Gulf
- How this interacts with the January 2026 BIS revision covering China and Macau
- What happens to sovereign AI programs in countries like France, India, Saudi Arabia, and Japan
- Why the framing matters: this is a licensing regime, not a ban, and the distinction is consequential
The proposal: what's actually being drafted
The US Commerce Department's Bureau of Industry and Security has circulated a draft regulatory framework that would require American government approval before Nvidia, AMD, or any other US chip company could ship advanced AI accelerators to customers in any country on the planet. The reporting, confirmed across multiple outlets including Bloomberg, TechCrunch, Tom's Hardware, and US News, describes a regime that expands the current export-control architecture from covering roughly 40 countries — primarily China, Russia, and entities subject to arms embargoes — to a true worldwide licensing system.
That is a categorical shift. Under the existing rules that took effect in January 2026, BIS revised its export review policy specifically for advanced AI chips destined for China and Macau, adding scrutiny to transactions that previously moved under general licenses. The new draft goes substantially further: it would make no country presumptively exempt. A data center operator in Germany, a cloud provider in Japan, a sovereign AI initiative in Saudi Arabia — all would need US government sign-off before receiving large consignments of cutting-edge AI hardware.
The Commerce Department has not published the draft formally. Briefings have gone to industry stakeholders and to some allied governments. The framework is understood to be under active revision, meaning specific thresholds, tier definitions, and eligibility criteria may change. Nonetheless, the architecture of the proposal is clear enough that chipmakers, data center operators, and trade lawyers have begun modeling its effects — and the consensus is that if enacted close to the current draft, it would fundamentally alter the economics of global AI infrastructure buildout.
The policy rationale, according to people familiar with the discussions, is that the US government wants to prevent advanced AI compute from being consolidated in ways that could threaten national security, enable authoritarian surveillance at scale, or give adversaries indirect access to cutting-edge AI capabilities by routing purchases through third countries. The China controls alone proved insufficient: chips were making it into restricted destinations via intermediaries in Southeast Asia, the Gulf, and Eastern Europe. A global licensing regime, in theory, closes those loopholes by making every large sale a matter of explicit US approval regardless of the buyer's nationality.
How the tiered licensing system would work
The draft framework is built around a three-tier structure keyed to order size, with each tier carrying different review requirements and processing expectations.
The first tier covers orders at or below approximately 1,000 GB300 GPUs or their equivalent in compute capacity. These transactions would be subject to what the framework describes as a "simple review" — essentially a streamlined notification and verification process rather than a full license application. The expectation is that most commercial orders for private enterprises, research institutions, and small cloud operators would fall into this tier and could be processed quickly, with approvals likely running on an automated or semi-automated basis.
The second tier covers larger orders that exceed the simple-review threshold but remain below the level that triggers host-government involvement. These require what the draft terms "preclearance with conditions." Buyers would need to submit more detailed end-use documentation, identify the physical location and security posture of the intended deployment, and in some cases agree to audit rights that allow US officials or their designees to verify compliance. The conditions attached to preclearance can include restrictions on transferring the hardware to third parties, requirements to implement access controls on the resulting compute cluster, and in certain cases obligations to share information about the AI workloads being run.
The third tier — and the one drawing the most attention from industry — involves deployments that would result in more than 200,000 GB300 GPUs, or their compute equivalent, being concentrated in a single country. At that scale, the draft requires the host government itself to become a party to the transaction. The buyer's national government would need to submit formal security commitments to the US, agreeing to a set of conditions about how the infrastructure is used, who can access it, and what safeguards are in place against diversion or misuse. Only after those government-to-government commitments are made and accepted by BIS would the commercial sale be permitted to proceed.
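The tier logic described above can be sketched in a few lines. This is an illustrative model of the draft as described in reporting, not published rule text: the thresholds come from the article, and the idea of a "GB300-equivalent" conversion and of keying tier 3 to cumulative in-country concentration (rather than single-order size) are assumptions drawn from how the draft is characterized.

```python
# Sketch of the draft's three-tier review structure as described in reporting.
# Thresholds and the GB300-equivalence unit are assumptions from the article,
# not from any published BIS rule text.

SIMPLE_REVIEW_MAX = 1_000      # tier 1 ceiling (GB300-equivalents per order)
HOST_GOVT_TRIGGER = 200_000    # tier 3 trigger (GB300-equivalents per country)

def classify_order(order_units: int, installed_in_country: int = 0) -> str:
    """Return the review tier a hypothetical order would fall into.

    order_units          -- GB300-equivalent accelerators in this order
    installed_in_country -- GB300-equivalents already deployed in the
                            destination country; the tier-3 trigger is keyed
                            to in-country concentration, not order size
    """
    if installed_in_country + order_units > HOST_GOVT_TRIGGER:
        return "tier 3: host-government security commitments"
    if order_units <= SIMPLE_REVIEW_MAX:
        return "tier 1: simple review"
    return "tier 2: preclearance with conditions"
```

Note that under this reading, even a modest order can land in tier 3 if the destination country is already near the concentration ceiling, which is consistent with the draft's focus on where compute accumulates rather than on individual transactions.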
What qualifies as a "large cluster" and why 200,000 GB300s matters
The 200,000 GB300 threshold is not an arbitrary number. The Nvidia GB300, the latest generation of Blackwell-architecture AI accelerators as of early 2026, delivers roughly 1.5 to 2 times the compute density of the Hopper-generation H100. A cluster of 200,000 GB300s would supply on the order of 100 to 120 exaflops of AI training compute — enough to train frontier models at the scale of the largest systems deployed today, and sufficient to run large-scale inference workloads simultaneously across multiple services.
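A quick back-of-envelope check makes the exaflop figure concrete. The per-GPU throughput used here (roughly 0.5 to 0.6 petaflops of sustained training compute) is a working assumption chosen to be consistent with the article's 100-120 exaflop estimate, not a published GB300 specification.

```python
# Back-of-envelope check of the cluster compute figure. The per-GPU
# throughput values are assumptions implied by the article's estimate,
# not vendor specs.

def cluster_exaflops(gpus: int, pflops_per_gpu: float) -> float:
    """Aggregate training compute in exaflops (1 EF = 1,000 PF)."""
    return gpus * pflops_per_gpu / 1_000

low  = cluster_exaflops(200_000, 0.5)   # 100.0 EF
high = cluster_exaflops(200_000, 0.6)   # ~120 EF
```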
In practical terms, that threshold captures the kind of infrastructure being built by hyperscalers and by national AI programs. Microsoft's announced AI infrastructure expansion in Europe, Amazon Web Services' planned data center investments in the Gulf, Google's sovereign cloud buildouts in regulated markets — deployments of that ambition routinely involve GPU counts in the hundreds of thousands. The threshold is calibrated to catch exactly those projects while leaving smaller commercial and enterprise deployments in the lower tiers.
The significance of the host-government trigger at 200,000 units is that it transforms what is nominally a commercial transaction into a diplomatic one. A company like Amazon or Microsoft cannot simply negotiate a purchase agreement with Nvidia and begin deploying; the government of the country where the data center sits must enter into a bilateral agreement with Washington. For allies with existing defense and intelligence relationships — the UK, Australia, Japan, South Korea — that conversation may be relatively smooth. For countries in the Gulf, Southeast Asia, or Latin America that have been building out AI capacity as part of sovereign technology initiatives, the requirement introduces a new layer of political negotiation into what had been a straightforward commercial relationship.
Countries and allies: who gets easy access vs. who gets blocked
The draft framework does not treat all countries equally. According to reporting from Capital Brief and US News, the Commerce Department intends to create a positive list of allied and partner nations whose buyers would be eligible for expedited review at all tier levels, subject to meeting the standard conditions. This list is expected to include NATO members, Five Eyes partners, Japan, South Korea, and a handful of other countries with deep US security relationships.
For these favored partners, the licensing regime would function largely as a compliance formality at the lower tiers. Large deployments would still require host-government engagement, but the expectation within the framework is that allied governments would agree to the security commitments without significant friction, given existing treaty relationships and intelligence-sharing arrangements.
At the other end of the spectrum, certain countries would face a presumption of denial. China and Macau already operate under the tightened January 2026 rules. Russia, Iran, North Korea, and other heavily sanctioned destinations are effectively blocked under existing Export Administration Regulations. Under the new framework, those restrictions would remain in place and be subsumed into the global licensing architecture.
The genuinely complicated cases involve the large middle group: countries like India, Saudi Arabia, the UAE, Brazil, Indonesia, and others that have been actively building AI infrastructure and are neither formal US security allies nor adversaries. Many of these governments have invested heavily in sovereign AI strategies. The UAE's Falcon program and Saudi Arabia's NEOM and Aramco AI initiatives have absorbed hundreds of millions of dollars and rely on access to US-origin hardware. Under the draft rules, those projects would require host-government security commitments to the United States — a political ask that may be acceptable to Riyadh and Abu Dhabi but is not without diplomatic sensitivity.
The US News reporting adds another dimension: the draft framework would require foreign firms seeking large AI chip deployments to make commitments to invest in US AI development. That condition transforms the regime from a pure security instrument into something that also serves US industrial policy goals — linking access to American AI hardware to flows of investment back into the US AI ecosystem.
Why this is bigger than the China-only restrictions that came before
The existing export-control architecture for AI chips, while significant, had a fundamental structural weakness: it applied to specific destinations. China and Macau faced direct restrictions. Roughly 40 other countries were in a heightened-scrutiny tier. But a company in Singapore, Malaysia, or the Netherlands could purchase Nvidia H100s or GB300s in volume without a formal US government license, and nothing in the existing rules prevented those chips from subsequently moving to restricted destinations through corporate structures, joint ventures, or leasing arrangements.
The proposed global licensing regime addresses this by making the US the licensing authority for every significant AI chip transaction worldwide, regardless of destination. There is no longer a privileged category of countries that can buy freely; there is only a gradient from easy-approval allies to presumptive-denial adversaries, with everything in between requiring explicit BIS determination.
That architecture has a precedent in US nuclear export controls, which similarly require government approval for certain nuclear materials and technologies regardless of where they are going. The AI chip proposal represents an attempt to apply analogous logic to compute infrastructure — treating advanced AI accelerators as dual-use items of sufficient strategic sensitivity to warrant a comprehensive licensing regime rather than a destination-specific one.
Morgan Lewis, in its analysis of the January 2026 BIS revisions to export review policy for China and Macau, noted that the Commerce Department had signaled intent to use the existing Export Administration Regulations framework more aggressively. The global licensing proposal is the logical extension of that signaling. It does not require new statutory authority — BIS already has broad discretion under the Export Control Reform Act of 2018 — but it would represent a significant expansion of the agency's operational footprint and administrative capacity.
Industry reaction: Nvidia, AMD, data center operators
Neither Nvidia nor AMD has issued formal public statements on the draft rules as of early March 2026, consistent with standard practice for regulations that have not been formally published. Privately, however, the concerns are substantial. Nvidia's international revenue — which accounts for the majority of its total revenue — flows heavily through data center customers in Europe, the Gulf, and Asia Pacific. A global licensing requirement does not necessarily block those sales, but it introduces delays, compliance costs, and uncertainty into the sales cycle that customers factor into procurement decisions.
AMD's exposure is somewhat different. Its MI300X and MI350 series AI accelerators compete directly with Nvidia in the data center market, and AMD has been gaining ground in that segment. A global licensing regime that applies to both companies equally does not change the competitive dynamics between them, but it does create a market opportunity for non-US chipmakers — most notably Huawei — which would not be subject to US export licensing requirements.
This is not a hypothetical concern. As documented in reporting on DeepSeek V4's decision to lock out Nvidia and AMD in favor of Huawei hardware, there is already an emerging playbook for deploying frontier AI on non-US silicon. If US export licensing introduces friction for international buyers, some share of that demand will flow to alternatives that do not require Washington's approval. The draft framework's architects presumably believe the security benefits outweigh that competitive cost, but industry lobbyists will argue the opposite case vigorously.
Data center operators face a more operational concern: the preclearance process for large deployments introduces timeline uncertainty into projects that are already complicated by power procurement, construction permitting, and cooling infrastructure. Hyperscalers typically plan GPU procurement 12 to 24 months in advance. A licensing process that could take weeks or months adds risk to those timelines that ultimately gets priced into the cost of AI services for end users.
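The way a review delay "gets priced in" can be illustrated with a toy carrying-cost calculation. Every figure here is a hypothetical assumption for illustration (the order size, cost of capital, and delay length are not from the article or any vendor pricing); the point is only that multi-month licensing uncertainty on multi-billion-dollar capex is a material line item.

```python
# Illustrative only: capital carrying cost of committed capex sitting idle
# during a licensing review. All inputs are hypothetical assumptions.

def delay_carrying_cost(capex_usd: float, annual_capital_rate: float,
                        delay_days: int) -> float:
    """Simple-interest carrying cost of idle capex over a review delay."""
    return capex_usd * annual_capital_rate * (delay_days / 365)

# e.g. a hypothetical $5B GPU order, 8% cost of capital, 90-day delay
cost = delay_carrying_cost(5e9, 0.08, 90)   # roughly $99M
```

Even before counting lost revenue from a later service launch, a quarter of review latency on a deployment of that scale costs on the order of a hundred million dollars, which is why operators say the process itself, not just its outcome, shapes procurement decisions.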
The legal and diplomatic complexity
The extraterritorial reach of the proposed rules will generate significant legal and diplomatic pushback. US export controls have always had extraterritorial elements — the Foreign Direct Product Rule, which was expanded to cover Huawei and later semiconductor equipment for China, is one example — but a global AI chip licensing regime would represent one of the most sweeping assertions of US jurisdiction over international commerce in recent memory.
The European Union, which has its own trade sovereignty concerns and has been building out a regulatory framework for AI through the AI Act, is likely to object to the implication that member states need US government approval to build their own AI infrastructure. Similar objections will come from India, which has been explicit about its desire for strategic autonomy in technology. Japan and South Korea, while close US allies, have domestic chipmakers and AI ambitions that complicate a simple alignment with Washington's preferences.
Trade law experts have noted that while the US has broad authority under the Export Control Reform Act of 2018 to impose such controls, the WTO implications are not trivial. If the licensing regime is perceived as a disguised trade barrier — particularly given the provision requiring foreign firms to commit to US AI investment — affected countries could mount challenge proceedings. The US has historically managed similar tensions in semiconductors and other strategic sectors, but the scale of the proposed AI chip controls is unprecedented.
JDSupra's analysis of the 2026 AI policy and semiconductor outlook flagged this jurisdictional complexity as one of the key risks in the regulatory pipeline, noting that BIS rule-making in this space consistently underestimates the diplomatic friction it generates with partners who are nominally aligned but have independent industrial interests.
What happens if rules are finalized: impact on AI infrastructure race
If the draft rules are enacted close to their current form, the most immediate effect would be on the velocity of AI infrastructure buildout outside the United States and a small circle of close allies. Projects that are currently in procurement or early construction phases in markets like the Gulf, Southeast Asia, and parts of Europe would face delays as buyers and host governments navigate the new licensing process.
Over a longer horizon, the rules would create structural incentives for non-US countries to develop or procure AI infrastructure that does not depend on US-origin hardware. That outcome — accelerated decoupling of global AI infrastructure along geopolitical lines — is arguably the opposite of what the rules are intended to achieve. If the goal is to maintain US visibility and leverage over where advanced AI compute is deployed, a regime that pushes buyers toward Huawei's Ascend series or future Chinese AI accelerators achieves the opposite.
Nvidia's Rubin platform, unveiled at MWC 2026, represents the next generation of US AI silicon, and Jensen Huang's GTC 2026 keynote on March 16 is expected to detail the roadmap for physical AI and robotics that depends heavily on the continued global buildout of Nvidia-powered infrastructure. Export licensing constraints that reduce international demand for that hardware would have direct consequences for Nvidia's forward revenue projections and, by extension, for the pace of US AI hardware development funded by those revenues.
What this means for cloud providers, hyperscalers, and sovereign AI
The downstream implications for cloud providers and hyperscalers are significant and varied. For US-headquartered hyperscalers — Microsoft, Google, Amazon, Meta — the global licensing framework creates a new compliance layer on international expansion, but one that those companies are arguably better positioned to navigate than their international competitors. They have existing government relationships, security compliance teams, and the financial resources to absorb licensing delays. The bigger risk for them is competitive: if the licensing process is slower or more burdensome for international cloud providers in the EU or Asia, those providers are disadvantaged relative to US hyperscalers in their own domestic markets — an outcome that will not go unnoticed by regulators in Brussels or Tokyo.
For sovereign AI programs — national initiatives to build independent AI capabilities on domestically operated infrastructure — the rules represent a fundamental tension. Countries like France (with its Mistral ecosystem), India (with its IndiaAI mission), and Saudi Arabia (with its SDAIA programs) have made strategic commitments to developing AI capacity that is not entirely dependent on US commercial platforms. Under the draft framework, those programs would need to either negotiate government-to-government security agreements with the US, accept compute ceilings that limit their ambitions, or pursue non-US silicon alternatives.
The requirement that foreign governments make security commitments and, per US News reporting, commit to US AI investment as a condition of large deployments, effectively positions Washington as a senior partner in any sovereign AI initiative that wants access to frontier compute. Some governments will accept that bargain. Others will treat it as an unacceptable infringement on technological sovereignty and accelerate their search for alternatives.
The framework is not yet finalized. BIS has a history of issuing rules in draft form, receiving substantial industry and diplomatic feedback, and revising the final version materially — or, in some cases, withdrawing proposals that prove too contentious. The current administration has shown significant appetite for aggressive use of export controls as a tool of industrial and national security policy, but it has also demonstrated willingness to carve out exceptions when allied governments push back hard enough.
What is not in doubt is the direction of travel. The January 2026 China revisions, the current global licensing draft, and the broader trajectory of semiconductor export controls all point toward a world in which the United States government is increasingly positioned as the licensing authority for the global AI compute stack. Whether that authority is exercised through a formal worldwide licensing regime or through a patchwork of country-specific rules, the implication for every government, every data center operator, and every hyperscaler building AI infrastructure outside the United States is the same: Washington is now in the room.
Sources: TechCrunch (March 5, 2026), Tom's Hardware, Bloomberg / Capital Brief, US News, JDSupra 2026 AI Policy and Semiconductor Outlook, Morgan Lewis analysis of BIS revised export review policy for China and Macau.