TL;DR: NVIDIA committed $2 billion each to optical component makers Lumentum and Coherent through multi-year purchase agreements and future capacity rights, sending both stocks up sharply — Lumentum +12%, Coherent +15% — on the announcement. The move is a direct acknowledgment that AI data center scaling is hitting a connectivity wall: GPU clusters can only grow so large before the copper interconnects linking them become the dominant bottleneck. NVIDIA is locking down the photonic supply chain before that wall becomes a crisis.
What you will learn
- The $4 billion commitment: structure and terms
- Why connectivity is now the binding constraint
- What Lumentum and Coherent actually build
- Copper vs. optical: the physics of the bottleneck
- The optical interconnect market: a 5x growth projection by 2030
- Jensen Huang's AI factory vision and why photonics is foundational
- The Ayar Labs parallel: silicon photonics inside the package
- Who else is racing for the optical supply chain
- What this means for data center design
- Investment and market implications
- Frequently asked questions
The $4 billion commitment: structure and terms
The headline is $4 billion split equally — $2 billion to Lumentum, $2 billion to Coherent. But the structure of the commitment matters more than the number.
NVIDIA did not simply purchase stock in either company. The arrangement is a combination of multi-year purchase agreements and future capacity rights. NVIDIA is committing to buy a defined volume of optical components from each vendor over multiple years, with options to lock in additional production capacity as demand scales. This is a supply chain deal masquerading as an investment announcement.
The distinction is significant. A stock purchase is a bet on a company's share price. A multi-year purchase agreement is a guarantee of revenue that lets a manufacturer invest in new fabrication capacity with confidence. NVIDIA is not acquiring either company, nor is it primarily seeking financial returns on the investment. It is paying for certainty of supply in a component category it believes will become critically scarce as AI infrastructure scales.
Both companies responded with sharp equity moves. Lumentum shares rose 12% on the announcement. Coherent climbed 15%. The market read the signal clearly: NVIDIA has identified optical interconnects as a strategic chokepoint, and its willingness to commit $4 billion to secure supply is validation that the entire sector is about to enter sustained high demand.
The future capacity rights component is particularly telling. NVIDIA is not just buying what Lumentum and Coherent can produce today — it is reserving the right to buy what they will produce after they expand capacity. That expansion requires capital investment in fabrication equipment, cleanroom facilities, and engineering talent. The purchase commitment from NVIDIA provides the revenue certainty that makes those capital investments fundable. NVIDIA is, in effect, financing the build-out of optical component manufacturing capacity that the entire industry will benefit from, while securing preferential access to that capacity for itself.
Why connectivity is now the binding constraint
The GPU compute problem in AI has been solved well enough for now. NVIDIA's Blackwell B200 and the GB200 NVL72 rack configurations can deliver extraordinary training throughput. But a cluster of 72 B200 GPUs is only useful if all 72 can communicate with each other fast enough that none of them are waiting on data from the others.
Inside a single NVL72 rack, NVIDIA uses NVLink — its proprietary high-bandwidth interconnect — to connect GPUs directly at 1.8 terabytes per second of aggregate bandwidth per GPU. This works well within a rack. The problem is what happens when you connect racks to each other.
Between racks, data centers today rely on copper cables for most connections, with InfiniBand or Ethernet switching fabric. Copper has physical limits. Over distances greater than a few meters, high-frequency electrical signals attenuate — they lose strength and integrity. Copper cables carrying 400 Gbps or 800 Gbps aggregate data rates degrade signal quality over distances that are common in large data halls. They also generate heat: high-speed copper links at data center scale consume significant power just in signal transmission, separate from the power consumed by the computation itself.
As NVIDIA's AI factory configurations scale from hundreds to thousands of GPUs, the inter-rack and inter-pod connectivity becomes the governing constraint on cluster performance. A training job distributed across 10,000 GPUs requires all-reduce communication operations — where every GPU shares gradient information with every other GPU — that traverse the entire network fabric. If that fabric is copper-limited in bandwidth or latency, the GPUs sit idle waiting for communication to complete. The compute utilization drops below 50%. The expensive chips are wasted.
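The utilization arithmetic can be sketched with a toy model of a ring all-reduce. All figures below — model size, per-step compute time, link speeds — are illustrative assumptions, not measurements of any NVIDIA system:

```python
# Toy model of how interconnect bandwidth caps GPU utilization during
# distributed training. Numbers are illustrative assumptions only.

def ring_allreduce_seconds(payload_bytes: float, n_gpus: int, link_gbps: float) -> float:
    """Time for a ring all-reduce: each GPU sends and receives
    2 * (n - 1) / n of the payload over its own link."""
    bytes_on_wire = 2 * (n_gpus - 1) / n_gpus * payload_bytes
    return bytes_on_wire / (link_gbps * 1e9 / 8)

def step_utilization(compute_s: float, comm_s: float) -> float:
    """Fraction of a training step spent computing, assuming no overlap."""
    return compute_s / (compute_s + comm_s)

grads = 8e9 * 2   # assumed 8B-parameter model, fp16 gradients -> bytes
compute = 0.25    # assumed compute time per training step, seconds

copper_limited = ring_allreduce_seconds(grads, 10_000, link_gbps=100)
optical = ring_allreduce_seconds(grads, 10_000, link_gbps=800)

print(f"copper-limited comm: {copper_limited:.2f}s, "
      f"utilization {step_utilization(compute, copper_limited):.0%}")
print(f"optical comm:        {optical:.2f}s, "
      f"utilization {step_utilization(compute, optical):.0%}")
```

Under these assumed numbers, the bandwidth-starved cluster spends most of each step waiting on communication; real deployments overlap compute and communication, but the direction of the effect is the same.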
Optical interconnects solve this. Light does not attenuate the way electrical signals do over copper. Optical fibers carry data at the speed of light with minimal signal degradation over hundreds of meters. The power consumed per bit transmitted is lower than copper at the same bandwidth. Optical transceivers — the devices that convert electrical signals to light and back — are the enabling technology for fiber-based data center interconnects.
NVIDIA's $4 billion commitment to Lumentum and Coherent is a direct response to this physics reality. It is locking down supply of the optical components its customers need to build AI clusters that can scale beyond what copper will support.
What Lumentum and Coherent actually build
Understanding the investment requires knowing what these companies produce and why their specific products are difficult to source elsewhere.
Lumentum is primarily a laser and photonic component manufacturer. Its core product lines relevant to AI infrastructure include:
- Externally Modulated Lasers (EMLs): High-speed laser sources used in optical transceivers for data center interconnects. EMLs at 100G and 200G per lane — the building blocks of 800G and 1.6T transceiver modules — are the current high end of commercial production. Lumentum is one of a small number of companies globally that can manufacture EMLs at scale with consistent yield.
- Vertical-Cavity Surface-Emitting Lasers (VCSELs): Lower-cost laser sources for shorter-range interconnects within data centers. Used in 100G and 400G short-reach optical links.
- Transceiver chips: Photonic integrated circuits that combine laser, modulator, and detector functions in a single package.
Coherent (formerly II-VI, which acquired the original Coherent Corporation in 2022) is a broader photonics and compound semiconductor company. Its relevant product lines include:
- Datacom transceivers: Pluggable optical modules at 400G, 800G, and emerging 1.6T speeds used in data center switch fabric and server connections.
- Silicon photonics modules: Co-packaged optics products that integrate photonic components directly with switching ASICs, reducing the power overhead of traditional pluggable transceivers.
- Compound semiconductor materials: Indium phosphide and gallium arsenide wafers used as the substrate for high-speed optical components. Coherent operates wafer fabs that supply the raw material for laser manufacturing across the industry.
The compound semiconductor manufacturing capability of Coherent is particularly strategic. Indium phosphide is the substrate material of choice for high-speed EML lasers operating at data center wavelengths. The global indium phosphide wafer supply is constrained — there are only a handful of fabs worldwide capable of producing it at the quality and volume required for data center applications. By securing a long-term purchase relationship with Coherent, NVIDIA is also gaining preferential access to the upstream material supply chain, not just finished transceiver modules.
Copper vs. optical: the physics of the bottleneck
The choice between copper and optical interconnects in data centers is not primarily a cost question — it is a physics question. At the bandwidth densities that AI training clusters require, copper reaches fundamental limits that money cannot solve.
Copper electrical signals at high frequencies suffer from skin effect and dielectric loss. As frequency increases, the signal propagates only in a thin layer at the conductor's surface, increasing effective resistance. At the roughly 56 GHz Nyquist frequencies involved in 200G-per-lane PAM4 electrical signaling, signal loss over a 3-meter cable approaches the point where active re-timing circuits are required every 1–2 meters. Those re-timing circuits consume power, add latency, and introduce additional failure points.
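The skin-effect scaling can be sketched numerically: conductor loss in dB grows roughly with the square root of frequency. The reference loss value below is an assumed figure for illustration, not a datasheet number for any real cable:

```python
# Back-of-envelope skin-effect scaling: insertion loss grows ~sqrt(f).
# The reference loss (5 dB/m at 14 GHz) is an illustrative assumption.

import math

def copper_loss_db(length_m: float, freq_ghz: float,
                   ref_db_per_m: float = 5.0, ref_ghz: float = 14.0) -> float:
    """Estimate cable insertion loss by scaling a reference loss with sqrt(f)."""
    return length_m * ref_db_per_m * math.sqrt(freq_ghz / ref_ghz)

for f in (14, 28, 56):
    print(f"{f} GHz over 3 m: {copper_loss_db(3, f):.1f} dB")
```

Each doubling of frequency adds roughly 41% more loss, which is why every generation of electrical signaling shortens the usable copper reach.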
Optical fiber carries light at wavelengths around 1310–1550 nm (roughly 190–230 THz) for longer-reach single-mode links, or around 850 nm for short-reach multimode datacom links. The fundamental loss in optical fiber is a fraction of a dB per kilometer — essentially zero over the distances relevant to data centers. A 100-meter optical fiber link at 400G per wavelength is physically identical in signal quality to a 1-meter link. Copper has no equivalent property.
The cost picture is changing. The cost of optical transceivers has declined by roughly 30% per year over the past five years as manufacturing scales. At the current trajectory, 800G optical transceivers reach cost parity with copper alternatives for distances above 2 meters around 2027. NVIDIA's $4 billion investment in production capacity accelerates that cost decline by guaranteeing volume that lets manufacturers invest in yield improvement and manufacturing efficiency.
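Compounding the 30%-per-year figure shows how quickly parity arrives. The starting price ratio below is an illustrative assumption:

```python
# How many years of ~30%/yr decline until optical crosses a copper price
# point? The 2.5x starting premium is an illustrative assumption.

def years_to_parity(optical_cost: float, copper_cost: float,
                    annual_decline: float = 0.30) -> int:
    """Count whole years of compounded decline until optical <= copper."""
    years = 0
    while optical_cost > copper_cost:
        optical_cost *= (1 - annual_decline)
        years += 1
    return years

print(years_to_parity(optical_cost=2.5, copper_cost=1.0))  # -> 3
```

Three years of 30% declines erase a 2.5x premium — consistent with the parity-around-2027 trajectory, assuming the copper price stays roughly flat.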
The optical interconnect market: a 5x growth projection by 2030
The optical interconnect market for data centers was valued at approximately $8 billion in 2024. Industry analysts project it reaching $40–45 billion by 2030 — a roughly 5x increase in six years. That growth rate is almost entirely driven by AI infrastructure build-out.
The driver logic is straightforward. AI training clusters are growing from thousands of GPUs to tens of thousands to hundreds of thousands. Each GPU in a large cluster requires multiple high-bandwidth links — to the network switch fabric, to NVMe storage arrays, to adjacent GPU pods. As cluster size grows, the total port count grows superlinearly: doubling the number of GPUs can force an additional switching tier, so the fabric needs more than double the switch ports and optical links to keep all-reduce communication — which touches every node — running at full bandwidth.
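A rough fat-tree model illustrates the superlinear growth. The switch radix and the non-blocking assumption are illustrative, not a description of any specific deployment:

```python
# Why optical link demand grows faster than GPU count: a non-blocking
# fat-tree built from fixed-radix switches needs more tiers as it grows,
# and every extra tier adds inter-switch optical links for every GPU.
# Radix-64 switches and full bisection bandwidth are assumptions.

def tiers_needed(n_endpoints: int, radix: int = 64) -> int:
    """Switch tiers in a non-blocking fat-tree: one tier serves `radix`
    endpoints; each additional tier multiplies capacity by radix/2."""
    tiers, capacity = 1, radix
    while capacity < n_endpoints:
        tiers += 1
        capacity *= radix // 2
    return tiers

def inter_switch_links(n_endpoints: int, radix: int = 64) -> int:
    """In a full-bisection fabric, each switch-to-switch stage carries
    roughly one link per endpoint, so links ~ endpoints * (tiers - 1)."""
    return n_endpoints * (tiers_needed(n_endpoints, radix) - 1)

for n in (2_000, 16_000, 100_000):
    print(f"{n:>7} GPUs: {tiers_needed(n)} tiers, "
          f"~{inter_switch_links(n):,} inter-switch optical links")
```

In this sketch, growing from 2,000 to 16,000 GPUs (8x) roughly 16x-es the inter-switch link count, because the fabric needs a third tier — each of those links is a pair of optical transceivers.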
NVIDIA's $4 billion investment positions it to capture a meaningful fraction of that market expansion, both through the supply relationships it is securing and through the integrated optical networking products it sells alongside its GPU systems. The NVLink Switch fabric and the emerging co-packaged optics roadmap are direct expressions of NVIDIA's intent to own the full-stack AI factory architecture — compute, memory, and connectivity.
Jensen Huang's AI factory vision and why photonics is foundational
Jensen Huang introduced the concept of the AI factory in 2023 and has used it consistently since to describe a different way of thinking about data centers. A conventional data center is a place where applications run. An AI factory is an industrial facility that converts raw data and compute into trained models and inference outputs — essentially a manufacturing plant for intelligence.
The factory metaphor is more than marketing. It carries a specific implication about how bottlenecks work. In a physical factory, throughput is governed by the slowest stage of the production line. You can build the fastest stamping press in the world, but if the conveyor belt moving parts from station to station runs at half speed, the press sits idle half the time. The entire facility's output is capped by the bottleneck, regardless of how capable the other components are.
In an AI factory, the equivalent of the conveyor belt is the interconnect fabric — the network that moves model weights, activations, gradients, and data between GPU chips, between GPU pods, and between compute and storage. As GPU compute density increases with each Blackwell generation, the interconnect must keep pace or become the governing bottleneck.
NVIDIA's Blackwell architecture already addressed part of this problem at the intra-rack level. Fifth-generation NVLink delivers 1.8 TB/s of GPU-to-GPU bandwidth within the NVL72 rack configuration. The inter-rack problem — how 10 NVL72 racks talk to each other, how 100 racks talk to each other — is where optical interconnects become non-negotiable. Terabytes of model weights and optimizer state distributed across a 100,000-GPU cluster cannot be efficiently synchronized over copper fabric. The latency and bandwidth constraints make all-reduce operations prohibitively slow.
Jensen Huang has described future AI supercomputers as systems consuming gigawatts of power within single facilities. At that scale, the optical interconnect is not a feature — it is structural. You cannot build a 1-gigawatt AI factory without a photonic backbone. NVIDIA's $4 billion commitment to Lumentum and Coherent is the infrastructure layer for Jensen's vision becoming physically realizable.
The Ayar Labs parallel: silicon photonics inside the package
NVIDIA's Lumentum and Coherent investments address the between-rack and between-pod connectivity problem — the long-reach optical links that replace copper cables across the data hall floor. A parallel development addresses an even shorter range: optical interconnects inside the chip package itself.
Ayar Labs, a startup that NVIDIA invested in as part of a $500 million funding round, is developing optical I/O chiplets that replace electrical copper connections between chips on a substrate with optical waveguides. The concept is called co-packaged optics or in-package optics, and it extends the optical advantage from the data center scale down to the millimeter scale.
Inside a modern GPU package, data moves between the compute die, memory dies (HBM), and the package's external interface over tiny copper traces on a silicon interposer. Those traces face the same fundamental physics limitations as longer copper links — skin effect, crosstalk, power dissipation — just at a smaller scale. At the bandwidth densities required for future GPU generations (HBM5 and beyond), the electrical interface may become the next bottleneck even inside the package.
Ayar Labs' photonic I/O chiplet uses waveguides etched in silicon to carry light between chips. It can achieve terabits per second of aggregate bandwidth between chiplets at dramatically lower power than equivalent electrical I/O. NVIDIA's investment in Ayar Labs is a bet that this technology will be production-ready for integration into future GPU generations, extending the optical advantage all the way down to the chip-to-chip communication layer.
Together, the Ayar Labs investment and the Lumentum/Coherent commitments form a coherent (no pun intended) optical stack strategy: photonics at the intra-package level, photonics at the rack-to-rack level, and photonics at the pod-to-pod level. The entire signal chain, from GPU die to the far end of the data center, becomes optical. Copper is relegated to the last centimeters of signal conversion at each endpoint.
Who else is racing for the optical supply chain
NVIDIA's $4 billion commitment is the largest single optical supply chain investment by any AI company, but it is not happening in isolation. The entire hyperscaler community is grappling with the same connectivity bottleneck and pursuing optical supply in parallel.
Microsoft has been quietly increasing its optical transceiver procurement for Azure AI regions. Its Project Olympus rack specification includes optical fabric requirements that go beyond current hyperscaler standard deployments. Microsoft has investment relationships with several optical component manufacturers but has not made commitments at NVIDIA's scale.
Google manufactures many of its optical components internally through its advanced connectivity team, reducing dependence on the merchant market. Its Jupiter data center network fabric uses custom silicon and optical components designed in-house. This gives Google supply certainty but creates a proprietary island — it cannot easily share component costs with other hyperscalers.
Amazon has taken a hybrid approach, sourcing optical components from the merchant market (including Coherent and Lumentum) while developing custom optical designs for specific high-density applications within AWS. Its custom silicon program at Annapurna Labs has not yet extended to full optical component manufacturing.
Meta is an aggressive optical adopter. Its 400G OSFP deployments at Altoona and other hyperscale campuses have been among the largest optical transceiver purchases in the market. Meta has been vocal about needing the optical supply chain to scale faster than it currently is.
NVIDIA's investment creates a dynamic where the company that makes the AI compute chips is also becoming a significant investor in the optical connectivity components. This vertical integration — or at least vertical influence — gives NVIDIA a competitive advantage in delivering complete "AI factory" solutions to customers who want to buy a full rack-scale system rather than assembling it from components.
What this means for data center design
The practical implication of AI clusters becoming optically interconnected changes how data centers are designed and built.
Distance flexibility: Optical interconnects allow compute pods to be placed further apart within a facility. With copper, GPU racks must be clustered tightly to keep cable lengths manageable. With optical fiber, pods can be distributed across a large hall or even across multiple buildings connected by dark fiber — each GPU still communicates at full bandwidth regardless of physical distance. This changes cooling and power distribution design: instead of cramming 10,000 GPUs into a dense hot aisle, operators can spread them across a facility for more effective thermal management.
Switch port economics: As optical transceivers become cheaper, the economics of large-scale switch fabrics improve. A 100,000-GPU cluster requires thousands of leaf switches and thousands of spine and core switches above them. At current 800G optical transceiver pricing, the network fabric for such a cluster costs hundreds of millions of dollars. As pricing declines under NVIDIA's volume commitment to Lumentum and Coherent, the network fraction of total cluster cost decreases.
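The fabric-cost arithmetic is simple to sketch. Unit prices and per-GPU link counts below are illustrative assumptions, not quoted market prices:

```python
# Rough transceiver-cost arithmetic for a large cluster fabric.
# links_per_gpu and the unit price are illustrative assumptions.

def fabric_cost_usd(n_gpus: int, links_per_gpu: float,
                    transceivers_per_link: int = 2,
                    price_per_800g_xcvr: float = 900.0) -> float:
    """Transceiver spend: every optical link needs a module at each end."""
    return n_gpus * links_per_gpu * transceivers_per_link * price_per_800g_xcvr

cost = fabric_cost_usd(100_000, links_per_gpu=3)
print(f"~${cost / 1e6:.0f}M in transceivers alone")
```

Even modest per-unit price declines compound across the hundreds of thousands of modules a cluster of this size consumes, which is why volume commitments move the economics.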
Power per rack: High-speed copper links generate heat within the cable and at the connector. Optical links generate heat primarily at the transceiver (optical-electrical conversion), not in the cable itself. For very dense rack configurations — 100 kilowatts per rack and above — eliminating copper cable heat load simplifies thermal design. The heat is better localized at the transceiver, where directed cooling can address it efficiently.
Reliability: Optical fiber is immune to electromagnetic interference. Dense GPU clusters generate significant electromagnetic fields from high-frequency switching. Copper cables in proximity to high-power GPU racks require shielding and careful routing. Optical fiber has no such constraint — it can run adjacent to power cables and GPU cooling equipment without signal degradation.
Investment and market implications
The $4 billion commitment changes the strategic positioning of both Lumentum and Coherent in ways that extend beyond the revenue the deals guarantee.
For Lumentum: A $2 billion purchase commitment against a company with a ~$5 billion market cap is transformative. The guaranteed revenue reduces earnings volatility, lowers cost of capital for capacity expansion, and signals to other potential customers that Lumentum's technology is validated at the highest level. The stock's 12% move on the announcement likely understates the long-term revenue impact if NVIDIA's AI factory build-out proceeds at the pace Jensen Huang has described.
For Coherent: The $2 billion commitment adds to an already-larger revenue base (Coherent generates approximately $5 billion in annual revenue across its broader photonics and compound semiconductor business). The AI data center portion of Coherent's revenue has been growing rapidly but was not the dominant segment before this deal. The NVIDIA commitment accelerates the shift of Coherent's mix toward high-margin datacom products and provides capital certainty for the silicon photonics manufacturing investments the company has been making at its facilities in Pennsylvania and New Hampshire.
For the optical component market broadly: When NVIDIA commits $4 billion to two suppliers, it signals to every other buyer that optical interconnect supply will be contested. Companies that have not secured long-term supply relationships for high-speed transceivers — particularly at 800G and the emerging 1.6T speeds — are now more exposed. Microsoft, Amazon, and Meta will be watching NVIDIA's moves and reconsidering their own supply chain strategies accordingly.
For NVIDIA's competitive positioning: The optical supply investments reinforce NVIDIA's ambition to be a full-stack AI infrastructure vendor, not merely a GPU maker. A customer who buys NVIDIA GPUs, NVIDIA NVLink switches, and now optical interconnects supplied through NVIDIA's preferred vendors is deeply embedded in the NVIDIA ecosystem. The switching costs rise with each additional layer of the stack that NVIDIA touches.
Frequently asked questions
What exactly are Lumentum and Coherent being paid to produce?
Both companies will supply optical transceiver components and modules used to connect GPU servers within and between AI data center racks. Specifically: laser sources (primarily externally modulated lasers and VCSELs), photonic integrated circuits, and complete pluggable optical transceiver modules at 400G, 800G, and emerging 1.6T speeds. The components replace copper cables for distances beyond a few meters, enabling the high-bandwidth low-latency interconnects that large AI training clusters require.
Why is NVIDIA investing in optical components rather than building them itself?
Optical component manufacturing requires highly specialized semiconductor fabrication capabilities — compound semiconductor (InP, GaAs) wafer growth, photonic integrated circuit processing, and precision optical assembly — that are completely different from the silicon CMOS processes used to make GPU chips. Acquiring or building these capabilities from scratch would take a decade and cost many times the $4 billion committed. Securing supply through purchase agreements achieves NVIDIA's goal of guaranteed access without the capital intensity and technology risk of vertical integration into compound semiconductor manufacturing.
Does this investment make Lumentum or Coherent exclusive to NVIDIA?
No. The agreements are structured as multi-year purchase commitments and capacity rights, not exclusivity deals. Both Lumentum and Coherent will continue to sell to other customers, including hyperscalers, telecommunications companies, and other data center operators. NVIDIA's agreements give it preferential access to a defined volume of capacity, not a lock on the entire company's output. The non-exclusive structure also keeps both companies from becoming overly dependent on a single customer.
How does this relate to the Ayar Labs investment?
The Ayar Labs investment and the Lumentum/Coherent commitments address different physical layers of the same optical interconnect strategy. Ayar Labs is developing in-package optical I/O that replaces copper connections between chips on a substrate — operating at millimeter to centimeter distances. Lumentum and Coherent supply optical components for rack-to-rack and pod-to-pod connections at distances of meters to hundreds of meters. Together they represent NVIDIA's ambition to make the entire signal chain — from GPU die to the far end of the data center — optical rather than electrical.
When will the optical interconnect components become standard in AI data centers?
800G optical transceivers are already deployed in the most advanced AI data center builds. The transition from copper to optical at shorter reaches (within the rack, between adjacent racks) is accelerating but not yet complete. Mainstream adoption of optical for all inter-rack connections in AI clusters is expected between 2026 and 2028, with co-packaged and in-package optics (the approach Ayar Labs is pursuing) likely reaching production-ready status for GPU integration around 2028–2030. NVIDIA's supply commitments accelerate the manufacturing scale-up that drives the cost reduction enabling mainstream adoption.
What does this mean for companies that make copper interconnect products?
The long-term trend toward optical at scale is a headwind for copper direct-attach cable (DAC) and active optical cable (AOC) manufacturers at the distances where optical is superior. However, copper retains advantages at very short distances (within a rack, from GPU to top-of-rack switch) where transceiver conversion overhead makes optical less efficient. The copper interconnect market will not disappear — it will consolidate to the short-reach segments where its cost advantage remains intact, while optical captures the medium and long-reach segments that AI cluster scaling demands.
How does NVIDIA's photonics strategy affect the competitive position against AMD and custom silicon?
AMD's GPU systems and the custom silicon programs at Google (TPUs), Amazon (Trainium), and Microsoft (Maia) all face the same optical interconnect bottleneck. None of them have made comparable supply commitments to optical component manufacturers. NVIDIA's early and large-scale commitment to Lumentum and Coherent gives it a supply chain advantage that could manifest as better cluster-level performance (more consistent optical component quality), lower component cost (volume discounts), and first access to next-generation optical products from both vendors. In a market where AI factory build timelines are measured in months, supply chain certainty is a competitive moat.