TL;DR: Ayar Labs closed a $500 million Series E at a $3.75 billion valuation on March 3, 2026, bringing total funding to $870 million. Neuberger Berman led the round; NVIDIA, AMD, Sequoia Capital, ARK Invest, and the Qatar Investment Authority participated. The MIT spinout is commercializing silicon photonics — optical interconnects that replace copper wires inside AI servers with pulses of light — to solve the bandwidth bottleneck throttling AI cluster performance. The co-investment of two arch-rival chipmakers in the same startup is without precedent in the semiconductor industry and functions as a joint endorsement that copper-based interconnects are hitting a physical ceiling that only optics can raise.
What you will learn
- The round: $500M at a $3.75B valuation
- Why NVIDIA and AMD are co-investing as rivals
- The copper bottleneck: why AI clusters are starving for bandwidth
- What silicon photonics actually is
- Ayar Labs' technology: TeraPHY and SuperNova
- MIT origins and the academic foundation
- The Taiwan office: TSMC proximity as strategy
- How optical interconnects change AI economics
- The competitive landscape: who else is working on this
- What $870M in total funding signals about the timeline
- Implications for AI infrastructure investment
The round: $500M at a $3.75B valuation
The Series E figure is large enough to demand context.
$500 million is a Series E for a company that makes interconnect chips — not AI models, not cloud platforms, not enterprise software. Ayar Labs is solving a plumbing problem inside data center servers. The fact that this problem is now worth nearly $4 billion at Series E valuation is a measure of how acute the bottleneck has become.
Neuberger Berman led the round. Neuberger is a $500 billion asset manager with a long-term, conviction-driven investment style — not a fund that leads early-stage semiconductor bets for speculative returns. Their decision to anchor this round reflects an institutional view that silicon photonics is transitioning from research prototype to production infrastructure within a near-term horizon.
The strategic investor list is the more striking story. NVIDIA and AMD — companies that have competed bitterly for every dollar of AI chip spend since 2022 — wrote checks into the same cap table. Add Sequoia Capital, one of the most selective growth-stage investors in technology, plus ARK Invest and the Qatar Investment Authority, and you have a syndicate that spans public market investors, sovereign wealth, and the two companies whose own products Ayar Labs' technology is designed to serve.
The prior rounds totaled $370 million. This single Series E is larger than everything Ayar Labs had raised in its entire history up to this point. That acceleration in capital deployment — and the strategic character of the investors — indicates a company that has cleared the proof-of-concept phase and is moving into manufacturing scale-up.
Why NVIDIA and AMD are co-investing as rivals
The co-investment deserves its own section because it is genuinely unusual.
NVIDIA and AMD compete on every significant metric in the AI chip market: performance per dollar, memory bandwidth, software ecosystem, foundry relationships, and enterprise sales. When NVIDIA releases a new GPU architecture, AMD's stock moves in response. The two companies do not typically sit across the same term sheet. They do not typically sit on the same portfolio company's board. They certainly do not typically co-invest in the infrastructure that makes both of their products work better.
That they are doing exactly that here is an implicit acknowledgment by both companies that the interconnect bottleneck is real, that solving it is in their shared interest, and that neither company has the specific expertise or IP to solve it alone.
The logic from NVIDIA's perspective: NVIDIA's H100 and Blackwell chips are limited in practice not by raw compute but by how fast data can move between them. An AI training cluster with 10,000 H100s is only as fast as the interconnect linking those chips. NVLink and short-reach InfiniBand — NVIDIA's current interconnect strategies — are electrical at the chip level (longer InfiniBand runs already use pluggable optics, but the path out of the chip package remains copper). As cluster sizes grow toward 100,000 chips and beyond (the scale required for frontier model training by 2027–2028), electrical interconnects hit physical limits: resistance, signal degradation, and heat dissipation at the cable lengths required. Optical interconnects eliminate these limits. A successful Ayar Labs expands the performance envelope of every NVIDIA chip already sold.
The logic from AMD's perspective: identical. AMD's MI300X and successor chips face the same interconnect constraints. AMD's path to competing with NVIDIA at the largest cluster scales depends on having interconnects that can keep up with its accelerators. AMD does not have NVIDIA's NVLink ecosystem. It has an even stronger incentive to see a vendor-neutral optical interconnect standard succeed — because that standard would level the playing field at the infrastructure layer that currently gives NVIDIA a systems-level advantage.
The joint investment signals that both companies have implicitly agreed not to compete on the interconnect layer. The interconnect becomes shared infrastructure — like the TCP/IP stack or the PCIe bus — while they compete on the compute layer above it.
The copper bottleneck: why AI clusters are starving for bandwidth
The technical problem Ayar Labs is solving requires some grounding in how AI clusters actually work.
A frontier model training run — GPT-4, Gemini Ultra, Claude 3 Opus — does not fit on a single chip. The models are too large. The training requires distributing computation across thousands of accelerator chips, with each chip responsible for a fraction of the model's parameters. For this to work, the chips must communicate constantly: sharing gradient updates, synchronizing weights, exchanging intermediate activations. The efficiency of this communication directly determines the efficiency of the training run.
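The scale of that communication can be sketched with a back-of-envelope calculation. The sketch below assumes a ring all-reduce (a standard gradient-synchronization pattern in data-parallel training) and illustrative model, precision, and cluster sizes; none of the figures are specific to any vendor or training run:

```python
# Back-of-envelope estimate of per-GPU gradient traffic for one training
# step under data parallelism with a ring all-reduce. All model, precision,
# and cluster numbers are illustrative assumptions.

def allreduce_bytes_per_gpu(n_params: float, bytes_per_param: int, n_gpus: int) -> float:
    """Ring all-reduce moves 2*(N-1)/N times the gradient buffer per GPU."""
    return 2 * (n_gpus - 1) / n_gpus * n_params * bytes_per_param

# 70B-parameter model, bf16 gradients (2 bytes/param), 1,024 GPUs:
traffic = allreduce_bytes_per_gpu(70e9, 2, 1024)
print(f"per-GPU traffic per step: {traffic / 1e9:.0f} GB")

# Time spent communicating per step, assuming no overlap with compute:
for label, bw in [("100 GB/s electrical", 100e9), ("1 TB/s optical", 1e12)]:
    print(f"{label}: {traffic / bw:.2f} s")
```

The point of the sketch is the ratio, not the absolute numbers: roughly a tenfold jump in interconnect bandwidth cuts communication time per step by the same factor, which is why the interconnect sets the ceiling on cluster efficiency.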
The interconnects — the physical cables and switching infrastructure connecting the chips — are the communication medium. For decades, these interconnects were copper electrical cables. Copper works at short distances and moderate data rates. For the data center interconnects of the 2010s, it was entirely adequate.
AI clusters of 2024–2026 have broken copper's ability to keep up. The numbers are specific and sobering.
A single NVIDIA H100 exposes 900 GB/s of NVLink chip-to-chip bandwidth (its on-package HBM3 memory bandwidth is higher still). An H100 cluster of 10,000 chips needs to move data between chips at aggregate rates that exceed what copper cabling can sustain across the distances involved in a physical data center — tens of meters between server racks. Copper signal integrity degrades sharply beyond a few meters at the data rates required. Copper cables also generate heat: a data center full of high-speed copper interconnects has a meaningful thermal load from the cables themselves.
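Multiplying the per-chip figure out to cluster scale shows why copper struggles. The link capacity and implied link count below are crude illustrations that ignore network topology and oversubscription:

```python
# Aggregate chip-to-chip bandwidth demand at cluster scale, using the
# 900 GB/s per-chip figure quoted above. The implied link count is a crude
# upper bound that ignores topology and oversubscription.

n_gpus = 10_000
per_gpu_bw = 900e9                  # bytes/s of chip-to-chip bandwidth

aggregate = n_gpus * per_gpu_bw     # total demand across the cluster
print(f"aggregate: {aggregate / 1e15:.0f} PB/s")

link_bw = 100e9                     # bytes/s per 800 Gbps link
print(f"800G links needed (upper bound): {aggregate / link_bw:,.0f}")
```

Tens of thousands of discrete high-speed links, each with its own reach and thermal constraints, is the regime where copper's few-meter signal-integrity limit becomes an architecture-level problem.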
The consequence is that today's largest AI clusters are running below their theoretical peak throughput. The GPUs are faster than the network connecting them. Engineers compensate with techniques that minimize or hide communication (gradient accumulation, overlapping communication with computation, careful tensor and pipeline parallelism layouts) — but these are workarounds for a hardware constraint, not solutions to it.
Optical interconnects replace the copper wire with a fiber that carries light. Light travels through fiber with negligible signal degradation at data center distances. It generates no heat from resistance. It can carry multiple data streams simultaneously on different wavelengths (wavelength division multiplexing). The bandwidth ceiling is not a physical law — it is an engineering problem that has been solved in long-haul fiber optics for thirty years.
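The wavelength division multiplexing arithmetic is worth making concrete. The wavelength counts and per-wavelength line rates below are illustrative assumptions, not published Ayar Labs specifications:

```python
# Wavelength division multiplexing: one fiber, several independent streams
# on different colors of light. Wavelength counts and per-wavelength line
# rates are illustrative assumptions, not Ayar Labs specifications.

wavelengths = 8                 # lasers in the comb
rate_per_wavelength = 32e9      # bits/s modulated onto each wavelength

fiber_bw = wavelengths * rate_per_wavelength
print(f"per-fiber bandwidth: {fiber_bw / 1e9:.0f} Gbps")

# Raising either axis lifts the ceiling without laying new fiber:
print(f"16 wavelengths x 64 Gbps = {16 * 64e9 / 1e12:.3f} Tbps")
```

This multiplicative scaling — more wavelengths, faster modulation per wavelength, or both — is what copper lacks: a single electrical trace carries one stream, and pushing its rate higher runs directly into resistance and signal-integrity limits.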
The challenge Ayar Labs is solving is not "how do you send data on light" — that is understood. The challenge is "how do you build optical transceivers that are small enough, cheap enough, and power-efficient enough to replace copper inside a server chassis" — at the scale AI clusters require.
What silicon photonics actually is
Silicon photonics is the integration of optical components — waveguides, modulators, photodetectors — onto silicon chips using the same semiconductor manufacturing processes (CMOS) used to make conventional computer chips.
The term breaks down as follows. "Silicon" refers to the substrate and the manufacturing process: standard silicon wafer fabrication at TSMC or similar foundries. "Photonics" refers to the manipulation of light rather than electrons: routing light through waveguides, modulating it to encode data, and detecting it to decode data.
The key insight of silicon photonics is that you can build optical components using standard chip-making equipment. You do not need exotic materials or specialized manufacturing lines. The waveguides that channel light are etched into silicon using the same lithography tools that etch transistors. The modulators that encode data onto light signals are built from silicon structures whose optical properties change with applied voltage — the same kind of voltage manipulation that controls transistors.
This manufacturing compatibility is what makes silicon photonics economically viable for data center interconnects. Traditional optical components — the kind used in long-haul fiber optic cables — are made from indium phosphide, a compound semiconductor that requires different, more expensive manufacturing lines. Indium phosphide transceivers exist and work well, but they are expensive to produce at the volumes AI data centers require and difficult to integrate tightly with silicon chips.
Silicon photonics changes the cost equation by making optical components manufacturable at semiconductor foundry scale. A silicon photonics chip can be made on the same wafer line as any other chip. Volumes that would make indium phosphide transceivers prohibitively expensive become cost-effective in silicon photonics.
The remaining challenge in silicon photonics — and it is a significant one — is the light source. Silicon does not emit light efficiently; it is a poor laser material. Most silicon photonics systems use an external laser (often indium phosphide) whose light is coupled into the silicon chip. Managing this coupling efficiently, and integrating it into a dense package alongside conventional silicon chips, is a core engineering challenge.
Ayar Labs' technology: TeraPHY and SuperNova
Ayar Labs has developed two products that together constitute their approach to solving the optical interconnect problem.
TeraPHY is the optical I/O chiplet. A chiplet is a small semiconductor die designed to be integrated alongside other chips in a single package — rather than being a standalone chip. TeraPHY integrates optical transceivers (transmitters and receivers) into a chiplet that can be co-packaged directly with an AI accelerator. The physical proximity is key: by placing the optical interface inside the same package as the compute chip, Ayar Labs eliminates the electrical bottleneck that occurs when data must travel from the chip to an external optical transceiver via conventional electrical traces on a circuit board.
The TeraPHY chiplet converts electrical signals from the AI accelerator into light signals for transmission, and light signals received from the network back into electrical signals — all within millimeters of the compute chip itself. This co-packaged optical approach is distinct from pluggable optical transceivers that sit in a separate module at the edge of the server. Pluggable transceivers work, but they reintroduce an electrical path (from chip to transceiver) that has its own bandwidth limitations.
SuperNova is the light source: a multi-wavelength laser that provides the optical carrier for TeraPHY's transmission. The SuperNova generates multiple wavelengths of light simultaneously (a comb of colors), enabling wavelength division multiplexing — sending multiple independent data streams on the same fiber using different colors of light. This dramatically increases the effective bandwidth of a single fiber connection.
The combination of TeraPHY and SuperNova is designed to deliver terabit-per-second bandwidth per chip at power levels competitive with high-speed copper alternatives. The key performance claims, based on Ayar Labs' published specifications: 2 Tbps aggregate bandwidth per TeraPHY chiplet, with power efficiency targets in the range of 2 picojoules per bit — a metric that compares favorably with the energy consumption of equivalent copper-based high-speed SerDes interfaces.
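Energy-per-bit figures translate directly into watts: power is bandwidth times energy per bit. The sketch below uses the 2 Tbps and 2 pJ/bit figures quoted above; the roughly 10 pJ/bit copper SerDes comparison point is an assumption for illustration:

```python
# Converting energy-per-bit into power: watts = (bits/s) x (joules/bit).
# Uses the 2 Tbps and 2 pJ/bit figures quoted in the text; the ~10 pJ/bit
# copper SerDes comparison point is an assumption for illustration.

def link_power_watts(bits_per_s: float, pj_per_bit: float) -> float:
    return bits_per_s * pj_per_bit * 1e-12   # pJ -> J

optical_w = link_power_watts(2e12, 2.0)   # TeraPHY-class target
copper_w = link_power_watts(2e12, 10.0)   # assumed copper equivalent
print(f"optical: {optical_w:.0f} W, copper: {copper_w:.0f} W per 2 Tbps")
```

Single-digit watts per 2 Tbps of I/O is the kind of budget that makes co-packaging thermally viable next to an accelerator that already dissipates hundreds of watts.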
The co-packaging approach is what differentiates Ayar Labs from companies that make pluggable optical modules. Nearly every data center already uses pluggable optical transceivers for rack-to-rack connections. Ayar Labs is targeting the connection that was previously always copper: the one inside the server, between compute chips.
MIT origins and the academic foundation
Ayar Labs was founded in 2015 as a spinout from MIT and UC Berkeley research on silicon photonics integration.
The founding team came out of the research groups working on electronic-photonic co-integration — specifically the work on photonic networks-on-chip and silicon photonic transceivers developed at MIT's Research Laboratory of Electronics. The academic foundation is significant because silicon photonics is not an obvious engineering path: it requires deep expertise in both photonics physics and semiconductor process engineering, a combination that rarely exists outside research universities and national labs.
The MIT connection also explains why Ayar Labs' technology is foundationally different from companies that built optical interconnects using conventional photonic approaches. The research that informed TeraPHY specifically addressed the co-packaging integration challenge — how to place a photonic chiplet in close physical proximity to a digital chip without the coupling losses and signal integrity issues that had previously made co-packaged optics impractical at scale.
That research took roughly a decade to translate from academic publication to investor-backed product. The $870 million in total funding represents the cost of that translation: building semiconductor products requires extensive tape-outs (wafer fabrication runs for prototyping), testing, reliability qualification, and foundry relationships — all of which are capital-intensive before a single production unit ships.
The MIT pedigree also provides credibility in a market where technology claims about photonics performance can be difficult for non-specialists to evaluate. When NVIDIA and AMD diligence an optical interconnect company, they bring their own chip architecture teams to evaluate the technical claims. The fact that both companies chose to invest — after that diligence — is stronger validation than any published benchmark.
The Taiwan office: TSMC proximity as strategy
The announcement of a new Taiwan production office is a detail that deserves more attention than it typically receives in coverage of fundraising rounds.
Taiwan is where TSMC manufactures chips. TSMC is, by a significant margin, the world's most advanced semiconductor foundry. Nearly every high-performance chip in the AI supply chain — NVIDIA's H100 and Blackwell, AMD's MI300X, and Apple's M-series — is manufactured by TSMC, and TSMC's advanced packaging lines are where a chiplet like TeraPHY would be integrated with those chips.
The practical reason for being close to TSMC is the same reason any fabless semiconductor company opens an office near its foundry: tape-out coordination, process development, yield engineering, and failure analysis all happen faster and more effectively when engineers can meet with TSMC counterparts in person and can physically access silicon that comes back from a fabrication run.
But the strategic reason is more important. TSMC is developing its own advanced packaging technologies — specifically TSMC's SoIC (System on Integrated Chips) and CoWoS (Chip on Wafer on Substrate) processes — that are precisely the technologies needed to co-package Ayar Labs' TeraPHY chiplet with NVIDIA or AMD chips. Integrating a photonic chiplet with an AI accelerator in the same package requires packaging expertise that only TSMC and a small number of other facilities can provide at production scale.
Being physically present in Taiwan signals that Ayar Labs is past the laboratory phase and is actively working with TSMC on the packaging integration required to move TeraPHY from prototype to production. The Taiwan office is not a support function. It is the team responsible for making the product manufacturable.
How optical interconnects change AI economics
The downstream financial implication of a successful optical interconnect transition is a restructuring of how AI infrastructure costs are allocated.
Today's AI cluster costs are dominated by three factors: compute chips (GPUs/accelerators), networking (InfiniBand switches, copper cables, NICs), and power (electricity for chips and cooling). Of these, networking infrastructure at scale can represent 20–30% of total system cost and a comparable fraction of operating power consumption.
Optical interconnects change two of these three cost factors.
On the capital cost side: optical interconnects are not currently cheaper than copper at equivalent bandwidth. The economics are about what you get for the cost, not the raw price. An optical interconnect at $X delivers dramatically higher bandwidth, over far longer distances, than a copper interconnect at $X. The relevant comparison is cost per gigabit-second of bandwidth — and optical wins decisively at the bandwidth levels AI clusters require. As production volumes scale (driven by the AI build-out that Ayar Labs' investors are funding), the cost per unit of optical I/O is expected to approach copper alternatives.
On the operating cost side: optical interconnects consume less power for equivalent data throughput because they eliminate the resistive losses in copper. A copper cable moving 400 Gbps dissipates meaningful power as heat. An optical fiber carrying the same data dissipates almost none. At data center scale — hundreds of megawatts of compute — the power saved by optical interconnects is measurable in megawatts, which translates directly to lower electricity costs and reduced cooling infrastructure requirements.
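A rough sense of the facility-scale numbers, with every input an illustrative assumption rather than a vendor figure:

```python
# Facility-scale interconnect power at two energy-per-bit levels. Every
# input here is an illustrative assumption, not a vendor figure.

n_links = 100_000       # high-speed chip-to-chip links in the facility
bw_per_link = 2e12      # bits/s per link

def facility_mw(pj_per_bit: float) -> float:
    return n_links * bw_per_link * pj_per_bit * 1e-12 / 1e6

saved = facility_mw(10.0) - facility_mw(2.0)
print(f"copper: {facility_mw(10.0):.1f} MW, optical: {facility_mw(2.0):.1f} MW, "
      f"saved: {saved:.1f} MW before counting the avoided cooling load")
```

Because every watt not dissipated in a cable is also a watt the cooling plant never has to remove, the real operating saving is larger than the raw interconnect figure, by a factor that depends on the facility's PUE.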
The performance implication is the most significant: if optical interconnects remove the bandwidth constraint on AI cluster communication, training runs that currently require 10,000 chips to achieve a given throughput might achieve the same throughput with 7,000–8,000 chips that are better utilized. Fewer chips, better utilized, consuming less power per unit of output — the economics compound in AI operators' favor at every scale.
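The chip-count arithmetic behind that claim is simple: sustained throughput is chip count times per-chip peak times utilization. The utilization and performance figures below are illustrative assumptions chosen to match the 10,000-to-roughly-7,000 example above:

```python
import math

# Sustained throughput = chips x per-chip peak x utilization. If optics lift
# utilization by removing the communication bottleneck, fewer chips reach
# the same target. All numbers are illustrative assumptions.

def chips_needed(target_pflops: float, peak_per_chip: float, utilization: float) -> int:
    """Smallest chip count whose sustained throughput meets the target."""
    return math.ceil(target_pflops / (peak_per_chip * utilization))

target = 10_000.0   # PFLOP/s of sustained training throughput required
peak = 2.0          # PFLOP/s peak per accelerator (assumed)

print(chips_needed(target, peak, 0.50))  # bandwidth-constrained cluster
print(chips_needed(target, peak, 0.70))  # communication-abundant cluster
```

Moving assumed utilization from 50% to 70% cuts the required chip count by roughly a quarter, and the saving recurs in power, cooling, and floor space, not just in the hardware invoice.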
The implication for model development is structural: larger models become tractable on the same budget. Communication overhead — currently a major limiting factor in scaling AI training runs — diminishes when the interconnect is no longer the bottleneck. The models that will be trained on clusters equipped with optical interconnects in 2027–2029 may be architecturally different from those trained today, because model architects will be designing for a communication-abundant environment rather than a bandwidth-constrained one.
The competitive landscape: who else is working on this
Ayar Labs is not alone in pursuing optical interconnects for AI data centers, but the competitive landscape is more fragmented than the funding round might suggest.
Intel has been developing silicon photonics for over a decade, primarily for data center networking. Intel's silicon photonics products are mature at the rack-to-rack level (pluggable transceivers) but have not achieved the co-packaged integration that Ayar Labs is targeting. Intel's photonics division is focused on a different product category: replacing DAC (direct attach copper) cables with pluggable optical transceivers at standard form factors, rather than integrating optics inside the chip package.
Broadcom is a significant player in optical components for data center networking. Broadcom's photonics products serve the switch and transceiver market. Like Intel, Broadcom is primarily in the pluggable transceiver segment rather than co-packaged optics.
Lightmatter is a well-funded startup (its $400 million Series D in late 2024 brought total funding to roughly $850 million) pursuing photonic computing — using light not just for interconnects but for the computation itself. Lightmatter's Passage interconnect product competes more directly with Ayar Labs, though the approaches differ. Lightmatter is also developing photonic processors that perform matrix multiplication using light, a fundamentally more ambitious (and risky) technical bet.
Ranovus and Marvell have also developed silicon photonics products for data center applications, with varying degrees of co-packaging integration.
What distinguishes Ayar Labs in this landscape is the combination of: the co-packaged optics approach (chiplet integration rather than pluggable modules), the TSMC manufacturing relationship for production-grade integration, and now the explicit strategic backing of both NVIDIA and AMD. The NVIDIA and AMD investments function as customer validation at the highest level of the ecosystem — the chipmakers whose products TeraPHY must integrate with have decided that Ayar Labs' approach is the one they want to build around.
What $870M in total funding signals about the timeline
The funding trajectory of Ayar Labs tells a story about where silicon photonics sits in its commercialization arc.
Early rounds funded research and development: proving that TeraPHY could be manufactured in a silicon foundry, that the optical coupling worked at the required performance levels, and that the co-packaging integration was achievable. This is expensive work — each tape-out costs millions of dollars, and multiple tape-outs are required to optimize a chip design.
The $500 million Series E is not R&D funding. A round of this size, at this valuation, from these investors, is manufacturing scale-up capital. It is the money required to move from prototype to production: building out the supply chain, qualifying the manufacturing process for high-volume output, hiring the application engineering teams that will help NVIDIA and AMD integrate TeraPHY into their chip packages, and opening the Taiwan office to manage the TSMC relationship at production scale.
The 18-to-36-month window from a manufacturing scale-up round to first production shipments is standard for semiconductor products. If the Series E closed in March 2026, a plausible timeline for TeraPHY in production AI systems is late 2027 to early 2028 — which coincides with the expected deployment timeframe for NVIDIA's post-Blackwell architectures and AMD's MI400-series successors.
This timeline alignment is not accidental. NVIDIA and AMD invest strategically, not speculatively. Their co-investment in Ayar Labs is structured around a specific product roadmap that has them integrating TeraPHY into their next-generation chip packages. The capital is flowing now because the integration work must begin now to meet the 2027–2028 chip design schedule.
The $3.75 billion valuation reflects what investors believe Ayar Labs is worth if TeraPHY reaches production in next-generation AI chips at the volumes the AI infrastructure build-out implies. If the 100,000-chip AI clusters expected in 2028–2029 each require optical interconnects, the total addressable market for a co-packaged optical I/O solution is measured in tens of billions of dollars annually.
Implications for AI infrastructure investment
The Ayar Labs Series E is a data point in a broader thesis: the physical infrastructure of AI is being rebuilt from scratch, and the rebuilding extends below the chip level to the materials and physics of how chips communicate.
For AI infrastructure investors, the implication is that hardware investment in AI is not converging to a single dominant architecture — it is differentiating into specialized layers. Compute (GPUs, LPUs, TPUs), memory (HBM, LPDDR, CXL-attached), interconnect (copper NVLink, optical chiplets), and packaging (2D, 2.5D, 3D integration) are each becoming distinct investment categories with distinct technical leaders and competitive dynamics.
The NVIDIA and AMD co-investment in Ayar Labs is a template for how chipmakers will secure their supply chains in this differentiated stack. Neither company can afford to let a competitor lock up the interconnect technology that their chips depend on. By co-investing, they ensure that the interconnect layer remains accessible to both — and they signal to the market that co-packaged optics is the direction their roadmaps are heading.
For enterprise buyers of AI infrastructure, the practical implication is a watch-and-wait posture on near-term investments. Clusters built on copper interconnects in 2026 will not be obsolete — they will remain fully functional. But clusters built in 2028 on optical interconnect architectures will have meaningfully better performance-per-dollar at the largest scales. Organizations planning multi-year AI infrastructure investments should model a hardware refresh cycle that aligns with the expected arrival of optical interconnect-equipped AI systems.
The Ayar Labs round is, at its core, a bet that physics wins. Copper has limits that are not engineering problems — they are fundamental properties of electrons moving through resistive material. Light does not share those limits. The question has never been whether optical interconnects would eventually replace copper in AI clusters. The question has always been when, and who. With $870 million raised, NVIDIA and AMD on the cap table, and a Taiwan office opening beside TSMC, the answer to both questions is becoming clearer.
Frequently asked questions
What exactly does Ayar Labs make?
Ayar Labs makes optical interconnect chiplets — semiconductor dies that convert electrical signals from AI chips into light signals for transmission, and receive light signals and convert them back to electrical signals. Their primary product, TeraPHY, is designed to be co-packaged directly with AI accelerator chips from NVIDIA and AMD. The companion product, SuperNova, is a multi-wavelength laser that powers the optical transmission. Together, they replace high-speed copper connections inside AI server packages with optical connections, delivering higher bandwidth at lower power consumption.
Why are NVIDIA and AMD both investing in the same company?
Because both companies' products face the same interconnect bottleneck, and neither company has the silicon photonics expertise to solve it internally at the required timeline. By co-investing in Ayar Labs, NVIDIA and AMD are effectively agreeing to share the interconnect infrastructure layer while continuing to compete on the compute layer. This mirrors the way both companies use TSMC's foundry services without competing with each other at the manufacturing level. The co-investment also signals to the supply chain that co-packaged optics is a direction both major chipmakers are committing to — which de-risks Ayar Labs' path to production.
What is the difference between co-packaged optics and pluggable optical transceivers?
Pluggable optical transceivers — the kind already used widely in data centers — are discrete modules that plug into a port on a switch or network interface card. They handle fiber-to-fiber connections between racks or between buildings. Co-packaged optics integrates the optical transceiver directly into the same package as the compute chip, converting the electrical signal to light within millimeters of where it originates. This eliminates the electrical trace that runs from the chip to a pluggable transceiver, which has its own bandwidth and power overhead. Co-packaged optics targets the connection that was previously always copper: inside the server, between chips in the same package or on the same board.
When will products using Ayar Labs' technology appear in AI systems?
No production date has been publicly confirmed. Based on the stage of investment (manufacturing scale-up Series E), the Taiwan office opening for TSMC integration work, and the typical semiconductor design-to-production timeline of 18–36 months, products incorporating TeraPHY in production AI systems are plausible by late 2027 to early 2028. This aligns with the expected next-generation chip architectures from NVIDIA and AMD. Ayar Labs has not published a specific production availability date.
Is Ayar Labs profitable?
As a venture-backed hardware company at Series E, Ayar Labs is in the scale-up phase and has not disclosed profitability. Hardware semiconductor companies at this stage typically operate at a loss while building out manufacturing capacity and customer relationships. The $870 million raised to date is the capital required to reach the production volumes at which unit economics turn favorable. Profitability is a function of when TeraPHY achieves production scale in NVIDIA and AMD system designs — a milestone that the Series E is designed to fund.
How does this relate to the broader copper-to-optics transition in data centers?
The data center networking industry has been transitioning from copper to optical fiber for rack-to-rack and building-to-building connections for over a decade — that transition is largely complete in hyperscale data centers. What Ayar Labs is targeting is the final frontier of that transition: the copper connections inside the server chassis itself, between chips in the same package. This last segment resisted optical replacement because the integration challenges were extreme at those distances and densities. Ayar Labs' co-packaged optical chiplet approach is specifically designed to solve those integration challenges, completing the copper-to-optics transition at every layer of the data center connectivity stack.
What happens to Ayar Labs if NVIDIA or AMD develop competing optical interconnect technology internally?
The strategic investment structure provides some protection: NVIDIA and AMD have financial stakes in Ayar Labs' success, creating aligned incentives rather than pure competitive pressure. Additionally, the silicon photonics expertise required to build TeraPHY is concentrated in a small number of research and engineering teams worldwide — the decade of MIT-rooted research that underlies Ayar Labs' technology is not easily replicated on a short timeline. That said, the risk of internal development at large chipmakers is real, and Ayar Labs' best defense is executing production integration ahead of any internal alternative reaching maturity. The Series E capital is, in part, a race against that clock.