TL;DR: Nexthop AI has closed an oversubscribed $500M Series B led by Lightspeed Venture Partners and a16z at a $4.2B valuation, launching three new AI-optimized switches — the NH-4010, NH-4220, and NH-5010 — purpose-built for AI cluster fabrics. Founded by Anshul Sadana, ex-COO of Arista Networks, the company is targeting a data center networking market expected to hit $100B by 2031 as hyperscalers collectively spend an estimated $650B on data center infrastructure in 2026.
$500M raised. $4.2B valuation. Three new switches announced the same day. Nexthop AI just put the AI networking market on notice — and the investor list reads like every major firm has concluded that the bottleneck holding back AI at scale is not compute. It is the network connecting it.
Why Lightspeed and a16z led the oversubscribed round
On March 10, 2026, Nexthop AI announced it had closed a $500 million Series B that was oversubscribed — meaning demand from investors exceeded the round size before it closed. Lightspeed Venture Partners led the round, with Andreessen Horowitz and Altimeter Capital participating alongside returning investors. The round values the company at $4.2 billion.
Oversubscribed at this scale is a specific signal. It means Nexthop set a target, hit it, had more capital offered than needed, and still closed at the round terms it wanted. At a $4.2 billion post-money valuation for a company in the infrastructure-heavy, capital-intensive networking hardware space, the institutional consensus is clear: Nexthop AI has a credible path to becoming a dominant player in a market that is growing faster than anyone can build for.
The investor thesis is not complicated to articulate, even if it is difficult to execute. Hyperscalers — Microsoft, Google, Amazon, Meta — are collectively spending hundreds of billions of dollars building AI data centers in 2026. Those data centers are full of NVIDIA GPUs and AMD Instinct accelerators. The GPUs sit idle if the network connecting them cannot move data fast enough to keep them fed. Traditional networking hardware was not designed for AI workloads. Nexthop AI is building the hardware that was.
Lightspeed's history in enterprise infrastructure bets — from Arista itself to Nutanix to HashiCorp — makes the lead position legible. a16z's infrastructure portfolio includes a consistent thread of bets on companies that build the rails beneath AI rather than the AI itself. Altimeter's presence signals public-market-caliber institutional conviction. The composition of this round is not accidental.
Read the full announcement on Business Wire and a16z's investment thesis.
Anshul Sadana: the Arista veteran betting everything on AI networking
Nexthop AI's founding story starts at Arista Networks — one of the most successful enterprise networking companies of the past two decades. Anshul Sadana served as Chief Operating Officer at Arista, where he spent over a decade helping build the company from a challenger to a category leader in data center networking. Arista went public in 2014 at a $3 billion valuation; it is worth over $100 billion today.
Sadana left Arista to start Nexthop AI with a specific thesis: the networking architecture that worked for cloud-era workloads — east-west traffic between microservices, large-scale web applications, distributed databases — is structurally mismatched with the demands of AI training and inference at scale.
He is not wrong. AI training requires moving massive tensors between thousands of GPU nodes in highly synchronized patterns. The all-reduce collective operation — in which every GPU's gradient updates must be aggregated and redistributed across the entire cluster on every training step — places demands on network latency, bandwidth, and congestion management that are categorically different from anything data center networking was originally designed to handle.
Sadana's advantage is not just his technical insight. It is his decade-plus of executive operating experience at the company his new startup most needs to displace. He knows Arista's product roadmap, its customer relationships, its organizational decision-making, and its weaknesses. That institutional knowledge is part of what investors are pricing in at $4.2 billion.
The founding team at Nexthop AI is reportedly populated with other Arista, Cisco, and Broadcom alumni — engineers and product leaders who have built production networking hardware and who understand the systems integration complexity that separates a working lab demo from a product a hyperscaler will put in its critical path.
The NH-4010, NH-4220, and NH-5010: what each switch does differently
The Series B announcement came bundled with the launch of three new products — the NH-4010, NH-4220, and NH-5010 — each targeting a distinct layer of the AI data center network stack.
NH-4010 is positioned as the access-layer switch for AI clusters. It handles the final connection between individual GPU servers and the broader fabric. The design emphasis is on extremely low and predictable latency at the first hop — because any jitter at the access layer compounds through the rest of the network. The NH-4010 is optimized for RDMA (Remote Direct Memory Access), the transfer mechanism most commonly used for high-performance AI training communication. It supports RoCEv2 (RDMA over Converged Ethernet version 2) and enhanced ECN (Explicit Congestion Notification) marking to prevent the buffer buildup that kills AI training throughput.
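To make the congestion-marking idea concrete, here is a minimal sketch of how ECN-style marking works in a RoCEv2 fabric: when an egress queue grows past a threshold, the switch marks packets instead of dropping them, and receivers echo the mark so senders slow down before buffers overflow. The threshold values and the linear marking curve below are generic illustrations (RED/DCQCN-style), not the NH-4010's actual implementation.

```python
# Illustrative ECN-style marking: mark probabilistically between a Kmin
# and Kmax queue depth, always mark above Kmax. Thresholds are invented
# for illustration only.
from dataclasses import dataclass
import random

@dataclass
class EgressQueue:
    kmin_bytes: int = 100_000   # below this depth: never mark
    kmax_bytes: int = 400_000   # above this depth: always mark
    depth_bytes: int = 0

    def enqueue(self, pkt_bytes: int) -> bool:
        """Returns True if the packet should carry an ECN CE mark."""
        self.depth_bytes += pkt_bytes
        if self.depth_bytes <= self.kmin_bytes:
            return False
        if self.depth_bytes >= self.kmax_bytes:
            return True
        # Linear marking probability between Kmin and Kmax (RED-style)
        span = self.kmax_bytes - self.kmin_bytes
        p = (self.depth_bytes - self.kmin_bytes) / span
        return random.random() < p

q = EgressQueue()
q.depth_bytes = 50_000
print(q.enqueue(4096))   # shallow queue: no mark (False)
q.depth_bytes = 500_000
print(q.enqueue(4096))   # deep queue: always marked (True)
```

The point of marking early is that RoCEv2 senders back off in response to echoed marks, keeping queues shallow — which is exactly the "buffer buildup" failure mode described above.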
NH-4220 is the aggregation-layer switch — the middle tier that collects traffic from multiple access-layer switches and routes it up toward the core. This is where the traffic patterns become complex and where traditional switches tend to fail AI workloads. The NH-4220 implements Nexthop's proprietary congestion control algorithms at the aggregation layer, designed specifically for the many-to-many communication patterns that collective operations generate. It also supports enhanced Equal-Cost Multi-Path (ECMP) routing to distribute AI traffic evenly across available paths — critical for avoiding hotspots in large clusters.
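The hotspot problem the NH-4220 targets falls directly out of how classic ECMP works. A sketch of the standard hash-based variant — the field names and hash function here are illustrative, not Nexthop's actual algorithm:

```python
# Classic hash-based ECMP: hash the flow's 5-tuple, use the result to
# pick one of N equal-cost uplinks. Packets of one flow stay ordered on
# one path; different flows spread across paths.
import hashlib

def ecmp_path(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              proto: int, num_paths: int) -> int:
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % num_paths

# The same flow always maps to the same uplink:
a = ecmp_path("10.0.0.1", "10.0.1.1", 49152, 4791, 17, num_paths=8)
b = ecmp_path("10.0.0.1", "10.0.1.1", 49152, 4791, 17, num_paths=8)
assert a == b
```

The weakness for AI traffic: a training cluster generates a small number of very large flows, so a plain hash can land several elephant flows on the same uplink while others sit idle. That collision is the hotspot that "enhanced" ECMP variants try to engineer away.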
NH-5010 is the spine-layer switch — the core of the AI network fabric. It is built for the highest bandwidth density and lowest latency of the three products, handling the full-bisection bandwidth that a properly designed AI cluster requires. Nexthop has emphasized the NH-5010's power efficiency relative to competitive spine switches: the company claims significantly better performance-per-watt than traditional alternatives in the same market segment. For data centers paying millions of dollars per month in power costs, that efficiency advantage translates directly to operating economics.
All three switches run Nexthop's own NOS (Network Operating System), which is designed to expose AI-workload-specific telemetry to cluster orchestration software — letting training jobs see network conditions and adapt their communication patterns accordingly. This tight integration between the network fabric and the AI compute layer is a design philosophy the incumbent vendors have not matched.
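What that network-to-compute feedback loop could look like in practice is worth sketching. Everything below — the telemetry schema, the function names, the thresholds, the algorithm choices — is invented for illustration; Nexthop has not published its NOS API.

```python
# Hypothetical sketch of a telemetry feedback loop: the switch NOS exposes
# per-port queue-depth counters, and the orchestration layer uses them to
# pick a collective-communication strategy for the next training step.
# All names and thresholds are illustrative assumptions.
from typing import Dict

def choose_collective_algorithm(port_queue_depths: Dict[str, int],
                                congestion_threshold: int = 200_000) -> str:
    """Pick a collective strategy based on observed fabric congestion."""
    congested = [p for p, d in port_queue_depths.items()
                 if d > congestion_threshold]
    if not congested:
        return "ring-allreduce"          # lowest overhead on a clear fabric
    if len(congested) < len(port_queue_depths) // 2:
        return "tree-allreduce"          # route around a few hot ports
    return "hierarchical-allreduce"      # localize traffic under congestion

telemetry = {"eth1": 50_000, "eth2": 320_000, "eth3": 40_000, "eth4": 45_000}
print(choose_collective_algorithm(telemetry))  # → tree-allreduce
```

The design point is the direction of information flow: instead of the network silently absorbing whatever pattern the training job emits, the job observes the fabric and adapts — which is the integration the article says incumbent operating systems were not built to support.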
SiliconAngle's coverage of the product launch provides additional technical detail on the switch specifications.
The AI networking problem that incumbents have not solved
To understand why Nexthop AI raised $500 million rather than $50 million, you need to understand what is actually broken about current data center networking for AI workloads.
Modern AI training runs on clusters of hundreds or thousands of GPUs connected by high-speed networking. The training process requires GPUs to exchange gradient updates continuously — a communication pattern called all-reduce in which every GPU's updates must be aggregated across, and redistributed to, the entire cluster. The number of pairwise communication relationships scales with the square of the cluster size: a 1,000-GPU cluster has nearly 500,000 distinct GPU pairs to keep synchronized, versus exactly one in a two-GPU system.
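The arithmetic behind that scaling claim is easy to check. A quick sketch, using the standard ring all-reduce cost formula and a hypothetical 70B-parameter model with fp16 gradients (~140 GB per step) as the worked example:

```python
# Back-of-envelope check on all-reduce scaling. Pairwise relationships
# grow quadratically with cluster size, and even the efficient ring
# algorithm requires each GPU to send nearly 2x the gradient size every
# iteration — on every link of the fabric at once.

def gpu_pairs(n: int) -> int:
    """Distinct GPU pairs in an n-GPU cluster: n*(n-1)/2."""
    return n * (n - 1) // 2

def ring_allreduce_bytes_per_gpu(gradient_bytes: int, n: int) -> float:
    """Bytes each GPU sends per all-reduce with the ring algorithm."""
    return 2 * (n - 1) / n * gradient_bytes

print(gpu_pairs(2))        # → 1
print(gpu_pairs(1000))     # → 499500
# Hypothetical 70B-parameter model, fp16 gradients: ~140 GB per step
gradient_bytes = 140e9
print(ring_allreduce_bytes_per_gpu(gradient_bytes, 1000) / 1e9)  # ≈ 279.7 GB
```

Roughly 280 GB on the wire per GPU, per training step, repeated thousands of times per run — that sustained, synchronized load is what distinguishes AI traffic from anything cloud-era switches were sized for.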
Traditional data center switches were built for client-server and microservices traffic: bursty, unpredictable, tolerant of milliseconds of latency. AI training traffic is none of those things. It is highly regular, time-synchronized, extremely latency-sensitive, and generates massive many-to-many communication patterns that traditional switches were not designed to handle efficiently.
The consequence of a mismatched network is GPU underutilization. If the network cannot keep GPUs fed with data, the most expensive components in the data center sit idle — waiting for the network to catch up. Hyperscalers and cloud AI providers have spent years throwing hardware and engineering at this problem: building custom networks like Google's Jupiter and Meta's RAICHU, implementing RoCE and InfiniBand hybrid fabrics, and developing proprietary congestion control algorithms.
The problem is that these solutions are custom-built by the hyperscalers for their own environments and are not available as off-the-shelf products for the rest of the market. Enterprise AI teams, mid-size cloud providers, and AI model companies that are not Google-scale do not have the engineering resources to build custom network fabrics. They need a vendor who has already solved the problem and packaged it into deployable hardware.
That gap — between what hyperscalers have built for themselves and what the rest of the market can actually buy — is the market Nexthop AI is targeting with the NH-4010, NH-4220, and NH-5010.
Competitive landscape: Arista, Cisco, and Broadcom
Nexthop AI enters a market dominated by companies with billions in revenue, decades of customer relationships, and deep integration into data center operations worldwide. The competition is formidable in ways that matter operationally — and vulnerable in ways that matter strategically.
Arista is the most direct competitor and the company Nexthop AI is most explicitly designed to displace. Arista's success was built on a clean-sheet re-architecture of data center networking for the cloud era — exactly the playbook Nexthop is running for the AI era. The irony of Sadana leading a startup built on this model against his former employer will not be lost on either company's leadership.
Arista's vulnerability is path dependence. Its EOS operating system and installed base are built around the assumptions of cloud-era networking. Retrofitting AI-workload-specific features onto an architecture designed for different traffic patterns is structurally harder than building for AI from scratch. Nexthop AI has no legacy to protect.
Cisco's challenge is scale and organizational inertia. The company has been declaring AI networking priorities for several product cycles without shipping hardware that meaningfully changes the economics of AI data center networking. Its Nexus line remains the dominant choice in many enterprise data centers, but dominant incumbent status in a disrupted market is a lagging indicator of strength, not a leading one.
Broadcom occupies a different layer. Its Tomahawk 5 ASIC powers a significant portion of current high-end data center switches, including hardware from Arista and others. Broadcom sells the silicon; others build the system around it. Nexthop AI designs its own ASICs — which is both a massive capital commitment and a source of differentiation if its silicon is meaningfully better for AI workloads than what Broadcom sells to everyone equally.
The ASIC question is worth watching closely. If Nexthop's chip design team has produced silicon that is materially better for AI cluster communication than Tomahawk 5 — on latency, power efficiency, or congestion management — that becomes a durable technical moat. If the advantage is primarily in software and system integration, the moat is more defensible in the short term but more vulnerable to copying in the medium term.
The $650B data center spending wave and where networking fits
The scale of current AI data center investment is difficult to overstate. Microsoft, Google, Amazon, and Meta collectively announced capital expenditure plans exceeding $300 billion for AI infrastructure in 2025, with the pace accelerating into 2026. Total industry data center spending — including AI-focused build-outs from cloud providers, sovereign AI initiatives, and enterprise AI infrastructure — is estimated at approximately $650 billion in 2026 alone.
Networking infrastructure typically represents 10–15% of total data center capital expenditure. Applied to the $650 billion total, that implies a networking TAM in the $65–100 billion range for 2026. The AI networking market — the subset specifically optimized for AI cluster fabrics rather than general-purpose data center traffic — is expected to reach $100 billion by 2031 as AI workloads increasingly dominate data center build-outs.
The growth driver is straightforward: every GPU server that gets racked requires networking to connect it to the cluster. As the number of GPU servers explodes — driven by NVIDIA's Blackwell architecture, AMD's MI400 series, and hyperscaler custom silicon — the networking demand scales with it. The constraint on AI cluster performance is increasingly the network fabric, not the compute itself, which means customers are motivated to upgrade networking in parallel with compute.
Nexthop AI's timing aligns with the largest capital deployment cycle in data center history. The $500 million Series B is, in part, a bet on capturing share during the formation phase of a new market category — before incumbent vendors adapt and before the network architecture decisions at hyperscalers become locked in for years.
Investor thesis: why networking is the next AI infrastructure category
The a16z investment announcement offers the clearest articulation of the investor thesis behind Nexthop AI: the AI infrastructure stack has layers, and each layer represents a distinct investment opportunity. The model layer has been funded. The compute layer (NVIDIA, AMD, custom silicon) is established. The data center infrastructure layer — power, cooling, real estate — is being funded through a wave of hyperscaler CapEx. The networking layer is the gap.
Networking is not glamorous, which is part of why it has been underfunded relative to its strategic importance. Switches and routers are not the subject of breathless product launches. They are invisible infrastructure that matters intensely when it fails and not at all when it works. That structural invisibility has kept the networking layer from attracting the same venture attention as AI model companies — until the performance constraint became impossible to ignore.
The Lightspeed thesis is likely similar, informed by the firm's portfolio history that includes Arista Networks itself. Lightspeed backed Arista in 2008 when the company was a challenger to Cisco's dominance in data center switching. That investment returned multiples that defined a fund. The Nexthop AI bet is structurally analogous: a purpose-built networking company with deep technical talent, founded by an executive from the company it intends to displace, entering a market where the incumbent's architecture is mismatched to the emerging dominant workload.
Altimeter's participation adds a public-market lens. Altimeter is known for investing in companies at late private stages that it expects to take public within a few years. A $4.2 billion valuation today suggests an IPO path in the $15–25 billion range if Nexthop AI executes on its product roadmap and captures meaningful share of the AI networking market before the round of consolidation that typically follows a new category's formation.
Power and cost efficiency: the operational advantage Nexthop is selling
The economic case for Nexthop AI's hardware does not rest on raw performance alone. In a data center environment where power costs are measured in millions of dollars per month, the power efficiency of every component in the stack matters directly to operating economics.
Traditional high-end spine switches consume 10–20 kilowatts per unit at maximum throughput. Data centers running large AI clusters may deploy hundreds of switches in a single fabric. The cumulative power draw from networking hardware becomes a material budget line — and a cooling problem that multiplies power costs further.
Nexthop AI has positioned all three new switches with explicit power efficiency claims relative to competitive alternatives. The company has emphasized that the NH-5010 spine switch, in particular, delivers significantly better bandwidth-per-watt than comparable offerings from Arista and Cisco — a claim that, if validated at scale by customers, directly translates to lower total cost of ownership over the three-to-five year lifecycle of a switch deployment.
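The way a per-switch efficiency delta compounds into fabric-level operating cost is simple arithmetic. A sketch — every number below (switch count, wattage, power price, PUE, the 30% efficiency delta) is an illustrative assumption, not a Nexthop, Arista, or Cisco specification:

```python
# Rough sketch of fabric electricity cost over a deployment lifecycle.
# PUE folds in the cooling overhead that switch heat creates; all inputs
# are invented for illustration.

def fabric_power_cost(num_switches: int, kw_per_switch: float,
                      usd_per_kwh: float = 0.10, pue: float = 1.3,
                      years: float = 4.0) -> float:
    """Lifetime electricity cost of the switching fabric, in USD."""
    hours = years * 365 * 24
    return num_switches * kw_per_switch * pue * usd_per_kwh * hours

baseline = fabric_power_cost(num_switches=300, kw_per_switch=15.0)
efficient = fabric_power_cost(num_switches=300, kw_per_switch=10.5)  # -30%
print(f"baseline:  ${baseline / 1e6:.1f}M")   # ≈ $20.5M
print(f"efficient: ${efficient / 1e6:.1f}M")  # ≈ $14.3M
print(f"savings:   ${(baseline - efficient) / 1e6:.1f}M over 4 years")
```

Even under these made-up inputs, a 30% efficiency delta across a 300-switch fabric is a seven-figure line item — which is why performance-per-watt claims, if validated, translate so directly into the TCO argument.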
The power efficiency advantage is also a regulatory consideration. Data center power consumption is facing increasing scrutiny from governments in Europe, the United States, and Asia. The UK's National Grid has cited AI data center growth as a material factor in power planning. Several US states have introduced legislation targeting data center energy consumption. For hyperscalers and cloud providers navigating this regulatory environment, a meaningful reduction in networking power draw is a legitimate consideration in vendor selection — not just an engineering preference.
The claim requires validation. Nexthop AI is a relatively young company putting these switches into production environments for the first time at scale. The power efficiency numbers that appear in product sheets will need to be validated by customers running real AI training workloads at production cluster sizes. Early customer reference cases will be a critical variable in whether the efficiency narrative converts to sales momentum.
NVIDIA GTC and the infrastructure investment narrative
The timing of Nexthop AI's Series B announcement — March 2026, coinciding with NVIDIA's annual GTC conference — is not incidental. NVIDIA GTC has become the focal point of the AI infrastructure investment calendar, the moment when the industry's spending intentions for the coming year become explicit through product announcements, customer commitments, and ecosystem partnerships.
NVIDIA GTC 2026 drove significant narrative momentum around AI infrastructure investment. NVIDIA CEO Jensen Huang's announcements of next-generation Blackwell Ultra and roadmap clarity on future GPU generations reinforced the scale of GPU deployment that hyperscalers and AI model companies are committing to. Every GPU server announced at GTC is also a networking problem that needs solving.
For Nexthop AI, launching three new switches into a GTC-amplified media environment is a deliberate positioning strategy. The company is ensuring that every infrastructure investor and data center buyer thinking about the next wave of GPU deployments also sees Nexthop AI's name alongside its switch lineup. The timing gives Nexthop AI a halo from the broader infrastructure investment narrative without requiring the company to have NVIDIA as a named customer or partner at launch.
The broader infrastructure investment narrative that GTC amplifies also serves Nexthop AI's fundraising context. The Series B closed in an environment where institutional investors are actively looking for the next layer of the AI infrastructure stack to fund. Networking is the legible answer, and Nexthop AI is the most credible pure-play networking bet at this stage. The GTC timing puts the problem statement — AI clusters need better networking — in front of the same audience that is evaluating where the next wave of AI infrastructure investment should go.
What this means for the AI infrastructure stack in 2026
Nexthop AI's $500 million Series B is the latest evidence that the AI infrastructure investment cycle is moving down the stack. The first wave funded foundation models. The second wave funded compute — GPU manufacturers, data center real estate, power and cooling infrastructure. The third wave is now funding the connective tissue: the networking, storage, and memory systems that determine whether the compute can actually perform at its rated capacity.
Networking is the most underfunded layer relative to its strategic importance. NVIDIA's GPU revenue has grown faster than any hardware category in history. The data center construction market is at a generational peak. But the switches and routers connecting all of this compute have largely been purchased from vendors whose product roadmaps were not designed around AI workloads. The performance gap between what AI clusters need from their network and what today's incumbent switches deliver is the market gap Nexthop AI is targeting.
The implications extend beyond Nexthop AI specifically. The company's funding signals to the broader market that AI-specific networking is now a venture-scale category, not a niche engineering optimization. That will accelerate investment in competitive products from both startups and incumbents. Arista and Cisco will respond — they have the resources and the customer relationships to fight back. The question is whether they can build purpose-designed AI networking hardware fast enough to address the performance gap before Nexthop AI's customer base locks in.
For AI model companies and enterprises building serious AI infrastructure, the Nexthop AI launch introduces a meaningful choice that did not exist eighteen months ago. The previous options were Arista (best incumbent), Cisco (broadest installed base), InfiniBand from NVIDIA (highest performance, highest cost, proprietary ecosystem), or custom-built solutions available only to hyperscalers. Nexthop AI is attempting to be the fourth option: purpose-built for AI, available off the shelf, and designed by people who built the previous generation of data center networking.
Whether Nexthop AI delivers on that promise will be determined by the next twelve to eighteen months of customer deployments. The capital is committed. The product line is launched. The founding team has the credentials. What remains is execution at scale — and a market that is large enough that even capturing a small share at the right margin profile would justify the $4.2 billion valuation the investors have assigned.
Frequently Asked Questions
What is Nexthop AI and what does it do?
Nexthop AI is an AI networking hardware company that designs and manufactures switches purpose-built for AI data center workloads. Its products — the NH-4010, NH-4220, and NH-5010 — are designed to address the specific communication patterns of AI training clusters, where thousands of GPUs must exchange data simultaneously with extremely low latency.
How much did Nexthop AI raise in its Series B?
Nexthop AI raised $500 million in an oversubscribed Series B round led by Lightspeed Venture Partners, with participation from Andreessen Horowitz (a16z) and Altimeter Capital. The round values the company at $4.2 billion.
Who founded Nexthop AI?
Nexthop AI was founded by Anshul Sadana, who previously served as Chief Operating Officer of Arista Networks for over a decade. Sadana helped build Arista from a challenger into a data center networking category leader before leaving to found Nexthop AI.
What are the NH-4010, NH-4220, and NH-5010?
These are the three AI-optimized switches Nexthop AI launched alongside its Series B. The NH-4010 is an access-layer switch for individual GPU server connections, the NH-4220 is an aggregation-layer switch handling traffic between multiple access switches, and the NH-5010 is a spine-layer switch forming the core of the AI cluster fabric.
Why do AI data centers need specialized networking?
AI training workloads require GPUs to exchange gradient updates simultaneously across thousands of nodes in highly synchronized patterns — a collective operation called all-reduce. Traditional data center switches were designed for bursty, latency-tolerant web and microservices traffic, not for the time-synchronized, many-to-many communication patterns that AI training generates. A mismatched network leaves GPUs idle and wastes the data center's most expensive resource.
What companies does Nexthop AI compete with?
Nexthop AI's primary competitors are Arista Networks, Cisco Systems, and Juniper Networks (now part of HPE) in the data center switching market. Broadcom is an indirect competitor as the ASIC supplier whose silicon powers many competitive switches. NVIDIA's InfiniBand networking (via Mellanox) is also a competitive option, particularly for highest-performance clusters.
What is the AI networking market size?
The AI networking market is expected to reach $100 billion by 2031 as AI workloads increasingly dominate data center build-outs. Total data center networking spend in 2026 is estimated at $65–100 billion, representing approximately 10–15% of the $650 billion in total data center capital expenditure expected this year.
How does Nexthop AI's power efficiency compare to incumbents?
Nexthop AI claims significantly better performance-per-watt for its NH-5010 spine switch compared to equivalent offerings from Arista and Cisco. In large AI clusters with hundreds of switches, the cumulative power savings translate directly to lower operating costs and reduced cooling requirements. These claims require validation at production scale by enterprise customers.
Who are the investors in Nexthop AI's Series B?
The Series B was led by Lightspeed Venture Partners, with Andreessen Horowitz and Altimeter Capital participating alongside returning investors. Altimeter's involvement is notable given the firm's track record of investing in late-stage private companies ahead of public offerings.
What does an oversubscribed round mean?
An oversubscribed round means investor demand exceeded the amount the company intended to raise. Nexthop AI set a funding target, received more commitments than the target, and chose to close at $500 million rather than accepting all available capital. Oversubscription signals strong institutional conviction and gives the company leverage to select its investors carefully.
What is the significance of Anshul Sadana's background at Arista?
Sadana spent over a decade as Arista's COO, making him one of the most knowledgeable insiders about how Arista's products are built, where its roadmap is heading, and where its architecture has limitations for AI workloads. This institutional knowledge is a strategic asset for Nexthop AI — Sadana is not guessing at his primary competitor's weaknesses; he helped build the product he is now trying to displace.
How does Nexthop AI relate to NVIDIA GTC?
Nexthop AI announced its Series B and new switch lineup during the NVIDIA GTC 2026 conference window, strategically positioning the company alongside broader narratives about AI infrastructure investment. Every GPU server announced at GTC represents a networking requirement that Nexthop AI's products are designed to address.
What is RoCEv2 and why does it matter for AI networking?
RDMA over Converged Ethernet version 2 (RoCEv2) is the networking protocol most commonly used for high-performance AI training communication. It allows data to be transferred directly between the memory of GPUs on different servers without involving the CPU, dramatically reducing latency. The NH-4010 is specifically optimized for RoCEv2 workloads, including enhanced Explicit Congestion Notification (ECN) marking to prevent the buffer congestion that degrades AI training performance.
What is the competitive threat from NVIDIA's InfiniBand?
NVIDIA's acquisition of Mellanox gave it ownership of InfiniBand networking, the highest-performance but most expensive and proprietary networking option for AI clusters. InfiniBand has traditionally been the choice for highest-performance HPC and AI workloads. Nexthop AI's Ethernet-based approach offers a more open ecosystem with better cost economics, but InfiniBand retains a performance edge at the absolute frontier of cluster scale that is relevant for the largest hyperscaler deployments.
Is Nexthop AI planning an IPO?
No timeline has been announced, but Altimeter Capital's participation in the Series B is a meaningful signal. Altimeter typically invests in late-stage private companies with clear public market trajectories. At a $4.2 billion valuation with a product line targeting a $100 billion market, the implied IPO path exists if Nexthop AI can demonstrate production deployments and revenue growth consistent with the round's pricing.
What happens if hyperscalers build their own AI networking?
Google, Meta, and Microsoft have already built custom network fabrics for their own AI infrastructure — Google's Jupiter and Meta's RAICHU are examples. The risk for Nexthop AI is that the hyperscaler custom-build trend expands, reducing the addressable market to second-tier cloud providers and enterprises. The opportunity is that building custom networking requires years of engineering effort and billions in R&D investment that most organizations cannot afford — creating a durable market for purpose-built, off-the-shelf AI networking hardware.
What does the $650B data center spending figure include?
The $650 billion estimate for 2026 data center capital expenditure aggregates announced spending plans from hyperscalers (Microsoft, Google, Amazon, Meta), cloud providers, sovereign AI initiatives from governments in the US, EU, and Asia, and enterprise AI infrastructure build-outs. Networking hardware typically represents 10–15% of this total, implying a networking TAM in the $65–100 billion range for the year.
How does Nexthop AI's NOS differentiate from incumbent operating systems?
Nexthop AI's proprietary Network Operating System (NOS) is designed to expose AI-workload-specific telemetry to cluster orchestration software — allowing training jobs to observe network conditions and adapt communication patterns accordingly. This tight feedback loop between network fabric and AI compute is not available in Arista's EOS or Cisco's NX-OS, which were designed for general-purpose data center traffic and retrofitted with AI monitoring features rather than built for AI observability from the ground up.