Yann LeCun's AMI Labs Raises $1.03B to Prove the AI Industry Has It Wrong
Europe's largest-ever seed round backs LeCun's world models startup AMI Labs at a $3.5B valuation, a $1B bet that LLMs are the wrong path to intelligence.
TL;DR: Yann LeCun left Meta in November 2025 to found AMI Labs (Advanced Machine Intelligence), which closed a $1.03 billion seed round at a $3.5 billion pre-money valuation on March 10, 2026. The round broke Europe's record for the largest seed deal in history, previously held by Mistral AI. LeCun's core argument: large language models cannot reach human-level intelligence because they learn from text, not from reality. His alternative, JEPA (Joint Embedding Predictive Architecture), learns abstract representations of how the physical world works.
Yann LeCun spent 12 years at Meta. He built FAIR (Facebook AI Research) into one of the world's top AI labs. He co-invented convolutional neural networks, shared the 2018 Turing Award with Geoffrey Hinton and Yoshua Bengio, and used Meta's platform to push open-source AI at a time when OpenAI was moving in the opposite direction.
He also spent those 12 years arguing that the direction most of the industry was heading was wrong.
LeCun's departure from Meta came in November 2025. His reasoning was stated clearly in multiple interviews: he wanted to pursue a specific architectural bet that required a company built around that bet alone. Large organizations maintain research portfolios. AMI is a concentration.
AMI stands for Advanced Machine Intelligence. The name also means "friend" in French. LeCun, who is French and holds dual French-American citizenship, has said the double meaning is intentional: he wants to build AI that is a genuine collaborator for humans, not a system that mimics understanding by predicting what humans have already written.
The company was incorporated in Paris, with offices planned in New York, Montreal, and Singapore. That geographic structure is significant. LeCun has positioned AMI as "one of the few frontier AI labs that are neither Chinese nor American," a deliberate framing at a moment when geopolitics and AI policy are becoming inseparable. France's AI investment program and the EU's regulatory environment create conditions for foundational AI research that benefit a Paris-headquartered lab directly.
LeCun's decision to leave when he did was not random. The seed round was oversubscribed: he sought roughly 500 million euros and ended up with $1.03 billion. That kind of investor demand did not exist for a speculative architecture in 2023. It exists now because evidence that transformer scaling is hitting diminishing returns has become harder to dismiss inside frontier labs.
He also retains his professorship at New York University's Courant Institute of Mathematical Sciences, which keeps him in the academic research community. At AMI, his title is Executive Chairman, not CEO. That distinction matters.
The round closed March 10, 2026. It is Europe's largest seed deal in history, breaking Mistral AI's previous record. LeCun originally sought approximately 500 million euros. Investors committed $1.03 billion (roughly 890 million euros).
Pre-money valuation: $3.5 billion. For a company with no product, no revenue, and a team that is months old, that number is worth sitting with. Anthropic's first outside funding round put the company at a valuation well below $1 billion. OpenAI's Series A was around $300 million. AMI is starting at more than ten times that.
| Investor | Type | Geography | AI thesis |
|---|---|---|---|
| Cathay Innovation | VC | France / US / China | Deep tech, cross-border ✓ |
| Greycroft | VC | United States | Consumer + enterprise tech ✓ |
| Hiro Capital | VC | United Kingdom | Digital entertainment, AI ✓ |
| HV Capital | VC | Germany | European tech growth ✓ |
| Bezos Expeditions | Family office | United States | Frontier science bets ✓ |
| Nvidia | Strategic | United States | Hardware + AI infrastructure ✓ |
| Samsung Electronics | Strategic | South Korea | Consumer electronics AI ✓ |
| Toyota Ventures | Corporate VC | Japan | Industrial automation ✓ |
| Temasek | Sovereign fund | Singapore | Long-term deep tech ✓ |
| Tim & Rosemary Berners-Lee | Angel | United Kingdom | Web / open systems ✓ |
| Jim Breyer | Angel | United States | Accel / Facebook early backer ✓ |
| Mark Cuban | Angel | United States | Tech, consumer AI ✓ |
| Xavier Niel | Angel | France | French tech infrastructure ✓ |
| Eric Schmidt | Angel | United States | Google, AI policy ✓ |
The investor lineup is notable for what is in it and what is not. Microsoft, Google DeepMind (as a corporate entity), and OpenAI-adjacent capital are absent. That appears deliberate: AMI's thesis is architecturally opposed to the transformer systems those organizations have bet most heavily on.
Nvidia's participation is the most surprising name on the list. Nvidia sells GPU infrastructure that LLM training runs on at massive scale. If world models genuinely require a fraction of the compute that transformers need for equivalent physical reasoning, Nvidia would seem to have less to gain. The counter-read: Nvidia bets on whatever AI architecture wins. Its investment in AMI is a hedge, not an endorsement of the status quo.
Toyota Ventures signals something specific. Toyota has invested billions in robotics and autonomous systems. Its participation tells you where AMI is expected to deliver first: physical-world AI for industrial and automotive applications.
Bezos Expeditions backs long-horizon science bets. The firm supported Blue Origin for years before commercial returns appeared. Participating at seed in a pre-revenue architecture lab fits that pattern.
Key finding: AMI Labs closed at a $3.5 billion pre-money valuation on $1.03 billion in seed capital, making it the highest-valued seed-stage AI company in European history and one of the highest globally.
The oversubscription story matters too. When a round closes at double its original target, it means investors who could not get in at their desired allocation are watching and will likely lead or join AMI's Series A. The funding structure sets up a very different growth trajectory than a company that scraped together its seed capital.
A world model is an AI system that learns to predict future states of the physical world, not the next token in a text sequence.
This distinction sounds technical. The practical gap is large.
When you train a large language model, you feed it text and ask it to predict which word comes next. Do this at scale, across hundreds of billions of training examples, and the model develops rich statistical knowledge about language, facts, reasoning patterns, and code. GPT-4, Claude 3, and Gemini are all products of this process. They are genuinely capable. They are also, in LeCun's framing, "a statistical illusion" that hits a structural ceiling.
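The objective described above can be made concrete with the simplest possible instance: a bigram counter that predicts the next word from observed text statistics. This toy (corpus, vocabulary, and all) is purely illustrative; frontier models apply the same next-token objective with vastly richer function approximators.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which word follows which, then
# predict the statistically most frequent successor. This is the
# training objective of an LLM reduced to its bare minimum.
corpus = "objects fall when you drop them . objects fall fast".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Return the most frequently observed next token."""
    return counts[token].most_common(1)[0][0]

print(predict_next("objects"))  # -> "fall"
```

The point of the toy: the model knows that "fall" tends to follow "objects" in its corpus. It has learned a statistic about text, not anything about gravity, which is exactly the gap LeCun's critique targets.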
"A language model has no model of the world," LeCun has said in multiple public statements. "It has a model of text about the world. A child who has never read a physics textbook knows that objects fall when you drop them. Current LLMs do not know this in any meaningful sense. They have read enough sentences about gravity to mimic understanding."
The practical failure shows up in novel physical situations. Ask GPT-4 to reason about a physical scenario not described in its training data, and it tends to confabulate. It has no internal model of causality. It has patterns in text about causality.
AMI's technical foundation is the Joint Embedding Predictive Architecture (JEPA), first described by LeCun in a 2022 paper on arXiv.
The core idea: instead of predicting masked tokens in input space (the BERT/GPT approach), JEPA trains two encoders and a predictor. One encoder processes a context signal. One processes a target signal. The predictor learns to map from the context representation to the target representation, entirely in learned embedding space, without ever reconstructing raw input.
What this forces the model to learn: what is meaningful about a situation, not what is pixel-accurate or word-accurate. When JEPA processes video of a ball rolling across a table, it learns to represent "object moving in direction X with velocity Y given contact surface Z." It does not reconstruct every pixel of every frame.
AMI's website describes systems capable of "ignoring unpredictable details" in processed data. That phrase is the key. LLMs must predict every token. JEPA learns what to ignore, which is closer to how humans perceive the world.
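The two-encoder-plus-predictor structure can be sketched in a few lines. Everything here is a hypothetical stand-in (linear encoders, toy dimensions, random inputs), not AMI's actual architecture; the one faithful element is that the loss is computed entirely in embedding space, with no pixel or token reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy JEPA sketch: one encoder for the context signal, one for the
# target signal, and a predictor mapping context embeddings toward
# target embeddings. Linear maps stand in for real encoder networks.
D_IN, D_EMB = 64, 16
W_ctx = rng.normal(scale=0.1, size=(D_IN, D_EMB))    # context encoder
W_tgt = rng.normal(scale=0.1, size=(D_IN, D_EMB))    # target encoder
W_pred = rng.normal(scale=0.1, size=(D_EMB, D_EMB))  # predictor

def jepa_loss(context, target):
    """MSE between predicted and actual target embeddings.

    The error lives in the learned embedding space, so the model is
    free to discard unpredictable input details rather than being
    forced to reconstruct every pixel or token.
    """
    z_ctx = context @ W_ctx   # encode context (e.g. frames 1..t)
    z_tgt = target @ W_tgt    # encode target  (e.g. frame t+1)
    z_hat = z_ctx @ W_pred    # predict target embedding from context
    return float(np.mean((z_hat - z_tgt) ** 2))

context = rng.normal(size=(1, D_IN))  # stand-in for an observed clip
target = rng.normal(size=(1, D_IN))   # stand-in for the future frame
loss = jepa_loss(context, target)
```

Contrast this with a generative objective, where the loss would be `mean((decode(z_hat) - target) ** 2)` over raw inputs: that formulation penalizes the model for failing to predict noise it could never predict, which is precisely what the embedding-space loss avoids.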
In short: LeCun's JEPA architecture learns to predict abstract world states rather than text tokens, which is why AMI Labs claims it can build AI systems that reason about physics, causality, and long-horizon planning where LLMs consistently fail.
The application LeCun cares most about is embodied AI: robots and autonomous systems operating in the physical world. Current robotic AI has a well-documented generalization problem. A robot arm trained to pick up red cubes fails on blue cubes of the same shape. A manipulation system trained in one factory cannot handle a different factory's equipment without retraining.
This happens because current robotic AI, including systems using diffusion models and transformer-based controllers, learns surface correlations. A world model that has genuinely internalized the physics of object manipulation should generalize to novel objects and environments without retraining. That claim has not yet been empirically demonstrated at scale. AMI's research phase exists to test whether it holds.
LeCun did not build AMI alone. The team he assembled before closing the seed round is one of the more credible lineups in recent AI company history, and several key facts differ from what early reports assumed.
Alexandre LeBrun, CEO. LeBrun previously founded and led Nabla, a medical AI startup that reached the same conclusion as LeCun about LLM limitations: in healthcare, hallucinations have life-threatening consequences, and statistical text prediction is insufficient. LeBrun is now Chairman of Nabla while running AMI. His operational experience scaling an AI company from research to clinical use is precisely what a founder like LeCun needs running day-to-day operations. Nabla is also AMI's first announced partner.
Laurent Solly, COO. Solly was VP Europe at Meta, overseeing Meta's European operations across multiple product lines. His network across European tech, policy, and media gives AMI institutional connections that a pure-research founding team typically lacks in year one. His familiarity with EU regulatory processes will matter as AI governance in Europe accelerates.
Saining Xie, Chief Science Officer. Xie came from Google DeepMind, where he was one of the leading researchers in computer vision and self-supervised learning. His work on ConvNeXt made him one of the natural collaborators for JEPA-based world models. Xie has a track record of shipping systems that work at scale, which matters for a lab that needs to translate architectural vision into working demonstrations.
Pascale Fung, Chief Research and Innovation Officer. Fung is a professor at Hong Kong University of Science and Technology and was previously a senior AI research director at Meta. Her combined research-and-innovation role reflects AMI's stated commitment to open, well-governed AI. She also gives AMI credibility in Asian AI policy discussions and research communities.
Michael Rabbat, VP of World Models. Rabbat worked at Meta AI Research on self-supervised learning and is one of the primary researchers who developed the JEPA framework alongside LeCun. He holds direct responsibility for the core research output AMI's entire thesis depends on.
The broader team skews heavily toward former Meta AI and Google DeepMind researchers. This is not an accident. LeCun spent 12 years building relationships with researchers working on exactly the problems world models are designed to address: self-supervised learning, model-based reinforcement learning, and embodied AI.
For those tracking talent flows shaping AI in 2026, our piece on AI researcher movements from Big Tech to startups covers which labs are losing researchers and why.
AMI is not building a chatbot. LeCun has been explicit about this across multiple interviews since the company's founding. The company's stated near-term focus spans robotics, healthcare, and industrial automation.
The robotics use case is AMI's clearest commercial opportunity. AMI works with multiple modalities in its research: the position of a robot arm, lidar data, audio, and camera feeds. LeCun's argument is that world models trained on these streams will learn to plan sequences of actions for tasks ranging from parcel sorting to aerospace component inspection.
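The planning claim above can be illustrated with a minimal model-based sketch: fuse the sensor modalities into one observation vector, roll a learned dynamics model forward under candidate action sequences, and keep the sequence whose predicted end state is closest to a goal. The feature sizes, the linear dynamics model, and the goal are all hypothetical placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def fuse(arm_pos, lidar, audio, camera):
    """Concatenate per-modality features into one observation vector."""
    return np.concatenate([arm_pos, lidar, audio, camera])

# Illustrative dimensions: 3 arm-pose + 8 lidar + 4 audio + 16 camera
# features, and 2-dimensional actions.
OBS_DIM, ACT_DIM = 3 + 8 + 4 + 16, 2
W_dyn = rng.normal(scale=0.05, size=(OBS_DIM + ACT_DIM, OBS_DIM))

def rollout(obs, actions):
    """Predict the final observation under a candidate action sequence."""
    for a in actions:
        obs = np.concatenate([obs, a]) @ W_dyn  # one predicted step
    return obs

goal = rng.normal(size=OBS_DIM)
obs0 = fuse(rng.normal(size=3), rng.normal(size=8),
            rng.normal(size=4), rng.normal(size=16))

# Score 32 random 5-step action sequences inside the model and pick
# the best: planning happens in imagination, not on the real robot.
candidates = [rng.normal(size=(5, ACT_DIM)) for _ in range(32)]
best = min(candidates,
           key=lambda acts: np.linalg.norm(rollout(obs0, acts) - goal))
```

The design point is that the expensive trial-and-error loop runs against the learned model rather than physical hardware, which is what makes generalization of the model (not the controller) the economic bottleneck.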
The robotics market is projected to reach $218 billion by 2030, according to Boston Consulting Group's robotics analysis. The main bottleneck is not hardware. It is AI that generalizes. A robot arm that costs $40,000 but requires $200,000 in AI customization for each new product it handles does not scale economically. World models that genuinely generalize across novel objects and environments would remove that bottleneck.
LeCun's timeline is realistic: he has said it could take years for world models to move from theory to commercial applications. AMI's Phase 1, running through at least 2028, is research. Patience is built into the business model.
AMI's first formal partner is Nabla, the medical AI company LeBrun previously ran. The connection is not cosmetic. Nabla reached an independent conclusion that LLMs are insufficient for clinical settings: hallucinations in medical contexts have consequences that statistical text prediction cannot reliably avoid. World models that learn causal structure of medical processes, rather than statistical patterns in clinical text, are Nabla's specific interest.
Healthcare AI is a $45 billion market growing at 37% annually, according to Grand View Research's 2025 healthcare AI forecast. The regulatory complexity of clinical AI also creates a moat: if AMI's world models can demonstrate lower hallucination rates in causal reasoning tasks, the healthcare application alone justifies the seed round.
AMI's website references hardware design as an application, specifically analyzing complex manufacturing components like aircraft parts. This aligns with Toyota Ventures' participation: Toyota has massive stakes in manufacturing AI and would benefit from systems that can generalize across factory configurations without retraining.
The strategic logic is sound. By targeting robotics, healthcare, and industrial AI, AMI avoids direct competition with OpenAI, Anthropic, and Google on consumer AI where those companies have billions in distribution advantage. Physical-world AI is a domain where LLMs demonstrably struggle, the market is large, and the incumbents' current architecture is poorly suited to the problem.
Whether JEPA delivers on generalization in these domains is what the research years will determine.
The AI lab landscape in 2026 is more crowded than any prior year. AMI enters with a different architecture, different application focus, and different business model. Here is how it stacks up:
| Dimension | AMI Labs | OpenAI | Anthropic | Mistral AI |
|---|---|---|---|---|
| Core architecture | World models (JEPA) ✓ | Transformers ✗ | Transformers ✗ | Transformers ✗ |
| Primary funding source | Institutional VC + Nvidia ✓ | Microsoft ($13B+) ✗ | Google ($4B+) ✗ | European VC ✓ |
| First round raised | $1.03B seed ✓ | ~$11M seed (2015) ✗ | $124M first round ✗ | ~€113M seed ✗ |
| Revenue | None yet ✗ | $3.7B ARR (est.) ✓ | ~$500M ARR (est.) ✓ | Growing ✓ |
| Primary use case | Physical AI / robotics / healthcare ✓ | Consumer + enterprise text ✗ | Enterprise safety AI ✗ | Open-weight LLMs ✓ |
| Open-source commitment | Planned, near-term ✓ | No ✗ | No ✗ | Yes ✓ |
| European headquarters | Paris ✓ | San Francisco ✗ | San Francisco ✗ | Paris ✓ |
| Time to first product | 1-3 years ✗ | Shipping now ✓ | Shipping now ✓ | Shipping now ✓ |
AMI is not trying to compete with ChatGPT. LeCun's roadmap explicitly avoids chat interfaces, code generation, and consumer AI applications. The physical-world AI bet is different. OpenAI, Anthropic, and Mistral have all focused on language. None has a meaningful robotics product. None of their architectures is designed for causal physical reasoning.
The most comparable company in strategic positioning is Ilya Sutskever's Safe Superintelligence (SSI), which raised $1 billion in 2024. SSI is also transformer-skeptical and research-first. But SSI has been less public about its technical direction. LeCun has published the core architecture, named the application domains, and set a public research timeline. That transparency is either confidence or showmanship. The results over the next two years will distinguish them.
LeCun has been making the anti-transformer argument in public since at least 2022. This matters for understanding AMI: the company comes with a fully formed intellectual position, not just a pitch deck.
His 2022 paper "A Path Towards Autonomous Machine Intelligence" on arXiv was the formal statement of the world-model thesis. The reception was mixed. Hinton and Bengio, his fellow Turing Award recipients, remained committed to the statistical foundations of deep learning. OpenAI and Anthropic continued scaling transformers.
LeCun kept publishing. He presented the world-model architecture at NeurIPS and ICLR. He wrote extensively on X. And he argued directly with the people leading the transformer consensus.
His public exchanges with Sam Altman and Ilya Sutskever over whether transformer scaling could produce general reasoning became a fixture in AI discourse from 2023 onward. LeCun's position: GPT-4 and its successors are "impressive but not intelligent," and AGI timeline claims from OpenAI are "not grounded in science." The implicit counter from OpenAI: LeCun had been wrong about deep learning's limits before.
The countercharge is fair. LeCun was skeptical that pure scaling of transformers would produce GPT-4-level capability. It did. He has also predicted multiple times that transformers cannot solve physical reasoning. That prediction has not yet been falsified, but it has not been confirmed either.
His response to his record: RLHF-trained LLMs exhibit reinforced pattern matching, not general reasoning. The distinction, he maintains, will become visible when models face genuinely novel physical environments.
The question AMI has to answer empirically: Does JEPA produce genuine generalization in physical domains, or does it produce a different kind of pattern matching that breaks under different conditions?
MIT Technology Review's reporting on LeCun's world models thesis notes that LeCun had been discussing AMI's structure with potential investors for nearly a year before the formal announcement. The timing was deliberate: 2026 is when synthetic data generation for world model training became viable at scale, and when investor appetite for non-transformer bets reopened after years of transformer dominance.
AMI's launch announcement on March 10, 2026, as reported by TechCrunch, included LeBrun's prediction that "world models" will become the next industry buzzword, with "every company calling itself a world model to raise funding" within six months. That is either prescient or self-aware about the hype cycle. Probably both.
The Yann LeCun AMI Labs world models seed round is the largest seed deal in European tech history. That single fact makes it significant. But its meaning for AI funding extends further.
The $1.03 billion seed reflects several things happening simultaneously in AI capital markets in early 2026.
Scaling skepticism is growing. Reports from inside multiple frontier AI labs suggest that performance gains per unit of compute are flattening faster than public benchmarks indicate. This is not a confirmed industry-wide finding. But it is credible enough that major investors are asking what happens to their portfolio if it turns out to be true. AMI gives them a hedge.
Architecture bets are viable again. From 2021 to 2025, the transformer consensus was so dominant that betting on alternative architectures felt like betting against gravity. AMI's seed round closing at these terms signals that window has reopened. LeCun's success may encourage other researchers with non-transformer architectures to raise capital.
Europe is asserting itself in foundational AI. Mistral's seed record lasted nearly three years before AMI broke it. Both companies are French-headquartered. The French government's AI investment program and the broader EU AI Act environment are creating conditions for foundational AI research in Europe that did not exist five years ago.
Nvidia invested in its potential competition. Nvidia's participation is the signal that will attract most analysis over time. If AMI's world models genuinely require 10x less compute than transformers for equivalent physical reasoning, the economics of AI infrastructure shift. Nvidia is positioned to profit regardless of architecture, but its seed investment in AMI is a public acknowledgment that transformer scaling is not the only path worth funding.
The oversubscribed structure matters. LeCun sought ~500 million euros. He raised $1.03 billion. Oversubscription of that magnitude at seed is unusual even in AI. It means investors who missed their desired allocation will likely lead AMI's Series A. The funding trajectory is established from day one.
SiliconANGLE's coverage of the AMI Labs funding places it within the broader pattern of infrastructure-layer AI bets attracting the largest checks in 2026. For founders at other AI startups, the practical consequence is clear: architecture-differentiated AI research companies, not just application-layer wrappers, can access nine-figure capital. That will determine which research programs get funded over the next 24 months.
LeCun spent his Meta years arguing open-source AI is safer than closed development. He used Meta's compute to release LLaMA, now the most-used open-weight language model series globally. AMI's plan continues that approach: LeBrun told reporters the company will open-source technology and publish academic papers, with the timeline described as "quick" relative to most frontier lab release schedules.
The commitment matters for two reasons. First, releasing weights puts AMI's work in a publicly evaluable frame. Researchers who might otherwise dismiss AMI as a well-funded lab with no accountability will be able to test the models directly. Second, the open-source strategy is a business model. Meta's LLaMA generates no direct revenue but creates an ecosystem of contributors, enterprise users, and derivative products that feed back into Meta's cloud and hardware revenue. AMI is building toward a similar flywheel.
The tension is real. Venture investors need a return. Open models do not generate licensing revenue. AMI will need enterprise support contracts, compute infrastructure partnerships (Nvidia is already in the cap table), or application-layer products to generate revenue before 2029. LeCun's 3-5 year timeline to "fairly universal intelligent systems" deployable across multiple domains is the funding runway the seed round has to sustain.
LeCun and his team have described what success looks like in their research phase. The stated goal: a world model that outperforms the best available language models on physical reasoning tasks while requiring significantly less compute.
That is an aggressive framing. The current frontier models are not weak on reasoning benchmarks in general. They are specifically weak on novel physical scenarios, which is the domain AMI is targeting. LeCun's argument is that this specific failure is architectural, not a matter of scale, and that JEPA addresses it structurally.
Critics have a legitimate counter. LeCun has made confident predictions about AI's limitations that subsequent systems have complicated. The historical record does not prove he is wrong about world models. It means the burden of empirical proof falls on AMI, and the timeline for that proof is now public.
Put plainly: the AMI seed round is the first $1 billion-plus bet in European AI history on a specific architectural alternative to transformers, backed by Nvidia, Toyota, Bezos Expeditions, and Eric Schmidt simultaneously.
AMI has also been public about its research timeline: 1-2 years for corporate partner discussions, 3-5 years for deployable universal intelligent systems. For a company that raised $1.03 billion, that is a patient timeline. The investors who backed it clearly share the view that the world model bet requires time to prove out, and that the market for physical-world AI is large enough to justify waiting.
The Yann LeCun AMI Labs world models seed round is a $1.03 billion funding round closed on March 10, 2026, for Advanced Machine Intelligence Labs (AMI), the AI company LeCun co-founded after leaving Meta. It is the largest seed deal in European technology history, breaking Mistral AI's prior record. The round was oversubscribed; AMI originally sought approximately 500 million euros and ended up raising $1.03 billion.
AMI stands for Advanced Machine Intelligence. The name also means "friend" in French, a deliberate choice by LeCun, who is French. The company's thesis is that true AI should be a genuine collaborator for humans, not a system that mimics understanding through text prediction. AMI is headquartered in Paris, with offices planned in New York, Montreal, and Singapore.
World models are AI systems that learn to predict future states of the physical world, rather than predicting the next token in a text sequence. A large language model like GPT-4 learns from text and develops statistical knowledge about language and facts. A world model learns to represent causes, effects, object permanence, and physical dynamics by processing video, sensor data, and simulation outputs. LeCun argues this produces genuine understanding where token prediction produces statistical imitation.
LeCun left Meta in November 2025 to found AMI Labs and pursue research built entirely around the world-model architecture. He had been at Meta for 12 years and built FAIR into a top research institution. His stated reason was that the research direction he believed in required a company organized around one specific architectural bet. He retains his professorship at NYU's Courant Institute and his title at AMI is Executive Chairman, not CEO.
Alexandre LeBrun is AMI's CEO. LeBrun previously founded and led Nabla, a medical AI startup, where he independently reached the same conclusion as LeCun: large language models are insufficient for high-stakes applications where hallucinations have real consequences. LeBrun is now Chairman of Nabla while running AMI's day-to-day operations. Nabla is also AMI's first announced research partner.
The AMI Labs seed round was led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Additional institutional participants include Nvidia, Samsung Electronics, Toyota Ventures, and Temasek. Notable angel investors include Tim and Rosemary Berners-Lee, Jim Breyer, Mark Cuban, Xavier Niel, and Eric Schmidt. The round does not include Microsoft or any OpenAI-adjacent capital.
AMI Labs closed its seed round at a $3.5 billion pre-money valuation. With $1.03 billion in new capital, the post-money valuation is approximately $4.53 billion. This is the highest valuation ever achieved by a seed-stage AI company in Europe and one of the highest seed-stage valuations globally, comparable to only a handful of American AI companies that have raised at similar terms.
JEPA (Joint Embedding Predictive Architecture) is the technical foundation for AMI's world models. LeCun first described it in a 2022 arXiv paper. JEPA uses two encoders and a predictor: one encoder processes context, one processes a target, and the predictor learns to map from the context representation to the target representation in learned embedding space. It never reconstructs the raw input. This forces the model to learn what is meaningful about a situation rather than memorizing surface details, which is why LeCun believes it can produce genuine physical reasoning.
AMI is not building a chatbot. LeCun has been explicit that AMI's roadmap avoids chat interfaces, code generation, and consumer AI applications. AMI's near-term focus is pure research on world models. Corporate partner discussions are planned within 1-2 years, with deployable systems across multiple domains in 3-5 years. The strategic logic is to avoid direct competition with OpenAI and Anthropic on consumer and enterprise text AI, and to focus on physical-world AI where transformer-based systems consistently struggle.
AMI Labs now holds the record for Europe's largest seed deal at $1.03 billion, announced March 10, 2026. The previous record was held by Mistral AI. Mistral raised 113 million euros in its seed round in June 2023, a fraction of AMI's amount. AMI's round reflects how dramatically AI investment scale has shifted in three years: from 113 million euros to $1.03 billion in the same European seed-stage category.
AMI's $1.03 billion seed round is larger than the first funding rounds of OpenAI, Anthropic, and Mistral combined. OpenAI raised roughly $11 million in its seed round in 2015. Anthropic's first round was $124 million in 2021. Mistral's seed was 113 million euros in 2023. The closest modern comparison is Ilya Sutskever's Safe Superintelligence, which raised $1 billion in 2024. AMI's round reflects both LeCun's credibility as a Turing Award recipient and how much the AI investment landscape has changed.
AMI's first applications span robotics, healthcare, and industrial automation. In robotics, AMI works with modalities including robot arm position, lidar data, audio, and camera feeds. In healthcare, its first partner is Nabla, the medical AI startup LeBrun previously ran. In industrial AI, AMI targets hardware design optimization and manufacturing use cases that require AI to generalize across physical configurations without retraining. Toyota Ventures' participation signals automotive robotics as a specific area of interest.
Nvidia's participation in AMI's seed round is notable because AMI is building an alternative to the transformer-based AI that currently drives most GPU demand. Nvidia appears to be hedging across AI architectures: if world models prove viable and require significantly less compute per unit of physical reasoning capability, Nvidia wants to be positioned in that market. Its investment is an acknowledgment that transformer scaling is not the only path worth funding.
Saining Xie is AMI's Chief Science Officer. He came from Google DeepMind, where he was a leading researcher in computer vision and self-supervised learning, known for work including ConvNeXt. His joining AMI is a credibility signal because he has the engineering skills to translate LeCun's architectural vision into working systems. Chief scientists who cannot ship are common in research-first labs; Xie has a public track record of building systems that work at scale.
Laurent Solly was VP Europe at Meta, overseeing Meta's European operations across multiple product lines. At AMI, as COO, he brings institutional connections across European tech, policy, and media that a pure-research founding team typically lacks. His network is valuable in Paris, where AMI is headquartered, and his familiarity with EU regulatory processes may matter as AI regulation in Europe accelerates through 2026 and beyond.
LeCun's argument is that JEPA's ability to learn causal structure rather than surface correlations gives it a fundamental advantage on tasks requiring generalization to novel physical situations. Critics point out that LeCun has consistently underestimated transformer scaling. The empirical answer will come from AMI's research phase: the company has set a public timeline of 3-5 years to produce deployable intelligent systems, giving the field a window for independent evaluation.
AMI's Paris base places Europe's highest-valued seed-stage AI company on French soil. Combined with Mistral's presence and France's active AI investment policy, this makes France the clearest challenger to San Francisco's concentration of frontier AI development. AMI's Paris location also makes it eligible for French state AI grants and compute subsidies, and LeCun has explicitly framed AMI as "one of the few frontier AI labs that are neither Chinese nor American."
LeCun and LeBrun have described a 1-2 year timeline for corporate partner discussions, with the first partner already announced (Nabla). AMI's longer-term business model includes open-sourcing its models and generating revenue through enterprise support contracts, infrastructure partnerships (Nvidia is already a cap table investor), and application-layer products in robotics and healthcare. LeBrun indicated plans to publish academic papers and release technology openly, consistent with LeCun's history at Meta.
LeCun sought approximately 500 million euros. The round closed at $1.03 billion because demand from investors exceeded available allocation. This happens when a deal's perceived quality exceeds what the founder is asking for. In AMI's case, the oversubscription reflects LeCun's credibility as a Turing Award recipient with 12 years of productive AI research at Meta, the JEPA architecture's growing academic attention, and genuine investor concern that transformer scaling may be approaching its limits.
LeBrun told reporters AMI plans to open-source its technology and publish academic papers, with a timeline described as "quick" relative to most frontier lab release schedules. This follows the model LeCun established with Meta's LLaMA series. The open-source commitment serves two purposes: it creates a publicly evaluable record of AMI's research output, and it builds the ecosystem of contributors and enterprise users that generates indirect revenue through support contracts and infrastructure partnerships.
If you are tracking the AI funding and architecture landscape, our overview of AI startup funding trends in 2026 provides broader context for where capital is concentrating and why.
Sources: TechCrunch: AMI Labs $1.03B seed round · The Next Web: AMI Labs world models · SiliconANGLE: AMI Labs funding · MIT Technology Review: LeCun's new venture · TechCrunch: Who's behind AMI Labs · arXiv: LeCun JEPA paper, 2022 · BCG 2025 robotics market analysis · PitchBook: AMI Labs funding