GSMA's Open Telco AI Framework Wants to Fix AI's 84% Failure Rate in Telecom Networks
GSMA launches Open Telco AI at MWC 2026 with AT&T, Ericsson, Nokia, Vodafone and AMD to fix the 84% GenAI failure rate plaguing telecom network operations.
The numbers coming out of telecom's AI experiments are quietly devastating. More than eight out of every ten generative AI deployments in network operations fail to reach production. Not fail in the dramatic sense — no meltdowns, no outages — but fail in the slow, grinding way that enterprise software dies: a promising pilot, an inconclusive proof of concept, and a budget review that quietly kills the initiative before it changes anything real. In a $1.7 trillion industry that runs the physical infrastructure of global connectivity, that failure rate is a crisis wearing business casual.
At MWC Barcelona 2026, the GSMA — the global trade body representing nearly 1,000 mobile operators — announced it has had enough. The Open Telco AI initiative, launched on the conference floor this week, is the industry's most structured attempt yet to answer a deceptively simple question: why can't telecom companies make AI work at scale, and what would it take to fix that?
The answer, according to the GSMA and its founding partners — AT&T, AMD, Ericsson, Nokia, and Vodafone — is not better models. It is shared infrastructure, shared data, and an open framework that lifts every operator instead of letting each one reinvent the same broken wheel.
The Open Telco AI framework is, at its core, a coordinated industry effort to build AI infrastructure that is purpose-built for telecommunications — and to make that infrastructure openly available rather than proprietary.
The framework has three interlocking components:
Shared model training data. Telecom networks generate enormous volumes of structured operational data — network fault logs, traffic patterns, spectrum utilization records, customer experience metrics, RAN performance telemetry — but that data has historically been siloed inside individual operators. No single carrier has enough labeled fault data across enough network configurations to train a model that generalizes well. The Open Telco AI framework establishes a federated data-sharing protocol that lets operators contribute anonymized operational data to a common pool without exposing competitively sensitive information. The result is a training dataset orders of magnitude larger than any operator could assemble alone.
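The mechanics of that federated contribution have not been published, but the core requirement is clear: identifying fields must be stripped or pseudonymized before a record leaves an operator's network. As a rough sketch of what that could look like — the field names, the salting scheme, and the record schema below are all hypothetical, not drawn from the GSMA framework:

```python
import hashlib

def anonymize_fault_record(record, operator_salt):
    """Strip operator-identifying fields and pseudonymize site IDs before
    contributing a fault record to a shared pool. All field names here are
    illustrative assumptions, not the GSMA's actual data schema."""
    # Salted hash: the same site always maps to the same token, so faults
    # can be correlated across time, but the token is not reversible
    # without the operator's private salt.
    site_token = hashlib.sha256(
        (operator_salt + record["site_id"]).encode()
    ).hexdigest()[:12]
    return {
        "site": site_token,
        "fault_code": record["fault_code"],    # shared fault taxonomy
        "rsrp_dbm": record["rsrp_dbm"],        # signal-strength telemetry
        "prb_util": record["prb_util"],        # resource block utilization
        "duration_s": record["duration_s"],
    }

record = {
    "site_id": "CELL-0042", "operator": "ExampleCo",
    "fault_code": "RF_DEGRADATION", "rsrp_dbm": -112,
    "prb_util": 0.91, "duration_s": 540,
}
shared = anonymize_fault_record(record, operator_salt="per-operator-secret")
assert "operator" not in shared and "site_id" not in shared
```

A real protocol would also need k-anonymity guarantees and regulatory review per jurisdiction; the point of the sketch is only that pseudonymization preserves the statistical signal (fault codes, telemetry values) while removing the competitively sensitive identifiers.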
Pre-trained telco LLMs. The shared data pool feeds into a series of purpose-built language models designed specifically for telecom use cases. These are not general-purpose models fine-tuned on a telecom dataset — they are models whose pretraining phase is dominated by telco-specific data, giving them the kind of domain fluency that base models like GPT-4 or Gemini lack without extensive fine-tuning. The first pre-trained telco model is scheduled for release in Q3 2026.
Open interfaces and evaluation benchmarks. The framework defines standard APIs for integrating telco AI models into existing operations support systems (OSS) and business support systems (BSS), the legacy software stacks that carriers have spent decades building. It also establishes industry-agreed evaluation benchmarks so operators can compare model performance on realistic network scenarios — not synthetic benchmarks engineered to look impressive in press releases.
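The benchmark format has not been released, but the idea of an industry-agreed evaluation reduces to something simple: score any candidate model against a shared set of labeled network scenarios, so results are comparable across vendors. Everything below — the scenario schema, the telemetry thresholds, and the stand-in model — is illustrative, not the GSMA's published format:

```python
def score_on_benchmark(model, scenarios):
    """Fraction of benchmark scenarios where the model's predicted root
    cause matches the agreed ground-truth label. Scenario schema is a
    hypothetical sketch, not the Open Telco AI specification."""
    correct = sum(
        1 for s in scenarios if model(s["telemetry"]) == s["root_cause"]
    )
    return correct / len(scenarios)

# A toy benchmark: labeled (telemetry, root cause) pairs
scenarios = [
    {"telemetry": {"rsrp_dbm": -115, "prb_util": 0.95}, "root_cause": "config_drift"},
    {"telemetry": {"rsrp_dbm": -80,  "prb_util": 0.30}, "root_cause": "healthy"},
    {"telemetry": {"rsrp_dbm": -118, "prb_util": 0.20}, "root_cause": "hardware_fault"},
]

def toy_model(t):
    # Trivial stand-in for a trained model: weak signal under heavy load
    # suggests configuration drift; weak signal under light load suggests
    # a hardware fault. Thresholds are invented for illustration.
    if t["rsrp_dbm"] < -110:
        return "config_drift" if t["prb_util"] > 0.8 else "hardware_fault"
    return "healthy"

score = score_on_benchmark(toy_model, scenarios)  # → 1.0 on this toy set
```

The value of standardizing this layer is less the scoring function than the shared labels: once every operator evaluates against the same scenarios, "90% accuracy" means the same thing in an Ericsson press release as it does in a Vodafone procurement review.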
The initiative builds explicitly on the GSMA's prior work with Open Gateway APIs, which standardized network capability exposure for developers. Where Open Gateway targeted external developers who wanted to build on carrier networks, Open Telco AI targets the carriers themselves, standardizing the internal tooling for network intelligence.
To understand why the GSMA is doing this now, you need to understand what the 84% failure rate actually reflects.
The number — drawn from industry surveys of GenAI pilots conducted across major operators between 2024 and 2025 — represents projects that completed a proof-of-concept phase but did not advance to full production deployment. Some were shut down outright. Many remain in indefinite "pilot extension" purgatory, consuming resources without delivering value. The underlying causes cluster into three categories:
Domain specificity failures. General-purpose LLMs have no reliable mental model of a 5G network. When an operator deploys a commercial LLM to assist with fault diagnosis, the model may produce grammatically fluent, technically plausible-sounding responses that are operationally wrong. It doesn't know that a specific combination of RSRP degradation and PRB utilization pattern in an Ericsson RAN cluster at a particular traffic load points to a specific configuration drift rather than a hardware fault. It has no training on the proprietary operational data that would make that distinction possible. The result is an AI assistant that sounds confident and is frequently wrong — exactly the failure mode that erodes engineer trust fastest.
Integration complexity. Carrier-grade networks are among the most complex software environments in existence. A major operator typically runs hundreds of distinct OSS/BSS systems, many of them decades old, from dozens of vendors, with integration built on a patchwork of custom APIs, proprietary protocols, and manual processes. Deploying a new AI system into that environment requires integrations that can take 18-24 months to build, by which point the model has been superseded and the business case has shifted. The opportunity cost of that integration work kills more AI projects than bad model performance.
Data access and quality. Network fault data is operationally sensitive. A dataset of network failures is also a dataset of network vulnerabilities. Operators are understandably reluctant to share it — and in many jurisdictions, they face regulatory constraints on data sharing even if they wanted to. The resulting data starvation means operators train models on inadequate datasets, producing systems that work in the lab and fail in the field when they encounter network scenarios not represented in training.
The Open Telco AI initiative directly addresses all three. Domain-specific pretraining solves the first problem. Standard integration APIs attack the second. The federated data-sharing protocol gives operators a legally and operationally viable path to contributing to the shared dataset without exposing their networks.
The five founding partners of Open Telco AI represent a deliberate cross-section of the industry.
AT&T brings the operator perspective from the world's largest telecommunications market. The carrier has been among the more aggressive experimenters with AI in network operations, and its participation signals that the initiative has buy-in from operators who have actually tried — and largely failed — to deploy GenAI in production. AT&T has also been one of the more transparent companies about the cost pressures driving the AI push: the company operates one of the most complex network environments in the world and has publicly committed to AI-driven cost reduction as a strategic priority. For more on AT&T's approach to AI efficiency at the network edge, see our coverage of AT&T's small language model experiments and the 90% cost reduction they're targeting with agentic AI.
Ericsson and Nokia are the two largest RAN and core network vendors globally. Their participation is structurally important for a reason that goes beyond market credibility: they hold the keys to the proprietary data formats, configuration schemas, and network telemetry structures that a telco LLM actually needs to be useful. A model that understands generic network concepts but doesn't understand how Ericsson's Baseband 6630 reports fault conditions is useless to an operator running Ericsson RAN. Having the vendors inside the framework means the pre-trained models can be trained on vendor-specific data that would otherwise be inaccessible.
Vodafone is the international operator anchor. Operating across 15+ markets with dramatically different network configurations, regulatory environments, and legacy infrastructure generations, Vodafone brings exactly the diversity of operational data that makes a telco LLM generalizable rather than parochially optimized for one market.
AMD is the hardware wildcard — and arguably the most strategically interesting inclusion. Where most AI infrastructure conversations default to NVIDIA, AMD's presence signals that the Open Telco AI framework is designed with chip-level integration in mind, not just software. AMD's Instinct MI300X accelerators have emerged as a credible alternative to NVIDIA's H100/H200 in large-scale inference workloads, and building hardware-agnostic design into the framework from the start avoids locking operators into a single silicon supply chain. Given the ongoing GPU supply constraints that affected AI deployment timelines throughout 2024-2025, that flexibility has real operational value.
The Open Telco AI initiative has published three primary use case targets for its initial model release, each with specific performance benchmarks:
Network fault detection and root cause analysis. This is the highest-value, highest-urgency use case. Major network faults cost operators millions of dollars per hour in customer churn, SLA penalties, and emergency operations. Current AI-assisted fault detection systems can take 4-6 hours to identify root cause in complex multi-layer failures. The GSMA's target for Open Telco AI models is a 90% reduction in time-to-root-cause — bringing fault diagnosis from hours to minutes in most scenarios. That target is ambitious but not implausible: purpose-built models with access to real-time telemetry from the vendor systems generating the fault data are operating in a fundamentally different regime than a generic LLM asked to interpret log files.
Capacity planning and traffic prediction. Network capacity planning is currently a mix of statistical modeling, human judgment, and reactive over-provisioning. Operators routinely over-build capacity by 20-30% to buffer against unpredictable traffic spikes, because the cost of under-provisioning is severe and the models available aren't good enough to reduce that safety margin. AI-driven capacity planning, with access to historical traffic data across thousands of operator networks, can substantially improve prediction accuracy — reducing the over-provisioning buffer and freeing capital that would otherwise be locked in underutilized infrastructure.
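The buffer math behind that claim is worth making concrete. A flat safety margin sizes capacity to the worst observed peak plus 20-30%; a prediction-driven approach can instead size to a high quantile of the forecast peak distribution. The sketch below illustrates only the buffer arithmetic, on synthetic traffic data — a real system would forecast forward from cross-operator history rather than look backward:

```python
def provision_flat(peak_history, buffer=0.30):
    """Current practice per the article: build to the worst observed
    peak plus a flat 30% safety buffer."""
    return max(peak_history) * (1 + buffer)

def provision_quantile(peak_history, quantile=0.99):
    """Sketch of prediction-driven sizing: provision to a high quantile
    of the peak distribution instead of worst-case-plus-buffer. The
    quantile choice is an assumption for illustration."""
    ordered = sorted(peak_history)
    idx = min(len(ordered) - 1, int(quantile * len(ordered)))
    return ordered[idx]

# Synthetic daily peak traffic in Gbps with weekly seasonality (max 98)
peaks = [80 + (day % 7) * 3 for day in range(90)]

flat = provision_flat(peaks)       # 98 * 1.30 = 127.4 Gbps
smart = provision_quantile(peaks)  # 99th-percentile peak: 98 Gbps
stranded = flat - smart            # capacity the flat buffer over-builds
```

On this toy series the flat buffer over-builds by roughly 30% of peak — which is exactly the capital the article describes as "locked in underutilized infrastructure." Whether an AI forecaster can safely run that close to the observed peak is the open question the shared data pool is meant to answer.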
Customer service and network experience. This use case targets the operational intersection between network performance and customer-facing outcomes. When a customer calls to report dropped calls or degraded data speeds, the current workflow involves a customer service agent manually querying multiple network systems, often without the technical depth to interpret what they find. An LLM trained on network operations data can translate between customer-reported symptoms and network-level diagnostics — automatically identifying whether a customer's issue reflects a network fault, a device configuration problem, or a coverage gap, and routing the case to the appropriate resolution path without human interpretation at each step.
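The translation step described above is essentially a triage function: given a reported symptom and the network-side telemetry for that subscriber, pick a resolution path. A deployed system would be a model, not a rule table, and the categories and thresholds below are invented for illustration — but the input/output shape is the useful part:

```python
def route_customer_issue(symptom, telemetry):
    """Illustrative triage from customer-reported symptom plus network
    telemetry to a resolution path. Categories and thresholds are
    hypothetical, not from the Open Telco AI framework."""
    if telemetry.get("cell_fault_active"):
        return "network_ops"        # known fault on the serving cell
    if symptom == "no_coverage" and telemetry.get("rsrp_dbm", 0) < -115:
        return "coverage_planning"  # genuine coverage gap, not a fault
    if telemetry.get("device_config_mismatch"):
        return "device_support"     # APN / device configuration issue
    return "tier1_agent"            # no network-side signal: human follow-up

case_a = route_customer_issue("dropped_calls", {"cell_fault_active": True})
case_b = route_customer_issue("no_coverage", {"rsrp_dbm": -120})
# case_a → "network_ops", case_b → "coverage_planning"
```

The win the article describes is removing the human interpretation step at each hop: the agent never has to know what RSRP means, because the routing decision consumes the telemetry directly.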
The cost reduction projection — 30% or more in network operations expenditure — aggregates across these use cases, weighted toward fault detection and capacity planning where the financial stakes are highest. For a top-10 global operator with $3-5 billion in annual network operations costs, a 30% reduction represents $900 million to $1.5 billion in potential annual savings. That is the number driving executive sponsorship.
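For readers checking the headline math, the figures above follow directly from the article's stated range:

```python
def annual_savings(network_opex_usd, reduction=0.30):
    """The article's headline arithmetic: projected annual savings from
    a given fractional reduction in network operations spend."""
    return network_opex_usd * reduction

# Top-10 operator range cited in the article: $3-5B annual network opex
low  = annual_savings(3_000_000_000)   # $900M
high = annual_savings(5_000_000_000)   # $1.5B
```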
The GSMA's Open Telco AI announcement landed inside a conference that had committed its entire 2026 identity to intelligence as the defining theme. MWC 2026's official theme — "the IQ era" — was not subtle about the industry's direction.
The IQ era framing reflects a real structural inflection point in telecom. The industry spent the last decade building 5G infrastructure at enormous capital cost, and the business case for that investment has been slower to materialize than promised. Enterprise 5G, private networks, and network slicing — the revenue streams meant to justify the 5G capex cycle — have developed more slowly than the equipment vendors and optimistic analysts predicted. Operators are carrying billions in stranded 5G infrastructure costs while searching for the monetization model that justifies the build.
AI in network operations represents one answer: instead of waiting for external revenue from enterprise customers, use the intelligence of the network itself to reduce the cost of running it. Operators can justify AI investment on operational savings alone, independent of whether enterprise 5G revenues materialize on schedule. That makes telco AI an unusually robust business case — it doesn't require a new product to succeed, only operational improvement on something operators are already spending money on.
The GSMA's timing is also responsive to competitive pressure from hyperscalers. Microsoft, Google, and AWS have all announced telecom-specific AI products in the past 18 months, and all three are pitching operators on the idea that the public cloud is the right deployment model for network AI. The Open Telco AI initiative is, in part, a competitive response: an assertion that the industry should build and govern its own AI infrastructure rather than ceding that layer to companies that have no skin in the network operations game and significant commercial interests in deepening operator dependency on cloud platforms.
For additional context on the AI-native network intelligence push at MWC this year, see our coverage of NVIDIA's 6G AI-native platform announcements and the broader MWC 2026 Day One AI roundup.
No open framework announcement escapes the fundamental tension between "open" and "sustainable," and the GSMA's initiative is no exception.
The governance structure for the shared training data and pre-trained models has not been fully disclosed, but the GSMA's history with Open Gateway provides a template. That initiative used a contributor-access model: operators who contribute data and resources to the shared pool receive preferential access to the resulting models, while non-contributing operators can access the models on a licensed basis. The incentive structure rewards participation while still allowing the framework to spread across the industry.
The more complex question is intellectual property over the pre-trained models themselves. If a telco LLM is trained on operational data contributed by AT&T, Vodafone, and a dozen other operators, who owns the model weights? The GSMA's approach appears to be a foundation model held by the GSMA itself, with operator partners receiving usage rights and the ability to fine-tune on their own proprietary data. That structure keeps the core asset under industry governance rather than under any single operator's control — which is the right call for a framework meant to serve the whole industry, but it creates a governance body that needs to be trusted by all participants.
The Ericsson and Nokia participation is where the IP questions get most interesting. Vendor-contributed data — particularly proprietary telemetry schemas, configuration formats, and fault classification taxonomies — is valuable precisely because it is proprietary. The vendors are participating because they see customer success as aligned with their commercial interests (operators who successfully deploy AI tools on Ericsson networks are more likely to continue buying Ericsson equipment), but that alignment has limits. If the pre-trained telco LLMs become so capable that operators start reducing their dependence on vendor-provided managed services, the vendors' commercial calculus shifts. The GSMA will need to navigate that tension carefully as the framework matures.
The first pre-trained telco model is expected in Q3 2026 — a timeline that aligns with the typical 12-18 month lifecycle for a major foundation model training run, assuming the data sharing infrastructure is operational now and training begins in earnest in Q2.
What operators should realistically expect from the first release:
The initial model will almost certainly be narrowly scoped to the highest-value, most data-rich use case: network fault detection and root cause analysis. Fault logs are the most standardized and most abundant operational data type, and the ROI case is clearest. Capacity planning and customer service applications will follow in subsequent releases as the data pool grows and the models are fine-tuned on more diverse operational scenarios.
The API integrations for OSS/BSS systems will lag the model release by at least one quarter. Building and certifying integrations with the major OSS vendors — IBM, Amdocs, Ericsson OSS, Nokia NetAct — takes time, and operators will need those integrations to deploy the model in production environments rather than sandboxes. Expect the meaningful deployment wave to begin in Q1-Q2 2027, not immediately on model release.
Evaluation benchmarks — the publicly available test sets that operators can use to assess model performance on their specific network configurations — should be released alongside or slightly before the model, to give the ecosystem time to develop tooling around them. The GSMA's track record with Open Gateway benchmark development suggests this will be done thoughtfully, but the initial benchmarks will necessarily reflect the founding partners' network environments more than the full diversity of global operator configurations.
The Open Telco AI initiative will not fix the 84% failure rate overnight. Nothing does. Enterprise AI deployment cycles are slow, integration projects are expensive, and organizational change in large operators moves at the pace of procurement committees.
But the initiative matters because it attacks the structural cause of that failure rate rather than the symptomatic one. The 84% failure rate is not primarily a model quality problem — it is a data problem, an integration problem, and an alignment problem between what AI can do today and what telecom operations actually need. The Open Telco AI framework directly addresses all three.
More broadly, the initiative represents a test of whether incumbent industries can govern their own AI infrastructure in the face of hyperscaler competition. If it works — if the shared data pool grows, the pre-trained models perform well, and the integration standards get adopted — it becomes a template for other infrastructure industries facing similar dynamics: utilities, logistics, financial services. If it stalls in governance disputes and fails to attract operators beyond the founding partners, it becomes a cautionary tale about the difficulty of industry coordination at scale.
The GSMA has done this before. Open Gateway took four years from announcement to meaningful adoption, but it got there. The telco industry moves slowly and then all at once. Given the cost pressure operators are under and the size of the savings on the table, the incentive to make this work is as strong as it has ever been for any collaborative industry initiative.
The IQ era that MWC 2026 promised is not going to arrive by buying more general-purpose AI tools from hyperscalers. It is going to arrive — if it arrives at all — through exactly the kind of domain-specific, industry-governed, purpose-built infrastructure that the GSMA is now trying to build. The 84% failure rate is the problem. Open Telco AI is the hypothesis. Q3 2026 is when the experiment begins.
Sources: GSMA Newsroom | VentureBeat | Ars Technica