On the morning of March 24, 2026, developers across the AI ecosystem were unknowingly running malware. LiteLLM — the open-source Python library that serves as the infrastructure backbone for hundreds of AI applications, boasting roughly 97 million monthly PyPI downloads — had been compromised. Two successive versions, v1.82.7 and v1.82.8, contained a credential-stealing payload that activated silently on Python startup. The attacker was a threat actor calling itself TeamPCP, and they had reached LiteLLM not through a flaw in LiteLLM's own code, but by poisoning the security tools that guard its development pipeline.
The incident is a textbook example of a cascading supply chain attack: compromise a security scanner, use that foothold to steal credentials, use those credentials to inject malware into a package with tens of millions of downloads. It is also a stark demonstration of the particular danger AI infrastructure packages face — LiteLLM's job is to hold and forward API keys for OpenAI, Anthropic, Google, and dozens of other AI providers, making its package a high-value exfiltration target.
PyPI quarantined the affected versions within approximately five and a half hours. But the attack window — roughly 10:39 to 16:00 UTC — was long enough to reach a substantial number of automated CI/CD pipelines and developer environments that pull the latest package version on each build.
What Is LiteLLM and Why Does It Matter
LiteLLM is the connective tissue of the modern AI application stack. It provides a unified interface for calling over 100 large language model APIs — OpenAI, Anthropic, Cohere, Mistral, Google Gemini, AWS Bedrock, and many others — through a single, standardized Python API. Developers use it to avoid vendor lock-in, route traffic across providers, implement fallback logic, and manage costs across multiple AI services.
That architectural role is precisely what made it a high-value target. Any application using LiteLLM to manage its AI provider relationships has those providers' API keys flowing through the LiteLLM runtime. A credential-stealing payload embedded in LiteLLM does not need to breach a database or intercept network traffic — it simply reads the keys from memory or environment variables at the moment they are used.
The package's scale amplifies the exposure. At 97 million monthly downloads, LiteLLM is not a niche library. It is foundational infrastructure used by startups, enterprise AI teams, and individual developers building everything from customer service bots to autonomous agents. Many of those deployments run LiteLLM in production environments where the API keys it handles carry billing authority worth thousands of dollars per month.
How TeamPCP Got In
The attack chain begins not with LiteLLM itself but with Trivy, the open-source container and code vulnerability scanner maintained by Aqua Security. Trivy is widely used in CI/CD pipelines to scan container images and application dependencies for known vulnerabilities — including, in many cases, the dependencies of AI applications like those built on LiteLLM.
TeamPCP compromised a Trivy-associated GitHub Action. The precise method of initial Trivy compromise is still under investigation, but researchers at GitGuardian and Snyk assess with high confidence that the attacker obtained a maintainer or workflow token with sufficient privilege to modify the action's behavior. The poisoned GitHub Action then ran within LiteLLM's own CI/CD pipeline — a deeply ironic outcome in which the security scanner became the attack vector.
From inside LiteLLM's pipeline, TeamPCP extracted PyPI publishing credentials. With those credentials, the attacker was able to upload modified versions of the LiteLLM package directly to PyPI under the package's legitimate namespace — no hijacking of the package name required, no need to create a typosquatting variant. The malicious packages appeared, to automated systems and casual inspection, as routine releases from the legitimate LiteLLM project.
This is what makes the attack particularly sophisticated: it bypassed the name-similarity checks that catch most PyPI attacks because it was publishing to the real package name with real credentials.
The Malicious Payload: litellm_init.pth
The technical mechanism TeamPCP used to deploy its credential-stealing code is both elegant and alarming. Rather than modifying LiteLLM's source files — a change that might have been detected by diff-based auditing — the attacker embedded the malicious code in a .pth file named litellm_init.pth.
Python's .pth file mechanism exists for a legitimate purpose: it allows packages to add paths to Python's module search path at interpreter startup. But a lesser-known property of .pth files is that any line beginning with import is executed as a Python statement immediately when the Python interpreter initializes — before any user code runs, before any imports in the target application, and before most runtime security monitoring tools have a chance to hook into the process.
This means litellm_init.pth executed its payload the moment any Python process started in an environment where the compromised LiteLLM was installed. The application did not need to import LiteLLM. The application did not need to call any LiteLLM function. Simply having the malicious package present in the Python environment was sufficient to trigger exfiltration.
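The mechanism is easy to demonstrate with a benign .pth file. The sketch below is illustrative only — the filename and marker variable are invented for the demo, not taken from the actual payload — and uses site.addsitedir() to replay the processing Python performs on site directories at interpreter startup.

```python
import os
import site
import tempfile

# Write a benign .pth file into a temporary directory. The filename and
# marker variable here are illustrative; the real payload was named
# litellm_init.pth and harvested credentials rather than setting a flag.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "demo_init.pth"), "w") as f:
    # A line beginning with "import " in a .pth file is executed as a
    # Python statement by the site machinery -- no application code runs.
    f.write("import os; os.environ['PTH_DEMO'] = 'executed'\n")

# site.addsitedir() processes the directory the same way the interpreter
# processes site-packages at startup, executing the .pth import line.
site.addsitedir(tmpdir)

print(os.environ.get("PTH_DEMO"))  # prints: executed
```

In a real installation this happens during interpreter initialization, which is why the payload fired before application imports and before most runtime monitoring could attach to the process.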
The payload itself was designed to harvest credentials: API keys for AI providers, authentication tokens, environment variables with common credential naming patterns, and configuration files that might contain service account credentials. The stolen data was exfiltrated to an attacker-controlled endpoint. Researchers analyzing the payload found it included logic to avoid triggering common rate-limiting or anomaly detection systems on the receiving end, suggesting the attacker had thought through operational security on the collection side.
Two Vectors, Two Versions
Security researchers identified that the two compromised versions — v1.82.7 and v1.82.8 — used distinct injection methods, which indicates either iterative refinement by the attacker or a deliberate use of different techniques to complicate forensic attribution.
In v1.82.7, the malicious code was injected through a different mechanism than the .pth approach used in v1.82.8, though both versions ultimately delivered the same credential-stealing capability. The existence of two successive compromised versions also means that a developer who saw v1.82.7 pulled from their registry and upgraded to v1.82.8 — a natural response to any reported issue with a prior version — would have remained exposed.
LiteLLM's own security update, published on March 26, confirmed that both versions should be treated as compromised and recommended immediate upgrade to v1.82.9 or later. The update also advised all users who had v1.82.7 or v1.82.8 installed to treat all credentials accessible from those environments as compromised and rotate them without delay.
The dual-version compromise also has implications for detection: organizations that scan only for the presence of known-bad package versions may have gaps in coverage if their scanning cadence missed the window during which these versions were live.
The Trivy and Checkmarx Exposure
TeamPCP's activity did not stop at LiteLLM. The same campaign compromised Trivy itself and affected Checkmarx, another widely deployed application security testing platform. As Kaspersky's analysis documented, the attacker appeared to be targeting the security toolchain specifically — a strategy that offers a significant multiplier effect.
Security scanners occupy a privileged position in development infrastructure. They run with broad filesystem access to scan code, they often have elevated permissions to query container registries and package managers, and they are typically trusted by CI/CD orchestration systems in ways that ordinary application dependencies are not. Compromising a security tool provides access to credentials and configurations that the tool legitimately needs in order to do its job.
This creates a deeply uncomfortable dynamic: the tools organizations use to detect supply chain attacks are themselves attractive supply chain attack targets. A security scanner that has been compromised to steal credentials while continuing to perform its nominal scanning function could operate undetected for extended periods — it continues to report vulnerabilities, continues to pass CI checks, and there is no functional signal that it has been compromised.
For Checkmarx users, the exposure path was similar: the compromised Trivy action was used in pipelines that also involved Checkmarx tooling, and the credential harvest extended to Checkmarx authentication tokens and API keys. Organizations using both tools in integrated pipelines faced compounded exposure.
The Attack Window and Response Timeline
The timeline of the incident, reconstructed from BleepingComputer's reporting and GitGuardian's technical analysis, shows how quickly a well-executed supply chain attack can propagate — and how the response, though reasonably fast, still left a multi-hour exposure window.
~10:39 UTC — The first malicious version (v1.82.7) appears on PyPI. Automated systems and developers pulling the latest LiteLLM version begin receiving the compromised package.
~11:XX UTC — v1.82.8, the second compromised version, is published, using the .pth file injection method. The attacker appears to have used this release to refine or replace the initial payload.
~14:XX UTC — Security researchers begin flagging anomalous behavior in LiteLLM packages. GitGuardian's automated scanning detects credential exfiltration patterns.
~16:00 UTC — PyPI quarantines both v1.82.7 and v1.82.8, making them unavailable for new downloads. The quarantine does not affect installations already completed.
~16:30 UTC — LiteLLM's maintainers publish an initial advisory. v1.82.9, a clean release, is pushed to PyPI.
The roughly five-and-a-half-hour window from first malicious upload to quarantine is meaningfully shorter than many historical PyPI compromise incidents, reflecting improvements in PyPI's automated detection and the security research community's capacity for rapid coordinated response. But for automated pipelines running on hourly or continuous builds, that window represents dozens of build cycles — each of which may have installed and run the compromised package in production-adjacent environments.
Who Is Affected and How to Know
The exposure question is complicated by the .pth auto-execution mechanism. Affected environments are not limited to applications that actively used LiteLLM during the attack window — they include any Python environment where v1.82.7 or v1.82.8 was installed, regardless of whether LiteLLM was imported or called.
As DreamFactory's technical breakdown notes, this means the blast radius extends to:
CI/CD build environments that used a floating version specifier such as litellm>=1.82 or litellm~=1.82. These pipelines would have received the malicious version on any build that ran during the attack window, and the .pth payload would have executed in the build runner's Python environment — potentially exfiltrating any credentials mounted as environment variables in that runner.
Development machines where developers had installed the latest LiteLLM for local development. Any API keys stored in .env files, shell environment variables, or configuration files in standard locations were potentially accessible to the payload.
Docker images built during the attack window with the compromised LiteLLM pinned or floating. These images may still be deployed in production if the vulnerable layer has not been rebuilt.
Virtual environments and conda environments that were not rebuilt after the attack window. Simply having the environment present on a machine constitutes exposure even if it was not actively used.
Checking for exposure involves verifying whether pip show litellm returns version 1.82.7 or 1.82.8, inspecting Python site-packages directories for litellm_init.pth, and reviewing build logs for any builds that ran between 10:39 and 16:00 UTC on March 24.
Snyk's advisory includes detection scripts and SBOM query patterns that organizations can use to audit their exposure across containerized environments.
Credential Rotation: What and How
For organizations that determine they were exposed, the immediate priority is credential rotation. The scope of rotation should be determined by what credentials were accessible in the affected Python environment during the attack window.
At minimum, organizations should rotate all AI provider API keys (OpenAI, Anthropic, Google, AWS Bedrock, and any others) that were present as environment variables or in configuration files accessible from affected environments. LiteLLM's architecture means these keys were likely present — that is, after all, the library's primary function.
Beyond AI provider keys, the rotation scope should extend to any other secrets that were environment variables or readable files in the affected environment: database connection strings, authentication tokens for internal services, cloud provider access keys, and any OAuth tokens that might have been mounted in the build or runtime environment.
GitGuardian's incident analysis recommends treating this as a full secrets audit rather than a targeted rotation, given that the payload was designed to sweep broadly for credential patterns rather than targeting specific key formats.
Organizations should also review their AI provider billing dashboards and API usage logs for any activity between March 24 at 10:39 UTC and the time they completed credential rotation — unexpected usage spikes or calls to unfamiliar endpoints could indicate that exfiltrated keys were used.
The Broader Supply Chain Security Problem
The LiteLLM compromise is not an isolated incident — it is the latest in a pattern of increasingly sophisticated attacks targeting the Python ecosystem and, specifically, AI infrastructure libraries.
The PyPI ecosystem faces structural challenges that make it an attractive attack surface. Package publishing is relatively easy to achieve with stolen credentials. The .pth auto-execution mechanism is a legitimate Python feature that cannot be disabled without breaking valid use cases. And the dependency graph of modern AI applications is deep and complex — a typical LLM-powered application may have hundreds of transitive dependencies, most of which receive minimal security scrutiny.
What makes the TeamPCP campaign distinctive is the decision to attack the security toolchain as the entry point. Previous supply chain attacks typically compromise the target package directly — through credential phishing of a maintainer, through a malicious pull request that a maintainer approves without careful review, or through a typosquatting package that mimics a legitimate name. TeamPCP's approach of compromising Trivy first, then using Trivy's pipeline access to steal LiteLLM credentials, represents a more sophisticated threat model.
This "poison the scanner" strategy has implications beyond this specific incident. It suggests that the security tools organizations rely on to detect supply chain attacks should themselves be treated as potentially untrusted components, subject to the same integrity verification and anomaly monitoring applied to application dependencies.
For AI-specific libraries, the threat model is further intensified by the concentration of high-value credentials. A single developer environment using LiteLLM to test AI application features may contain API keys for five or six different AI providers. A single compromised CI/CD runner building a production AI application may contain not just API keys but cloud provider credentials, database access tokens, and internal service accounts. The credential yield from a single successful infection is substantially higher than for a typical application dependency compromise.
What LiteLLM Is Doing
LiteLLM's maintainers responded quickly once the compromise was detected. The official security update published March 26 outlines both the technical details of the compromise and the remediation steps the team is implementing.
The team has committed to several process changes: mandatory two-factor authentication for all PyPI publishing credentials, removal of long-lived PyPI API tokens in favor of short-lived publishing credentials generated per-release, and a review of all third-party GitHub Actions used in CI/CD pipelines with a shift toward pinned action versions using commit SHAs rather than mutable version tags.
The commit SHA pinning change is particularly important. The standard practice of referencing a GitHub Action as uses: aquasecurity/trivy-action@v0.28.0 means that if the maintainer of that action pushes malicious code and retags v0.28.0 to point to the new commit, any pipeline using the action will execute the malicious code on its next run. Pinning to a commit SHA like uses: aquasecurity/trivy-action@abc123def456 ensures the pipeline runs exactly the code that was reviewed, regardless of subsequent changes to the tag.
The team is also working with PyPI's security team on post-incident analysis and has engaged external security researchers for a pipeline audit. LiteLLM's architecture makes its publishing credentials extraordinarily high-value, and the remediation reflects that reality.
What AI Teams Should Do Now
The LiteLLM incident has immediate operational implications for any team building on Python-based AI infrastructure.
Audit and update immediately. Any environment running LiteLLM v1.82.7 or v1.82.8 should be updated to v1.82.9 or later, and the environment should be inspected for the presence of litellm_init.pth in the site-packages directory. Find it with: python3 -c "import site; print(site.getsitepackages())" and check those directories.
Rotate all AI provider credentials. Treat this as a required action, not an abundance-of-caution measure. The payload was specifically designed to harvest API keys. Any key that was present as an environment variable in an affected environment during the attack window should be considered compromised.
Audit your GitHub Actions for mutable version references. The attack path through Trivy highlights the risk of using floating version tags in CI/CD pipelines. Every uses: line in your GitHub Actions workflows that references a mutable tag should be converted to a pinned commit SHA; several open-source utilities can rewrite tag references to full SHAs automatically.
Re-examine your security scanner trust model. Trivy and Checkmarx were not negligent in being targeted — they are high-value targets precisely because they are trusted. But that trust should be verified, not assumed. Pin security scanner versions, monitor their behavior for anomalies, and treat any output from a compromised scanner as potentially tainted.
Implement package integrity verification. PyPI supports package hash pinning through requirements.txt hash checking (pip install --require-hashes). This does not prevent the installation of a malicious package with valid credentials, but it prevents silent drift to unreviewed versions.
The LiteLLM incident is a reminder that the AI application stack, built on a foundation of open-source Python packages with complex dependency graphs and broad credential access, has a supply chain attack surface that has not received commensurate security attention. That calculus is changing, forced by incidents exactly like this one.
FAQ
Do I need to have called LiteLLM for my credentials to be at risk?
No. The .pth file mechanism executes code at Python interpreter startup, before any application code runs. If LiteLLM v1.82.7 or v1.82.8 was installed in your Python environment during the attack window, the payload ran in that environment regardless of whether your application imported or used LiteLLM. Any credentials accessible as environment variables or files in standard configuration paths in that environment should be considered potentially exfiltrated.
How did PyPI quarantine the packages? Can this happen again?
PyPI has tooling to quarantine packages that are flagged by automated scanning or reported by the security research community. Quarantine makes the package unavailable for new downloads but does not force an upgrade on existing installations. The same attack pattern could be attempted against any package whose publishing credentials are compromised — the quarantine response time has improved, but the window between a malicious upload and detection will never be zero.
Why was a .pth file used instead of modifying LiteLLM's source code directly?
The .pth approach offers two advantages for an attacker. First, it executes before application code, before most runtime monitoring hooks, and regardless of whether the package is actively imported. Second, a diff of LiteLLM's Python source files between a clean and compromised version might catch a malicious modification to __init__.py or a core module; the presence of an extra .pth file in the package's installed files is a less obvious anomaly, especially for security tools that diff source code rather than inspecting installed package manifests.
What happened to the stolen credentials?
TeamPCP's exfiltration infrastructure was active during the attack window. Researchers were able to identify the collection endpoint, and it was reported to the relevant hosting provider for takedown. Whether credentials collected during the attack window have been used or sold to other threat actors is not publicly known. This is precisely why proactive rotation — rather than waiting for evidence of misuse — is the recommended response.
Is LiteLLM safe to use now?
Yes, versions of LiteLLM from v1.82.9 onward are clean. The LiteLLM team has confirmed the removal of the malicious components and implemented process changes to reduce the risk of credential theft enabling a future attack. Organizations should update and rotate credentials, then continue using the library with normal dependency management practices including version pinning.
Were other AI infrastructure packages affected?
The confirmed affected packages are LiteLLM (v1.82.7 and v1.82.8), with Trivy and Checkmarx tooling also compromised as part of the same campaign. Security researchers are continuing to assess whether other packages in the AI infrastructure ecosystem were targeted during the same campaign. Teams using packages with significant AI provider credential access should treat this as an opportunity to audit their dependency security posture broadly, not just for LiteLLM specifically.