TL;DR: Check Point Research disclosed two vulnerabilities in Claude Code, Anthropic's AI-powered coding assistant. CVE-2025-59536 (CVSS 8.7, High) enables remote code execution when a developer opens a malicious git repository. CVE-2026-21852 (CVSS 5.3, Medium) allows an attacker to silently exfiltrate the developer's Anthropic API key through the same attack vector. Neither flaw requires any special user interaction beyond an action developers perform routinely: cloning and opening a repo. Anthropic has issued patches; all Claude Code users should update immediately.
What you will learn
- What Check Point Research found
- CVE-2025-59536: how the remote code execution attack works
- CVE-2026-21852: the API key theft mechanism
- CVSS scores explained: what 8.7 and 5.3 actually mean
- Who is affected and at what scale
- The broader attack surface: AI coding tools as a supply chain vector
- Anthropic's response: patches, timeline, and transparency
- How to check if you are vulnerable and what to patch
- What developers should change immediately beyond the patch
- The larger pattern: AI tools as the new attack surface
- Frequently asked questions
What Check Point Research found
Check Point Research, the threat intelligence division of the Israeli cybersecurity firm Check Point Software Technologies, published a disclosure in early March 2026 detailing two distinct vulnerabilities in Claude Code — Anthropic's terminal-based AI coding assistant that developers use to write, review, debug, and explain code directly from their local development environment.
The disclosure is significant for two reasons. First, the attack vector is trivially achievable: opening a malicious git repository. This is not an obscure edge case. Developers clone and open repositories from GitHub, GitLab, Bitbucket, and private hosting dozens of times per week. Contributing to open source, reviewing pull requests, evaluating dependencies, onboarding to a new codebase — all of these activities begin with the same command: git clone. If a repository can weaponize that action against Claude Code users, the entire developer workflow becomes a threat surface.
Second, the severity of the outcomes is high. Remote code execution at CVSS 8.7 means an attacker gains the ability to run arbitrary commands on the developer's machine. API key theft at CVSS 5.3 means credentials that can access Anthropic's API — credentials that may be billed to the developer's account, used to impersonate them in automated systems, or leveraged to access other services using the same key — are silently extracted. These are not theoretical risks. They are concrete capabilities an attacker gains by convincing a developer to open a repository.
Check Point's disclosure follows a responsible disclosure process: the firm reported both vulnerabilities to Anthropic before publishing, giving the company time to develop and release patches. The CVE assignments — CVE-2025-59536 and CVE-2026-21852 — indicate the vulnerabilities were registered with MITRE's Common Vulnerabilities and Exposures database, making them part of the public record that security teams worldwide monitor for patch management purposes.
CVE-2025-59536: how the remote code execution attack works
CVE-2025-59536 is the more severe of the two vulnerabilities, scoring 8.7 on the CVSS v3.1 scale. It enables remote code execution when a developer opens a malicious git repository in Claude Code.
The attack exploits how Claude Code processes repository contents when a developer opens a project. Claude Code, like most AI coding assistants, is designed to be context-aware: it reads the files in your project to understand the codebase, follow conventions, and provide relevant suggestions. This context-awareness is a core feature. It is also the attack surface.
The exploit mechanism involves embedding specially crafted content in files that Claude Code automatically reads when a repository is opened. Claude Code's context-loading behavior — the process by which it ingests and processes project files to build its understanding of a codebase — can be manipulated so that the crafted content causes Claude Code to execute commands on the underlying operating system rather than simply read and analyze the file contents.
This class of vulnerability is known as prompt injection at the agent layer. Unlike prompt injection attacks against conversational AI systems, which typically aim to change what the model says, prompt injection in an agentic coding context — where the AI has the ability to run commands, write files, and interact with the operating system — can escalate to full code execution. The model does not merely say something unexpected; it does something unexpected with real-world consequences.
The exploitation path in practice would look like this: an attacker publishes a repository to GitHub or any public hosting service. The repository could be disguised as a legitimate open source library, a starter template, a coding exercise, or any other content that developers routinely clone. The attacker may share the link in developer communities, post it as an answer to a Stack Overflow question, or include it as a dependency reference in documentation. When a Claude Code user clones and opens the repository, the malicious payload executes with the same privileges as the developer's local user account.
No additional user interaction is required beyond opening the repository. The developer does not need to run any code from the repository, accept any prompts, or take any action beyond the cd into the directory and the invocation of Claude Code that begin every development session.
On a developer machine, that level of execution access is effectively complete compromise. Developer machines routinely contain SSH private keys, cloud provider credentials, database connection strings, .env files with API keys, browser session cookies, private code, and access to internal corporate networks via VPN. An attacker with arbitrary code execution on a developer machine has, in practice, an entry point to everything that developer can reach.
CVE-2026-21852: the API key theft mechanism
CVE-2026-21852 scores 5.3 on CVSS v3.1 and targets a more specific but highly valuable asset: the developer's Anthropic API key.
Claude Code requires an Anthropic API key to function. Developers provide this key through an environment variable, a configuration file, or Claude Code's own credential storage mechanism. The key is what authenticates Claude Code's requests to Anthropic's API and is what Anthropic bills against. A stolen API key is a credential, and credentials have real monetary and operational value.
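As a concrete illustration, the sketch below enumerates places a key of this kind commonly lives on a developer machine. The file paths are assumptions for illustration, not an official list of Claude Code's storage locations; ANTHROPIC_API_KEY is the environment variable the Anthropic SDKs conventionally read.

```shell
# Sketch: enumerate places an Anthropic API key commonly lives on a dev
# machine. File paths are illustrative assumptions, not an official list
# of Claude Code storage locations.
check_cred_locations() {
  # Environment variable conventionally read by the Anthropic SDKs
  [ -n "$ANTHROPIC_API_KEY" ] && echo "env: ANTHROPIC_API_KEY is set"
  # Assumed on-disk locations worth auditing
  for f in "$HOME/.claude.json" "$HOME/.claude/settings.json" "$PWD/.env"; do
    [ -f "$f" ] && echo "file: $f"
  done
  return 0
}
check_cred_locations
```

Anything this prints is a location an attacker with code execution on the machine could also read, which is why the two CVEs compound each other.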
The theft mechanism operates through a variant of the same attack vector as CVE-2025-59536: malicious content embedded in a repository that Claude Code processes. Where the RCE vulnerability exploits the context-loading behavior to execute commands, the API key theft vulnerability exploits Claude Code's access to its own configuration and credential storage to extract the API key and transmit it to an attacker-controlled endpoint.
The exfiltration can be made silent. From the developer's perspective, Claude Code appears to be functioning normally — reading the repository, generating responses, behaving as expected. The API key theft occurs in the background, embedded in what looks like routine context-processing behavior. The developer may not discover the theft until they notice unexpected API usage charges, receive a security alert from Anthropic, or conduct a deliberate audit of their credentials.
Stolen Anthropic API keys are useful to attackers in several ways. At minimum, they enable the attacker to consume Anthropic API credits billed to the victim — effectively stealing paid compute. More significantly, API keys tied to enterprise accounts can have elevated rate limits and access to models or capabilities that free-tier keys do not. In environments where the same API key is used across multiple systems — CI/CD pipelines, production deployments, internal tools — a compromised key represents a much broader access footprint than just Claude Code usage.
The CVSS score of 5.3 reflects a medium severity rating, but "medium" in CVSS terms applies to the base score before accounting for environmental factors. In a professional development environment where Claude Code API keys may be shared with organizational billing, linked to enterprise Anthropic accounts with extensive usage history, or reused across multiple services, the practical impact is significantly higher than the base score suggests.
CVSS scores explained: what 8.7 and 5.3 actually mean
The Common Vulnerability Scoring System (CVSS) provides a standardized numerical severity rating for vulnerabilities, ranging from 0.0 (no impact) to 10.0 (maximum severity). Understanding what the scores for these two CVEs mean in practice matters for how security teams and individual developers should prioritize response.
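The qualitative bands referenced below can be sketched as a small helper. The thresholds are the standard CVSS v3.1 qualitative severity ratings: Low 0.1–3.9, Medium 4.0–6.9, High 7.0–8.9, Critical 9.0–10.0.

```shell
# CVSS v3.1 qualitative severity bands, per the FIRST specification.
cvss_band() {
  awk -v s="$1" 'BEGIN {
    if      (s >= 9.0) print "Critical"
    else if (s >= 7.0) print "High"
    else if (s >= 4.0) print "Medium"
    else if (s >  0.0) print "Low"
    else               print "None"
  }'
}
cvss_band 8.7   # CVE-2025-59536
cvss_band 5.3   # CVE-2026-21852
```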
CVE-2025-59536 at CVSS 8.7 falls in the "High" severity band (7.0–8.9). The score reflects several factors: the attack vector is network-adjacent (a publicly accessible repository), no authentication is required on the part of the attacker, user interaction is minimal (opening a repository), and the impact on confidentiality, integrity, and availability of the victim system is high. An 8.7 score is not "Critical" (9.0+), but it is firmly in the range of vulnerabilities that security teams treat as urgent — typically expecting patch deployment within 30 days under standard vulnerability management policies, and within 7–14 days in environments with elevated risk profiles.
CVE-2026-21852 at CVSS 5.3 falls in the "Medium" severity band (4.0–6.9). The lower score reflects that the impact is more constrained — credential theft rather than full system compromise. However, CVSS base scores are calculated without reference to environmental context. Security teams apply "environmental scores" that modify the base rating based on the specific value of the affected assets and the potential downstream impact of exploitation in their environment. For most professional developers, an API key tied to an Anthropic account with organizational billing is an asset whose theft warrants treating this as a higher priority than the 5.3 base score implies.
Both scores assume the vulnerability has not yet been exploited in the wild. Once exploitation begins — if it has not already — the effective risk increases substantially because the attacker's cost of exploiting is low (publish a repository, share the link) while the defender's cost of auditing every previously opened repository for potential compromise is high.
Who is affected and at what scale
Claude Code is used by a large and rapidly growing population of professional developers. Anthropic has not published specific Claude Code user counts, but the tool has become a primary development environment for engineers working with Claude's API, building AI-native applications, and increasingly for general software development where teams have adopted Claude Code as their primary coding assistant.
The at-risk population includes every developer who:
- Has Claude Code installed on their machine
- Has opened any external git repository using Claude Code since the vulnerabilities were introduced
- Has an Anthropic API key stored in any configuration format accessible to Claude Code
This is not a narrow category. It encompasses individual developers, engineering teams at startups, engineering teams at enterprise organizations with Anthropic API contracts, open source contributors who use Claude Code to explore projects, and security researchers who use Claude Code to analyze codebases. The developer community's practice of routinely exploring public repositories — to evaluate dependencies, contribute to open source, reproduce issues — means that most active Claude Code users have almost certainly opened multiple external repositories over the tool's lifetime.
The risk is also not symmetric. Developers who primarily work within private, controlled codebases face lower exposure than developers who actively explore public repositories, evaluate open source dependencies, or contribute to community projects. The attack requires a developer to open a repository controlled by an attacker. But "repository controlled by an attacker" includes any repository that an attacker has compromised through a separate attack — dependency confusion, account compromise, or supply chain infiltration — in addition to repositories the attacker explicitly created to exploit Claude Code users.
The broader attack surface: AI coding tools as a supply chain vector
The Claude Code vulnerabilities are not isolated. They represent a category of risk that security researchers have been tracking with increasing urgency as AI coding assistants have become embedded in professional development workflows: the AI coding tool as a supply chain attack vector.
Traditional software supply chain attacks target the code itself — malicious packages, compromised dependencies, backdoored build scripts. AI coding tool attacks target the development environment at a layer above the code: the AI assistant that a developer trusts to help them understand, write, and debug that code. If an attacker can compromise the coding assistant, they can compromise the developer before any code is written or executed.
This attack surface has several properties that make it particularly dangerous.
Implicit trust. Developers extend a high degree of trust to their coding tools. An IDE, a linter, a compiler — these are tools the developer expects to have broad access to their system. An AI coding assistant that reads project files, runs commands, and interacts with the filesystem is trusted at the same level by default. This implicit trust means developers are unlikely to be suspicious of Claude Code behavior unless they are specifically watching for anomalies.
High-value targets. Developers are among the highest-value targets in a software supply chain context. Compromise a developer machine, and you gain access not just to personal files but to the code repositories, cloud environments, internal systems, and CI/CD pipelines they interact with professionally. The blast radius of a single developer compromise can include production systems serving millions of users.
Scale through open source. A single malicious repository can reach a large number of Claude Code users if it is published as a plausible open source project and promoted in developer communities. The attacker's cost is low; the potential reach is high.
The Claude Code vulnerabilities are the first high-severity CVEs assigned to a major AI coding assistant, but they will not be the last. Every AI coding tool that automatically processes repository contents — GitHub Copilot, Cursor, Codeium, Continue, and others — has a surface area that security researchers will probe with increasing intensity as these tools become more capable and more widely deployed.
Anthropic's response: patches, timeline, and transparency
Anthropic received Check Point's disclosure through a coordinated vulnerability disclosure process. The company confirmed both vulnerabilities, developed patches, and released updated versions of Claude Code before Check Point's public disclosure. This is the correct sequence: fix first, publish second.
The patches address the root cause of both vulnerabilities — the way Claude Code processes repository content during context loading — by implementing stricter validation and sanitization of content that influences Claude Code's behavior during project ingestion. The specific implementation details of the patches are not publicly documented at the level of code changes, which is standard practice to avoid providing exploitation guidance for users who have not yet patched.
Anthropic's public response to the disclosure has been measured. The company confirmed the CVE assignments, acknowledged Check Point's responsible disclosure, and directed users to update. What it has not provided — as of the disclosure date — is a detailed post-mortem explaining how the vulnerability was introduced, how long it existed before discovery, and what architectural changes beyond the immediate patches will prevent similar vulnerabilities in future versions.
That post-mortem matters for enterprise customers making risk assessments. A patch fixes the known vulnerability. A post-mortem answers the question every security team is actually asking: is the underlying code quality and review process sound enough that this was an isolated failure, or is it symptomatic of a broader gap in how Anthropic builds and reviews security-sensitive code in Claude Code?
Anthropic has built a reputation for thoughtful communication around safety and security in AI model development. Applying that same transparency standard to the security of its developer tools would strengthen enterprise confidence more than the acknowledgment of the vulnerabilities damaged it.
How to check if you are vulnerable and what to patch
The immediate priority for every Claude Code user is verifying they are running a patched version.
Check your Claude Code version. Run claude --version in your terminal. Anthropic has not published a formal advisory page with the specific version numbers that contain the fix as of this writing, but the patched versions were released following Check Point's disclosure coordination. If you have not updated Claude Code since early March 2026, assume you are running a vulnerable version.
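That check can be scripted by comparing the installed version against a minimum patched version with a version-aware sort. The MIN_PATCHED value below is a placeholder, since Anthropic has not published the fixed version number in a formal advisory as of this writing; substitute the real one from the release notes.

```shell
# Sketch: check the installed Claude Code version against an assumed
# minimum patched version. MIN_PATCHED is a placeholder; substitute the
# fixed version from Anthropic's release notes.
version_at_least() {
  # succeeds when $1 >= $2 under version-aware ordering
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

MIN_PATCHED="1.0.0"   # placeholder, not the real fixed version
installed=$(claude --version 2>/dev/null | grep -oE '[0-9]+(\.[0-9]+)+' | head -n 1)
if [ -n "$installed" ] && version_at_least "$installed" "$MIN_PATCHED"; then
  echo "Claude Code $installed looks patched (>= $MIN_PATCHED)"
else
  echo "Update Claude Code (installed: ${installed:-not found})"
fi
```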
Update Claude Code. Claude Code is installed via npm. Run npm update -g @anthropic-ai/claude-code or the equivalent for your installation method. If you installed via a package manager other than npm, use that package manager's update mechanism.
Audit recently opened external repositories. If you have opened any repository from an external source — public GitHub repos, repos shared via links in forums or documentation, third-party dependencies you cloned locally — treat those sessions as potentially compromised. Check for unexpected processes, outbound network connections during Claude Code sessions, and any new files created in your home directory or configuration directories during those sessions.
Rotate your Anthropic API key. If there is any chance you opened an external repository with a vulnerable version of Claude Code, rotate your API key immediately. In the Anthropic console, navigate to API keys and generate a new key. Update all systems that use the old key. The cost of rotating a key is low; the cost of a compromised key that you did not rotate is open-ended.
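After rotating, it is worth sweeping your working directories for lingering references to the retired key. A hedged sketch: search on a distinctive fragment of the old key rather than the full secret (the sk-ant- prefix in the usage comment reflects the conventional shape of Anthropic keys; adjust to yours).

```shell
# Sketch: find files that still reference the retired key. Pass a
# distinctive fragment of the old key, never the full secret.
audit_old_key() {
  frag="$1"
  dir="${2:-.}"
  # -r recurse, -I skip binary files, -l print matching file names only
  grep -rIl --exclude-dir=.git -- "$frag" "$dir"
}
# Usage: audit_old_key "sk-ant-fragment" ~/projects
```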
Check API usage. Review your Anthropic API usage in the console for any anomalous requests — usage patterns inconsistent with your normal development activity, requests from IP addresses you do not recognize, or usage spikes that do not correspond to your own development sessions.
What developers should change immediately beyond the patch
Patching the immediate vulnerability is necessary but not sufficient. These two CVEs expose a set of practices that should become permanent parts of a developer's security posture when using AI coding tools.
Treat AI coding tool processes as sandboxed where possible. Run Claude Code in environments where its access to sensitive credentials is limited. Consider using dedicated API keys for Claude Code that have restricted permissions and usage caps rather than the same key you use for production API access.
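One way to implement that separation, sketched under the assumption that Claude Code reads its key from the ANTHROPIC_API_KEY environment variable: launch it with a scrubbed environment so that a dedicated, usage-capped key is the only credential the process can see. CLAUDE_ONLY_KEY and the sandbox home path are hypothetical names for illustration.

```shell
# Sketch: run a command with a minimal environment so it cannot see
# unrelated credentials from your shell. CLAUDE_ONLY_KEY is a
# hypothetical dedicated, usage-capped key, not your production key.
run_claude_isolated() {
  mkdir -p "$PWD/.claude-sandbox-home"
  env -i \
    PATH="/usr/local/bin:/usr/bin:/bin" \
    HOME="$PWD/.claude-sandbox-home" \
    ANTHROPIC_API_KEY="$CLAUDE_ONLY_KEY" \
    "$@"
}
# Usage: export CLAUDE_ONLY_KEY=...; run_claude_isolated claude
```

This does not stop code execution inside the session, but it sharply limits what an exfiltration payload can steal.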
Separate credentials by use. Do not store production API keys, cloud provider credentials, or SSH private keys in the same home directory environment that Claude Code accesses during ordinary development sessions. Use a secrets manager, environment-specific credential stores, or hardware security keys that are not accessible to software running with standard user privileges.
Be deliberate about what repositories you open with AI tools. Before cloning and opening an external repository with Claude Code, consider who controls that repository and whether you have reason to trust them. For high-risk exploration — security research, dependency auditing, evaluating unfamiliar open source code — consider doing the initial exploration in a sandboxed or virtualized environment.
Monitor Claude Code's behavior. During Claude Code sessions, especially when working with external repositories, run a process monitor in a separate terminal. Any network connections Claude Code makes that are not directed at Anthropic's API endpoints are anomalous. Any file writes outside the project directory are anomalous.
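The file-write half of that monitoring is easy to script: take a timestamp before opening the repository, then diff afterwards. A crude, best-effort sketch using find, not a substitute for a real process monitor:

```shell
# Sketch: flag files under $HOME modified since a session started,
# excluding the project directory itself -- a crude tripwire for
# unexpected writes during a Claude Code session.
session_tripwire() {
  project_dir="$1"
  stamp_file="$2"
  find "$HOME" -type f -newer "$stamp_file" \
    -not -path "$project_dir/*" 2>/dev/null
}
# Usage:
#   touch /tmp/session-start                     # before opening the repo
#   session_tripwire "$PWD" /tmp/session-start   # after the session
```

Expect some noise from caches and shell history; anything outside those, such as new files in ~/.ssh or credential directories, deserves immediate attention.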
Apply the same supply chain scrutiny to your AI tools that you apply to your code dependencies. Pin Claude Code to specific versions in team environments rather than always running latest. Review release notes for each update. Track CVEs assigned to developer tools the same way you track CVEs in your application dependencies.
The larger pattern: AI tools as the new attack surface
The Claude Code CVEs are, in the broadest frame, an early data point in a pattern that the security industry will be dealing with for years: as AI tools become embedded in developer workflows, they become high-value targets for attackers who understand that the fastest path to a codebase is not through the code itself but through the developer who writes it.
The attack surface for software development has expanded dramatically. It used to be code, dependencies, and build infrastructure. It now includes AI coding assistants, model providers, context files, and the prompts and instructions that shape AI behavior in agentic contexts. Each of these new components is a potential attack vector that did not exist three years ago.
The security industry has decades of practice identifying, disclosing, and patching vulnerabilities in traditional software. It does not yet have equivalent maturity for AI tool security. The vocabulary is still being built — prompt injection, context poisoning, agentic exploit chains — and the tooling for detecting and preventing these attacks is nascent.
Check Point's disclosure is valuable precisely because it names the problem concretely. These are not theoretical risks; they are real CVEs assigned to a widely used developer tool with real CVSS scores that security teams can act on. The fact that they were discovered and disclosed responsibly, with patches available before public disclosure, is the best-case outcome for a vulnerability of this severity.
What comes next matters more than what happened this week. The industry needs vulnerability research focused specifically on AI coding tools, disclosure norms that are adapted to the agentic attack surface, and tool developers — Anthropic, Microsoft, Google, and others — who treat the security of their developer tools with the same rigor they apply to their model safety programs.
The Claude Code vulnerabilities are a test of whether the AI industry can absorb that lesson early, while the tools are still maturing, rather than after a major exploitation event forces the conversation.
Frequently asked questions
What is Claude Code and why does it run code on my machine?
Claude Code is Anthropic's terminal-based AI coding assistant. Unlike chat-based AI tools, Claude Code is designed as an agentic tool that can read your project files, run shell commands, write code, and interact with your development environment directly. This capability — running commands on your local machine — is what makes it useful for real development work and also what makes it a higher-stakes security target than a purely conversational AI system.
Do I need to have done anything unusual for these vulnerabilities to affect me?
No. The attack vector is opening a malicious git repository with Claude Code — an action that is part of ordinary development work. Any external repository you opened with a vulnerable version of Claude Code is a potential exposure. This includes public GitHub repositories, repositories shared in forums or documentation, and clones of dependencies you made to audit their source code.
Is my Anthropic API key definitely stolen if I opened an external repo with a vulnerable version?
Not necessarily. CVE-2026-21852 requires that an attacker specifically crafted the repository you opened to exploit this vulnerability. Not every external repository is malicious. However, because you cannot inspect a repository retrospectively with certainty to determine whether it contained the exploit payload, the safest response is to treat any API key used during potentially exposed sessions as compromised and rotate it.
Are other AI coding tools affected by these vulnerabilities?
The specific CVEs apply only to Claude Code as of this disclosure. However, the underlying vulnerability class — AI coding tools that automatically process repository contents and have access to credentials and system resources — applies to the entire category. Other tools have not been disclosed as having identical vulnerabilities at the time of writing, but the attack surface exists across the category and other researchers are likely examining it.
Does running Claude Code in a Docker container or virtual machine protect me?
Yes, with caveats. Running Claude Code in an isolated environment — a Docker container without access to your host credentials, a virtual machine, or a dedicated development sandbox — significantly limits the blast radius if an exploit is triggered. The attacker can still execute code within the container, but they cannot reach your host system's credentials, SSH keys, or cloud provider configuration. This is one of the best architectural choices for developers who regularly explore external repositories with AI coding tools.
What happened to the responsible disclosure timeline?
Check Point Research followed responsible disclosure norms: the firm reported both vulnerabilities to Anthropic before publishing, allowing time for patches to be developed and released. Anthropic confirmed the vulnerabilities, developed fixes, and released patched versions before Check Point's public disclosure. The CVE assignments were registered with MITRE. This is the correct process, and it worked as intended — patches were available before attackers had access to the technical details needed to exploit the vulnerabilities at scale.
Where can I follow updates on these CVEs?
Monitor the MITRE CVE database entries for CVE-2025-59536 and CVE-2026-21852. Follow Anthropic's security advisories and Claude Code release notes. Check Point Research's disclosure blog post contains the most technical detail available publicly. If you run Claude Code in an enterprise environment, subscribe to your organization's vulnerability management feed, which should pick up both CVEs through standard NVD feeds within days of MITRE registration.