Recent research by Check Point Research has uncovered critical vulnerabilities in Anthropic’s Claude Code, highlighting a growing and often overlooked risk in modern AI-powered development tools: configuration files that quietly cross the line from passive settings into active execution.
The flaws allowed attackers to execute arbitrary commands and steal authenticated API keys simply by convincing a developer to clone and open a malicious repository. No additional interaction was required. In enterprise environments where AI tools are deeply embedded into workflows, the implications are far-reaching.
Claude Code was designed to improve collaboration by allowing project-level configuration files to live directly inside code repositories. When a developer opens a project directory, these configurations are automatically applied to streamline setup and automation.
However, researchers demonstrated that these same configuration files could be weaponized. In certain scenarios, simply opening an untrusted repository could:
- Execute hidden shell commands on a developer’s machine
- Bypass built-in trust and consent mechanisms
- Expose active Anthropic API keys
- Expand impact beyond a single endpoint into shared enterprise workspaces
What appeared to be harmless metadata effectively became an execution layer. This transformed the act of opening a project into a supply chain risk.
Claude Code supports Hooks, an automation feature that runs predefined actions when a session starts. Researchers found that malicious repositories could abuse this capability to run arbitrary shell commands automatically. The result is that a developer could unknowingly trigger command execution simply by opening the project, without clicking, approving, or running anything manually.
One vulnerability, tracked as CVE-2025-59536, allowed bypassing of user Consent with MCP integrations. Claude Code integrates with external tools through the Model Context Protocol (MCP). While safeguards were intended to prompt users for approval before initializing these integrations, repository-controlled configuration settings could override those prompts. This allowed external tools to initialize before the user granted consent and without clear visibility into what was running, despite existing trust warnings. When execution happens before trust is established, control shifts away from the user. This represents a fundamental breakdown of the security model.
The second vulnerability, trackd as CVE-2026-21852, allowed API key theft before trust confirmation. Claude Code authenticates to Anthropic’s services using an API key sent with every request. By manipulating repository-level configuration, attackers could redirect API traffic which includes authorization headers to attacker-controlled infrastructure before the user confirmed trust in the project. This allowed exfiltration of active API keys and interception of authenticated API traffic without any visible warning. In collaborative AI environments, this kind of exposure can quickly scale beyond a single user.
Anthropic’s platform supports Workspaces, where multiple API keys share access to cloud-stored project files. These files belong to the workspace, not an individual key.
A single compromised key could potentially allow:
- Access to shared project data
- Modification or deletion of files
- Upload of malicious content
- Generation of unexpected and costly API usage
What starts as a developer-level compromise can rapidly become a team-wide or organization-wide incident.
Check Point Research worked closely with Anthropic throughout the disclosure process with implemented fixes that strengthened trust and consent prompts, prevented external tool execution before approval, and blocked API communication until trust is explicitly confirmed.
These findings reflect a broader shift in the software supply chain threat model. In AI-driven development environments, repository configuration files no longer function solely as instructions, but actively influence execution, integration, and network behavior. As AI-powered coding tools become standard in enterprise environments, the risk is no longer limited to running untrusted code and now includes opening untrusted projects, requiring organizations to reassess trust boundaries, governance models, and supply chain security assumptions.

Leave a Reply