In a recent discovery by researches at Legit Security, vulnerabilities were found in GitLab Duo, the AI assistant integrated into GitLab (and powered by Anthropic’s Claude). The uncovered issue demonstrated how hidden code in GitLab could manipulate Duo into leaking private source code, injecting malicious HTML, and even exfiltrating confidential vulnerabilities.
GitLab Duo is designed to assist developers with tasks such as code suggestions and security reviews.It’ss deeply integrated into the GitLab ecosystem, but this also created a potential attack surface.
The vulnerability centered around remote prompt injection — a flaw that allowed attackers to manipulate Duo’s responses by embedding hidden instructions in various parts of GitLab projects, including:
- Merge request (MR) descriptions
- Commit messages
- Issue descriptions and comments
- Source code itself
These hidden instructions could influence Duo’s behavior, leading it to suggest malicious code, present unsafe URLs as safe, or even leak private information when responding to code or security-related queries.
How the Attack Worked
The hackers used several techniques to hide malicious prompts in GitLab, including hiding them within Unicode or white text made it harder for Duo to detect and utizling Duo’s use of asynchronous markdown rendering for raw HTML (such as or tags) to be injected into its responses.
This combination of vulnerabilities allowed attackers to manipulate Duo into revealing private source code or security information, such as zero-day vulnerabilities, by embedding hidden prompts inside public-facing parts of GitLab projects. When a victim interacted with GitLab Duo, the AI would process these instructions, inject malicious HTML into its response, and inadvertently leak sensitive data.
Full Attack Scenario
- An attacker embeds a hidden prompt in a public merge request or comment
- A victim uses GitLab Duo for a code review or merge request analysis
- Duo processes the hidden prompt and injects malicious HTML (such as an tag)
- The browser tries to render the tag, unknowingly sending sensitive data such as private source code to the attacker’s server
While the main focus of the attack was exfiltrating source code, the same technique could also expose sensitive project data, such as private issue discussions or internal security disclosures. This could allow attackers to gain access to confidential vulnerabilities or zero-day information.
GitLab addressed the issue quickly after being alerted to the vulnerabilities, releasing a patch to prevent Duo from rendering unsafe HTML tags, thereby closing the loophole that allowed attackers to exfiltrate data via malicious HTML.
AI assistants can carry risks when deeply integrated into development workflows. While tools such as GitLab Duo that interact with user-generated content can be incredibly helpful, they also introduce new attack surfaces if not properly secured to prevent exploitation.
Leave a Reply