A new zero-click vulnerability, dubbed ShadowLeak, has been discovered in OpenAI’s ChatGPT Deep Research agent, according to a report by The Hacker News. The flaw has the potential to expose Gmail inbox data to attackers, without any direct user interaction, simply by sending a malicious email to a victim.
This security breach underscores the growing concerns around the intersection of AI tools and cloud-based personal data. Let’s break down what this means for both businesses and individuals, how it works, and what steps can be taken to protect sensitive data from similar attacks.
ShadowLeak is a sophisticated attack method identified by security researchers at Radware, involving a zero-click vulnerability in OpenAI’s ChatGPT-powered Deep Research agent. The flaw enables a threat actor to craft an email that, when opened by a victim, silently extracts sensitive data from the victim’s Gmail inbox. This exfiltration occurs without the victim’s knowledge or any need for them to click or respond to the email in any way.
This attack works by leveraging a hidden prompt injection embedded within an email’s HTML. Using tricks like white-on-white text or tiny font sizes, attackers are able to subtly embed instructions that ChatGPT’s Deep Research agent interprets, without the victim noticing anything unusual. These instructions direct the AI to access their Gmail inbox and transmit personal information to an external server.
Once the malicious email is received, the attacker only needs the victim to engage ChatGPT’s Deep Research agent to initiate the extraction of sensitive data. This exploit can be extended to other connectors that ChatGPT supports, such as Dropbox, Google Drive, Microsoft Outlook, and more, significantly broadening the attack surface.
How Does It Work?
The process begins with an attacker sending a seemingly benign email containing invisible instructions. These instructions are encoded into the email’s HTML or styled using CSS to remain hidden from the user. When the victim interacts with ChatGPT’s Deep Research agent—either by prompting it to analyze their emails or another connected service—the agent unknowingly follows the malicious instructions embedded in the email.
The instructions direct the agent to extract personal identifiable information (PII) from the victim’s inbox, encode it into Base64 format, and send it to the attacker’s external server using the browser.open() tool.
Because the extraction occurs within OpenAI’s cloud infrastructure and bypasses any local or enterprise-level defenses, the attack is particularly dangerous and hard to detect. Traditional security measures like antivirus programs and email filters would likely miss this type of exploit, as the threat operates directly within the AI’s cloud environment.
The major issue with the attack is the stealthiness. Unlike other client-side attacks, such as AgentFlayer and EchoLeak, this vulnerability operates within the AI’s cloud ecosystem. This makes it invisible to local defenses, as the data is exfiltrated through the cloud, not via the client device. This lack of visibility could leave individuals and organizations unaware of the breach, as it bypasses traditional security layers.
Furthermore, because Deep Research agents are increasingly integrated into productivity tools like Gmail, Box, Google Drive, and Microsoft Outlook, the attack can affect a wide array of users and organizations, escalating the risk.
It’s not an isolated incident, as researchers have also demonstrated how AI can be coaxed into bypassing other types of security measures, such as image-based CAPTCHAs. In these cases, malicious actors can exploit AI systems to solve CAPTCHAs intended to differentiate between human users and bots. The attacker redefines the CAPTCHA as “fake” and manipulates the system to bypass it, underscoring the importance of context integrity and memory hygiene in AI systems.
As AI tools like ChatGPT continue to evolve, the risk of these types of attacks becomes more pronounced, especially as they integrate further into business workflows, cloud storage solutions, and personal communication platforms.
Steps You Can Take to Protect Data
Given the rising frequency of these types of vulnerabilities, both individuals and organizations need to be proactive about cybersecurity to reduce risk of similar attacks:
Monitor AI Interactions
Ensure that any AI-driven tools, such as ChatGPT, are regularly monitored for unusual behavior or unauthorized access to sensitive data.
Keep Software Up to Date
Regularly update security patches for all software and platforms, including AI tools and cloud integrations, to minimize known vulnerabilities.
Be Wary of Suspicious Emails
Even with AI-powered email filtering, it’s still important to avoid interacting with suspicious emails, especially those from untrusted sources.
Limit AI Integrations
If possible, limit the integrations that ChatGPT or other AI systems have with sensitive accounts (such as Gmail or Dropbox). Disabling unnecessary integrations will reduce the attack surface.
Use Encryption & Implement AI-Specific Security Measures
Always encrypt sensitive data stored on cloud platforms and consider using multi-factor authentication (MFA) for added security. And as AI systems become more integrated into business operations, it is crucial to implement specialized security measures that protect against AI-specific exploits.
Emerging threats show how AI systems can potentially be used to exploit cloud-based services in ways that are difficult to detect. While this specific issue has been addressed, as we move deeper into the AI-driven future, both individuals and organizations must stay vigilant, implement robust security protocols, and stay updated on emerging threats to ensure their data remains safe.
For more details on the ShadowLeak vulnerability and how AI tools can be exploited in cyberattacks, check out the full article on The Hacker News here.
.
Leave a Reply