Security researchers have demonstrated a new way that artificial intelligence features can be misused without exploiting traditional software bugs. In recent research published by application security firm Miggo, a standard Google Calendar invite was used to influence Google’s Gemini AI assistant and bypass expected privacy boundaries using language alone.
The finding highlights a growing concern among security professionals as AI systems gain access to user data and application tools. Miggo responsibly disclosed the issue to Google, which confirmed the findings and implemented mitigations. While the vulnerability has been addressed, researchers say the technique reveals a broader challenge facing AI-powered applications.
A New Kind of Attack Without Malicious Code
The research focuses on Google Gemini’s integration with Google Calendar, where the assistant is designed to read calendar events and answer routine questions such as whether a user is free at a certain time or what meetings are scheduled for the day. That same functionality also created an unexpected attack surface.
Because Gemini interprets calendar event descriptions as part of its contextual understanding, an attacker who can influence that text can plant instructions that are later acted upon by the AI.
Unlike traditional exploits, the payload used in the research did not contain suspicious characters, scripts, or obvious indicators of abuse. It was written in plain language and appeared similar to a legitimate user request. The risk emerged only when Gemini later processed the event and combined it with a user’s routine question.
In the demonstrated scenario, a malicious calendar invite was sent to a target user. The invite contained hidden instructions embedded within the event description. These instructions remained inactive until the user later asked Gemini a normal question about their schedule.
When that happened, Gemini parsed the calendar context, followed the embedded instructions, and performed actions that went beyond what the user intended. In the researchers’ testing, this resulted in private calendar information being summarized and written into a newly created calendar event.
In some enterprise configurations, that new event could be visible to other parties, creating a pathway for sensitive scheduling data to be exposed without the user ever clicking a link or approving an action.
From the user’s perspective, Gemini appeared to behave normally, returning an innocuous response. The unintended activity happened quietly in the background.
Security teams have traditionally built defenses around predictable patterns, like SQL injection or cross-site scripting, which rely on structured input. The calendar invite exploit illustrates a different kind of risk, known as a semantic vulnerability: the language in the payload appeared harmless on its own, but became dangerous when interpreted by an AI system with access to tools and user data.
This dynamic makes traditional filtering and detection techniques less effective. Keyword blocking and pattern matching cannot reliably distinguish between benign instructions and harmful ones when both are written in natural language.
AI as an Application Layer
The research describes Gemini as more than a conversational interface, showing that it also accessed tools and APIs.
When natural language becomes an interface to application workflows, the boundary between user input and application logic becomes less rigid. That boundary is increasingly enforced through interpretation and context rather than strict rules.
Securing this layer requires approaches that account for context, intent, and the downstream effects of AI-initiated actions.
While this specific issue has been mitigated, the researchers say the underlying lesson applies across the industry. As AI features are embedded into productivity tools, messaging platforms, and enterprise systems, the risk surface expands beyond traditional code vulnerabilities.
Attacks may increasingly arrive in the form of documents, messages, calendar entries, or other content that appears harmless but is designed to influence how an AI system behaves later.
The research highlights broader considerations for securing AI-powered features beyond prompt filtering or model behavior.
For organizations adopting AI-powered assistants, protecting users will require security controls that account for both language and intent.

Leave a Reply