Researchers at Cybernews recently uncovered a critical security vulnerability in an AI-powered chatbot that could enable attackers to execute malicious code and steal sensitive data. The flaw was discovered in a the Lenovo AI chatbot “Name of Chatbox here” used for customer support.”
Lenovo’s Lena, which uses DeepAI’s GPT-4 technology, was vulnerable to a type of attack called Cross-Site Scripting (XSS). Essentially, this meant that malicious users could craft messages that trick Lena into executing harmful code. If successful, attackers could steal session cookies which are tiny bits of data that keep users logged in, and even gain access to internal company systems.
Essentially, with just one clever message, someone could potentially hijack a support agent’s session or access sensitive customer data.
The attack involved a single prompt—about 400 characters—that contained four key elements:
- A normal question: For example, asking Lena for product specs
- Instructions to respond in HTML where the hacker asks Lena to format its reply as a webpage, which opens the door for malicious code
- The malicious payload where the message includes hidden HTML code that tries to load an image from an attacker-controlled server. When the image fails to load, the browser is tricked into sending session cookies to the attacker
- Then a final instruction where the attacker prompts Lena to display the image, which triggers the malicious code.
When Lena generated this response, it stored the malicious HTML in the chat history. Later, if a support agent opened that chat, their browser could unknowingly execute the harmful code—sending sensitive data straight to the attacker.
This flaw reveals a fundamental security oversight where the system was not properly sanitizing, or cleaning, inputs and outputs. Large language models like GPT-4 follow instructions carefully but don’t inherently recognize malicious commands. Without proper safeguards, they can be manipulated into doing things they shouldn’t.
In all systems that intake and production of data, its critical to treat all as untrusted. Sanitize everything, clean and filter all data entering and leaving the system, and strip out any embedded code or HTML that could be harmful. Also limit permissions to ensure AI systems and chatbots run with only the permissions they need, and implement strong security policies such as Content Security Policies (CSPs) to restrict what resources (like scripts or images) can be loaded.
AI offers incredible benefits, but it also comes with new risks. As organizations adopt AI more widely, it’s vital to Regularly review, monitor, and update AI security protocols to catch new threats early.
Learn more about the vulnerability findings on Cybernews’ article post here.
Leave a Reply