Google Will Not Fix the ASCII Smuggling Issue in Its Gemini AI Assistant

Google Will Not Fix the ASCII Smuggling Issue in Its Gemini AI Assistant

Google developers have confirmed that they will not issue a fix for the so-called “ASCII smuggling” issue affecting their Gemini AI assistant. The flaw can be used to trick the model into producing false information, altering its behavior, or even poisoning training data with invisible payloads.


Understanding ASCII Smuggling

In an ASCII smuggling attack, characters from the Unicode Tags block are used to embed hidden instructions that are invisible to human users but readable by Large Language Models (LLMs). This technique exploits the difference between what a person sees and what the machine interprets similar to CSS-based prompt injections or graphical interface bypasses demonstrated in earlier AI security research.

Viktor Markopoulos, a security researcher at FireTail, tested the attack across several AI systems and found that Gemini (via Calendar invites and emails), DeepSeek (via prompts), and Grok (via posts on X) were all vulnerable.

By contrast, Claude, ChatGPT, and Microsoft Copilot proved more resilient, as they apply input sanitization to block hidden Unicode payloads.


Risks Within Google Workspace

FireTail’s researchers warn that Gemini’s tight integration with Google Workspace creates a particularly dangerous attack surface. Malicious actors could embed hidden text inside emails or Calendar invites, allowing them to silently influence how Gemini interprets and acts on content.

Markopoulos demonstrated that it is possible to insert malicious commands in the subject line of a Calendar invite, causing the assistant to modify meeting data or inject hidden links.

“If an LLM has access to a user’s mailbox, it’s enough to send an email containing hidden commands,” Markopoulos explained. “The model could then search for confidential messages, forward contacts, or assist in data collection—turning a standard phishing email into an autonomous breach tool.”

Furthermore, LLMs tasked with web browsing could unknowingly process invisible payloads hidden in online content such as product descriptions, retrieving malicious URLs and passing them to users.


Demonstration and Google’s Response

In a live test, the researcher managed to trick Gemini into recommending a malicious website, which the model falsely described as a legitimate phone retailer offering discounts.

Markopoulos reported the vulnerability to Google on September 18, 2025, but the company declined to classify it as a security issue. Google’s engineers argued that ASCII smuggling “does not represent a vulnerability” and can only be exploited in social engineering contexts.

Other firms have taken the opposite stance. For example, Amazon has published a detailed security advisory on Unicode character smuggling, emphasizing its potential for misuse in AI and web contexts.