Researchers Reveal “Gemini Trifecta” Vulnerabilities in Google’s AI Assistant
Security researchers have disclosed details of three now-patched vulnerabilities in the Google Gemini AI assistant, collectively dubbed the “Gemini Trifecta.” If successfully exploited, the flaws could have tricked the AI into assisting in data theft and other malicious activities.
The Three Vulnerabilities
According to researchers at Tenable, the issues affected three separate Gemini components:
- Prompt Injection in Gemini Cloud Assist
- This flaw allowed attackers to compromise cloud services by embedding malicious prompts in the User-Agent header of an HTTP request.
- When Gemini summarized logs pulled directly from services such as Cloud Run, App Engine, Compute Engine, Cloud Endpoints, Cloud Asset API, Cloud Monitoring API, and Recommender API, the hidden prompt was executed.
- Attackers could use this technique to uncover IAM misconfigurations or request sensitive resources, embedding the results in a hyperlink.
- Search Injection in Gemini Search Personalization Models
- This vulnerability enabled attackers to poison the victim’s Chrome search history using JavaScript.
- Because the model failed to distinguish between legitimate queries and injected prompts, attackers could steal saved information and location data.
- Indirect Prompt Injection in the Gemini Browsing Tool
- By placing a malicious prompt on a webpage, attackers could hijack Gemini’s internal summarization process.
- When Gemini summarized the page, it executed the attacker’s hidden instructions, allowing exfiltration of user data to a malicious server.
Notably, these exploits did not require Gemini to render links or images—data could be exfiltrated purely through hidden prompts embedded in requests.

Potential Impact
“One of the most dangerous attack scenarios looks like this: an attacker injects a prompt that instructs Gemini to request all publicly available resources or find IAM configuration errors, and then generate a hyperlink with this confidential data,” Tenable explained, using the Cloud Assist bug as an example. “This is possible because Gemini has permissions to request resources via the Cloud Asset API.”
In the search injection case, attackers only needed to lure victims to a specially crafted website to poison their browser history. Later, when Gemini Search Personalization was used, the malicious instructions were executed, resulting in the theft of sensitive data.
Google’s Response
After being notified, Google moved quickly to patch the vulnerabilities. Mitigations included:
- Disabling hyperlink rendering in log summaries
- Adding additional safeguards against prompt injection attacks
Broader Implications
The findings highlight how AI systems can act as both targets and tools in cyberattacks.
“The Gemini Trifecta vulnerabilities demonstrate that AI can become not only a target but also an attack tool. When implementing AI, organizations cannot neglect security,” the researchers emphasized.