Gartner Analysts Recommend Blocking AI Browsers

Gartner Analysts Recommend Blocking AI Browsers

The analytical firm Gartner has issued guidance for businesses to stop using AI browsers. Analysts warn that these new tools with autonomous agent functions create significant risks—from data leaks to unintended purchases of unnecessary goods.

This month, Gartner released a report titled "Cybersecurity Should Block AI Browsers For Now." In the document, Research Vice President Dennis Xu, Senior Director Analyst Evgeny Mirolyubov, and Analyst John Watts state that "AI browser default settings prioritize user convenience over security."

The researchers classified tools like Perplexity Comet and ChatGPT Atlas as AI browsers, which possess two key characteristics:

  • A sidebar AI panel that enables summarizing, searching, translating, and interacting with web content via AI services
  • Agent functions that allow the browser to autonomously navigate websites, interact with them, and perform tasks (particularly within authenticated sessions)

The specialists warn that these sidebars pose a serious threat: confidential data (including active web content, browser history, and open tabs) is often transmitted to the cloud backend of the AI service, increasing the risk of information leakage. Per the report, this problem can only be addressed through strict security and privacy rules combined with centralized management.

The document recommends evaluating the security posture of the AI service backend to determine if the risk level is acceptable for the organization.

However, even if an organization determines that the cloud AI service meets security requirements, Gartner recommends warning users that any information visible in the browser could potentially be transmitted to the backend. Employees should avoid working with confidential data in open tabs while the AI sidebar is active.

If the backend is deemed unsafe, Gartner advises blocking the installation and use of AI browsers for all employees.

Analysts express particular concern about the agent capabilities of AI browsers. According to the report, potential problems include actions by compromised agents caused by indirect prompt injections, erroneous actions based on incorrect AI reasoning, and possible credential loss or leakage if the AI browser is manipulated into autonomously visiting phishing sites.

The report authors also warn that employees "may succumb to the temptation to use AI browsers to automate certain tasks that are mandatory, repetitive, and not particularly interesting." Experts describe a scenario where an employee might have an AI browser complete mandatory cybersecurity training on their behalf.

Another potentially dangerous scenario involves internal procurement systems, where LLMs could make errors leading to unnecessary organizational expenses.

"A form could be filled out incorrectly, resulting in ordering the wrong office supplies, booking tickets for the wrong flight, and so on," the authors write.

The report suggests that protective measures—for example, prohibiting AI agents from accessing email—could help limit agent capabilities. AI browsers can also be configured not to save data.

The analysts conclude that AI browsers are currently too dangerous to use without prior risk assessment. Even after such analysis, organizations will likely need to create extensive lists of prohibited use cases and monitor compliance with these restrictions.