new AI browsers: the new cybersecurity risk

The new AI-based browsers

Convenience with high security risks

AI browsers OpenAI ATLAS and Perplexity Comet: Convenience with high security risks

New AI web browsers such as OpenAI’s Atlas and Perplexity’s Comet promise revolutionary, convenient interaction with web content and a profound simplification of work. However, according to experts, this integration poses an enormous risk for companies and could turn out to be a Trojan horse with significant cybersecurity risks.

The problem of extensive data collection and far-reaching system rights

AI browsers are designed to fundamentally change the way we work on computers. To do this, the integrated AI assistants require comprehensive access to sensitive, often personal and business data:

  • Personal information: emails, documents, appointments, media, and much more.

  • Transparent users: Users must make themselves “transparent” via the browser so that the AI can work effectively.

  • Comprehensive access rights: Browsers grant themselves far-reaching rights to collect personalized context.

  • AI training: All surfing behavior and online activities become part of AI training.

  • Cybersecurity gaps: Vulnerability to prompt injection attacks.

For companies, this poses a massive threat to data protection and the information security of sensitive company data that is processed or viewed via the browser.

Incalculable risk from AI jailbreak attacks using prompt injection

The deep integration of AI functions, which increases convenience, is also the greatest security risk. At asvin, our security researchers have already demonstrated vulnerabilities such as “jailbreaking” or “prompt injection” on AI agents, chatbots, and LLMs on several occasions:

  • Injecting hidden commands: Attackers place specially prepared, often invisible or misleading instructions (jailbreak prompts) in web pages, documents, or emails that are executed by the AI browser.

  • Bypassing security: The AI assistant reads and executes these hidden commands because it interprets them as part of the legitimate user request. This breaks through the language model’s internal security barriers (“guardrails”) .

  • Massive damage: These attacks are alarmingly simple to execute and cause serious damage.

Danger from autonomous AI agents

The situation becomes particularly critical when AI takes on autonomous tasks such as scheduling appointments or making automatic purchases. If the actual task is distorted by hidden instructions, AI agents can perform sensitive actions without the user’s knowledge:

  • Login abuse: The agent can log into online services (such as online banking or company portals) where the user is still actively logged in.

  • Data exfiltration: Transactions can be triggered, emails sent, or protected/sensitive data read and sent to external, third-party servers.

  • Stay away from AI browsers as long as these security flaws exist

    The vulnerability to jailbreak attacks and the extensive data collection pose an incalculable risk. Companies should urgently avoid using these AI browsers (Atlas, Comet) for business purposes at this time.

Lock with asvin logo

Conclusion: stay away from AI browser

The speed of development (“move fast, break things”) in the AI industry seems to come at the expense of data protection, information security, and cybersecurity. There is a lack of robust, transparent security architectures that neutralize prompt injection attacks and place sensitive actions under the manual control of the user. Until this is guaranteed, the clear recommendation is: stay away from AI browsers.

Talk to our experts at asvin if you would like to learn more about AI security and how you can protect your company and employees