OpenAI’s new ChatGPT Atlas browser is rolling out with powerful agentic features designed to read pages, click buttons, and carry out tasks on a user's behalf.
But almost immediately, security researchers are sounding the alarm, warning that this new agentic browsing mode creates a dangerous and highly exploitable new attack surface.
To understand just how serious these risks are and what it means for businesses and users, I talked it through with SmarterX and Marketing AI Institute founder and CEO Paul Roetzer on Episode 176 of The Artificial Intelligence Show.
A Dangerous New Attack Surface
While features like browser memories and an experimental agent mode sound useful, cybersecurity experts warn they are vulnerable to prompt injection attacks.
The core issue is that the AI agent can fail to distinguish between a user's trusted instructions and malicious, often hidden, instructions embedded in a webpage.
This effectively "collapses the boundary between data and instructions," as one expert noted in Fortune. A hidden prompt on a website could hijack the Atlas agent to exfiltrate a user's emails, overwrite their clipboard with malicious links, or even initiate malware downloads, all without the user's knowledge.
Unseeable Attacks and Clipboard Hijacks
This isn't just a theoretical threat. Security researchers are already demonstrating how these exploits work in the wild.
The browser company Brave detailed how "unseeable prompt injections,” or malicious commands hidden in faint text or even within screenshots, can be read and executed by AI agents.
Another researcher, known as Pliny the Liberator, highlighted the vulnerability of "clipboard injection." A user might think they are copying simple text from a webpage, but they could also be copying hidden instructions that command the AI agent to perform a malicious action the next time they paste.
As Brave’s research points out, if you are signed into sensitive accounts like your bank or email, simply asking the agent to summarize a Reddit post could result in an attacker stealing your data or money.
An Unsolved Security Problem
The security community's backlash was so swift that OpenAI’s Chief Information Security Officer, Dane Stuckey, released a public statement.
While Stuckey noted that the company performed "extensive red-teaming" and implemented "overlapping guardrails," he also made a critical admission: prompt injection remains a "frontier, unsolved security problem."
This response was telling.
"You could tell this became an issue real fast," says Roetzer. "This is very obviously not safe for work stuff."
The Business Takeaway: "Do Not Turn This On"
For any business leader or professional wondering if they should try Atlas, Roetzer’s advice is unequivocal.
"As the CEO of a company my first thing is like: do not turn this on. Do not use this unless it's in a very controlled environment and we know what we're doing," he says.
Beyond active attacks, the basic privacy implications are massive. Atlas's browser memories feature works by summarizing web content on OpenAI's servers. While the company claims it applies filters designed to keep out personally identifiable information (PII) like social security or bank account numbers, the key word is "designed."
"You are now trusting OpenAI that their filters work," Roetzer notes. "And that that stuff doesn't end up somewhere you don't want it to. Just to make this super clear to everybody, they monitor everything you do. It remembers everything you do, including all of your personal information and activity, and it summarizes all of that unless their data filters work correctly.”
Roetzer also pointed to a confusing setting that seems to imply users can decide whether OpenAI can use third-party copyrighted content they browse to train its models.
The End Goal vs. Today’s Reality
It's clear what OpenAI is trying to build.
"They're trying to shift behavior and really get you to treat ChatGPT as a platform for your life and your work," says Roetzer.
But this new browser is just a very early, and very risky, step in that direction. As noted programmer Simon Willison wrote, the "security and privacy risks involved here feel insurmountably high to me."
The bottom line for now? Experiment at your own risk.
Mike Kaput
As Chief Content Officer, Mike Kaput uses content marketing, marketing strategy, and marketing technology to grow and scale traffic, leads, and revenue for Marketing AI Institute. Mike is the co-author of Marketing Artificial Intelligence: AI, Marketing and the Future of Business (Matt Holt Books, 2022). See Mike's full bio.

