Chatgpt Atlas Browser Can Be Tricked By Fake Urls Into Executi...

Chatgpt Atlas Browser Can Be Tricked By Fake Urls Into Executi...

The newly released OpenAI ChatGPT Atlas web browser has been found to be susceptible to a prompt injection attack where its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit.

"The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to the agent," NeuralTrust said in a report published Friday.

"We've identified a prompt injection technique that disguises malicious instructions to look like a URL, but that Atlas treats as high-trust 'user intent' text, enabling harmful actions."

Last week, OpenAI launched Atlas as a web browser with built-in ChatGPT capabilities to assist users with web page summarization, inline text editing, and agentic functions.

In the attack outlined by the artificial intelligence (AI) security company, an attacker can take advantage of the browser's lack of strict boundaries between trusted user input and untrusted content to fashion a crafted prompt into a URL-like string and turn the omnibox into a jailbreak vector.

The intentionally malformed URL starts with "https" and features a domain-like text "my-wesite.com," only to follow it up by embedding natural language instructions to the agent, such as below -

Should an unwitting user place the aforementioned "URL" string in the browser's omnibox, it causes the browser to treat the input as a prompt to the AI agent, since it fails to pass URL validation. This, in turn, causes the agent to execute the embedded instruction and redirect the user to the website mentioned in the prompt instead.

In a hypothetical attack scenario, a link as above could be placed behind a "Copy link" button, effectively allowing an attacker to lead victims to phishing pages under their control. Even worse, it could contain a hidden command to delete files from connected apps like Google Drive.

"Because omnibox prompts are treated as trusted user input, they may receive fewer checks than content sourced from webpages," security researcher Martí Jordà said. "The agent may initiate actions unrelated to the purported destination, including visiting attacker-chosen sites or executing tool commands."

The disclosure comes as SquareX Labs demonstrated that threat actors can spoof sidebars for AI assistants inside browser interfaces using malicious extensions to steal data or trick users into downloading and running malware. The technique has been codenamed AI Sidebar Spoofing. Alternatively, it is

Source: The Hacker News