Hackedgpt – 7 New Vulnerabilities In Gpt-4o And Gpt-5 Enables...
Seven critical vulnerabilities in OpenAI’s ChatGPT, affecting both GPT-4o and the newly released GPT-5 models, that could allow attackers to steal private user data through stealthy, zero-click exploits.
These flaws exploit indirect prompt injections, enabling hackers to manipulate the AI into exfiltrating sensitive information from user memories and chat histories without any user interaction beyond a simple query.
With hundreds of millions of daily users relying on large language models like ChatGPT, this discovery highlights the urgent need for stronger AI safeguards in an era where LLMs are becoming primary information sources.
The vulnerabilities stem from ChatGPT’s core architecture, which relies on system prompts, memory tools, and web browsing features to deliver contextual responses.
OpenAI’s system prompt outlines the model’s capabilities, including the “bio” tool for long-term user memories enabled by default and a “web” tool for internet access via search or URL browsing.
Memories can store private details deemed important from past conversations, while the web tool uses a secondary AI, SearchGPT, to isolate browsing from user context, theoretically preventing data leaks.
However, Tenable researchers found that SearchGPT’s isolation is insufficient, allowing prompt injections to propagate back to ChatGPT.
Among the seven vulnerabilities, a standout is the zero-click indirect prompt injection in the Search Context, where attackers create indexed websites tailored to trigger searches on niche topics.
Here are short summaries of all seven ChatGPT vulnerabilities discovered by Tenable Research:
Tenable demonstrated full attack chains, such as phishing via blog comments leading to malicious links or image markdowns that exfiltrate info using url_safe bypasses.
Source: Cybersecurity News