Tools: Powerful Openai API Logs: Unpatched Data Exfiltration
OpenAI’s API log viewer is vulnerable to a data exfiltration attack, exposing apps and agents that use OpenAI APIs, even if developers (and Agent Builder users) leverage all available defenses. The vulnerability was disclosed to OpenAI, but was closed with the status 'Not applicable' after 4 follow-ups.
Edit for clarity: The attacker is a third party who poisons a data source used by the AI tool with an indirect prompt injection — the app user triggers the injection by querying the assistant, and when the conversation logs are opened by the developer (since the response is flagged for review), data is exfiltrated to the third-party attacker who poisoned the data used by the app.
The OpenAI Platform interface has a vulnerability that exposes all AI applications and agents built with OpenAI ‘responses’ and ‘conversations’ APIs to data exfiltration risks due to insecure Markdown image rendering in the API logs. ‘Responses’ is the default API recommended for building AI features (and it supports Agent Builder) — vendors that list OpenAI as a subprocessor are likely using this API, exposing them to the risk. This attack succeeds even when developers have built protections into their applications and agents to prevent Markdown image rendering.
Attacks in this article were responsibly disclosed to OpenAI (via BugCrowd). The report was closed with the status ‘Not applicable’ after four follow-ups (more details in the Responsible Disclosure section). We have chosen to publicize this research to inform OpenAI customers and users of apps built on OpenAI, so they can take precautions and reduce their risk exposure.
Additional findings at the end of the article impact five more surfaces: Agent Builder, Assistant Builder, and Chat Builder preview environments (for testing AI tools being built), the ChatKit Playground, and the Starter ChatKit app, which developers are provided to build upon.
An application or agent is built using the OpenAI Platform
In this attack, we demonstrate a vulnerability in OpenAI's API log viewer. To show how an attack would play out, we created an app with an AI assistant that uses the ‘responses’ API to generate replies to user queries.
For this attack chain, we created an AI assistant in a mock Know Your Customer (KYC) tool. KYC tools enable banks to verify customer identities and assess risks, helping prevent financial crimes — this process involves sensitive data (PII and financial data provided by the customer) being processed along
Source: HackerNews