Previously Harmless Google API Keys Now Expose Gemini AI Data
Google API keys for services like Maps, embedded in publicly accessible client-side code, can now be used to authenticate to the Gemini AI assistant and access private data.
Researchers found nearly 3,000 such keys while scanning web pages from organizations across multiple sectors, including Google itself.
The problem emerged when Google introduced its Gemini assistant and developers began enabling the LLM API in their projects. Before this, Google Cloud API keys were not considered sensitive data and could be exposed online with relatively little risk.
Developers use API keys to extend functionality in a project, such as loading a Maps widget on a website to share a location, embedding YouTube videos, tracking usage, or connecting to Firebase services.
When Gemini was introduced, however, Google Cloud API keys also became authentication credentials for Google's AI assistant.
Researchers at Truffle Security discovered the issue and warned that attackers could copy an API key from a website's page source and access private data available through the Gemini API service.
Because Gemini API usage is billed, an attacker could also leverage the access to make API calls at the victim's expense.
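The core of the risk is that the Gemini REST API accepts a plain API key as its only credential, passed as a query parameter, with no OAuth flow required. The sketch below illustrates this under stated assumptions: the endpoint and request shape follow Google's public Gemini API documentation, the model name may differ by account, and the key shown is a placeholder, not a real credential. The request is only constructed, not sent.

```python
import json

# Public Gemini REST endpoint (per Google's documentation); the model name
# is an assumption and may vary by account and API version.
GEMINI_ENDPOINT = "https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent"

def build_gemini_request(api_key: str, prompt: str, model: str = "gemini-pro"):
    """Return the (url, body) pair anyone holding the key could send
    with any HTTP client -- the key alone authenticates the call."""
    url = GEMINI_ENDPOINT.format(model=model) + f"?key={api_key}"
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]})
    return url, body

# Placeholder key for illustration only.
url, body = build_gemini_request("AIza-PLACEHOLDER-KEY", "hello")
print(url)
```

This is why a key that was once safe to ship in client-side JavaScript is no longer harmless: the same string that loaded a map now authorizes billable LLM calls.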
"Depending on the model and context window, a threat actor maxing out API calls could generate thousands of dollars in charges per day on a single victim account," Truffle Security says.
The researchers warn that these API keys have sat exposed in public JavaScript code for years and have now quietly gained far more dangerous privileges without anyone noticing.
Truffle Security scanned the November 2025 Common Crawl dataset, a representative snapshot of a large swath of the most popular sites, and found more than 2,800 live Google API keys publicly exposed in their code.
Source: BleepingComputer