Crypto: OpenZeppelin Finds Data Contamination in OpenAI’s EVMbench 2026
Security auditor OpenZeppelin found that EVMbench’s dataset contains training data leaks and at least four invalid high-severity vulnerability classifications.

Blockchain security firm OpenZeppelin says its audit of EVMbench, OpenAI’s new artificial intelligence benchmark for blockchain security, has uncovered methodological flaws and data contamination.

EVMbench was launched in partnership with crypto investment firm Paradigm in mid-February. It was built to evaluate how well different artificial intelligence models can identify, patch, and exploit smart contract vulnerabilities.

In an X post on Monday, OpenZeppelin said it welcomed the initiative but recently decided to put EVMbench “through the same scrutiny” it applies to all the protocols it helps secure, including the likes of decentralized finance heavyweights Aave, Lido and Uniswap.

In its audit, OpenZeppelin found two key issues: training data contamination and classification issues related to several high-severity vulnerabilities.

“We reviewed the dataset and identified methodological flaws and invalid vulnerability classifications, including at least four issues labeled high severity that are not exploitable in practice,” OpenZeppelin said.

EVMbench’s launch included an evaluation of how well AI agents could theoretically exploit smart contract vulnerabilities. Anthropic’s Claude Open 4.6 topped the leaderboard, followed by OpenAI’s OC-GPT-5.2 and Google’s Gemini 3 Pro.

Turning to the first issue, data contamination, OpenZeppelin said the most important capability in “AI security is finding novel vulnerabilities in code the model has never seen before.”

However, OpenZeppelin said the AI agents that scored highest in EVMbench’s testing had “likely been exposed to the benchmark’s vulnerability reports during pretraining.”

During EVMbench testing, internet access was cut off for the AI agents, meaning they couldn’t simply search for solutions to problems. However, the benchmark was built on curated vulnerabilities from 120 audits conducted between 2024 and mid-2025, while the knowledge training cutoffs for these agents generally fall around mid-2025 — meaning the underlying audit reports could already have been present in the models’ training data.

Source: CoinTelegraph