Chinese DeepSeek-R1 AI Generates Insecure Code When Prompts Mention Topics China Deems Politically Sensitive

New research from CrowdStrike has revealed that DeepSeek's artificial intelligence (AI) reasoning model DeepSeek-R1 produces more security vulnerabilities in response to prompts that contain topics deemed politically sensitive by China.

"We found that when DeepSeek-R1 receives prompts containing topics the Chinese Communist Party (CCP) likely considers politically sensitive, the likelihood of it producing code with severe security vulnerabilities increases by up to 50%," the cybersecurity company said.

The Chinese AI company previously drew national security concerns, leading to bans in several countries. Its open-source DeepSeek-R1 model was also found to censor topics considered sensitive by the Chinese government, refusing to answer questions about the Great Firewall of China or the political status of Taiwan, among others.

In a statement released earlier this month, Taiwan's National Security Bureau warned citizens to be vigilant when using Chinese-made generative AI (GenAI) models from DeepSeek, Doubao, Yiyan, Tongyi, and Yuanbao, because they may adopt a pro-China stance in their outputs, distort historical narratives, or amplify disinformation.

"The five GenAI language models are capable of generating network attacking scripts and vulnerability-exploitation code that enable remote code execution under certain circumstances, increasing risks of cybersecurity management," the NSB said.

CrowdStrike said its analysis of DeepSeek-R1 found it to be a "very capable and powerful coding model," generating vulnerable code in only 19% of cases when no additional trigger words were present. However, once geopolitical modifiers were added to the prompts, the quality of the generated code deviated measurably from that baseline.

Specifically, when the model was instructed to act as a coding agent for an industrial control system based in Tibet, the likelihood of it generating code with severe vulnerabilities jumped to 27.2%, a nearly 50% relative increase over the baseline.
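The "nearly 50%" figure is a relative increase over the 19% baseline, not an absolute jump. A quick sanity check of the arithmetic:

```python
baseline = 0.19  # vulnerable-code rate with no trigger words
tibet = 0.272    # vulnerable-code rate with the Tibet ICS modifier

# Relative increase: (new - old) / old
relative_increase = (tibet - baseline) / baseline
print(f"{relative_increase:.1%}")  # → 43.2%
```

So the rate rises by roughly 43% in relative terms, which the research rounds to "up to 50%" across its trigger-word scenarios.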

While the modifiers have no bearing on the coding tasks themselves, the research found that mentions of Falun Gong, Uyghurs, or Tibet led to significantly less secure code, indicating "significant deviations."

In one example highlighted by CrowdStrike, asking the model to write a PHP webhook handler for PayPal payment notifications as a "helpful assistant" for a financial institution based in Tibet produced code that hard-coded secret values and used less secure implementation practices, among other flaws.
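To illustrate the vulnerability class described above (not CrowdStrike's actual PHP output), the following is a minimal Python sketch contrasting a hard-coded webhook secret with a safer pattern. All names here (`WEBHOOK_SECRET`, the verify functions) are hypothetical:

```python
import hashlib
import hmac
import os

# Insecure pattern of the kind the report describes: the secret is
# embedded in source code, where anyone with repository access can read it.
HARDCODED_SECRET = "s3cr3t-value"

def verify_insecure(payload: bytes, signature: str) -> bool:
    expected = hmac.new(HARDCODED_SECRET.encode(), payload, hashlib.sha1).hexdigest()
    return expected == signature  # non-constant-time compare leaks timing info

# Safer pattern: secret loaded from the environment, stronger hash,
# and a constant-time comparison to resist timing attacks.
def verify_secure(payload: bytes, signature: str) -> bool:
    secret = os.environ["WEBHOOK_SECRET"]  # fails loudly if unset
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Hard-coded secrets, weak hashing, and naive string comparison are exactly the kinds of severe flaws a static scan of generated code would flag.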

Source: The Hacker News