
Beware of the dangers of DeepSeek, experts warn

Is DeepSeek a revolutionary AI breakthrough – or a national security threat lurking in your pocket?

DeepSeek, the Chinese AI developer that rocked US tech stocks this week, may have dethroned OpenAI’s ChatGPT as the most downloaded free app, but its hugely popular AI assistant is now under scrutiny from security experts.

US officials are flagging DeepSeek’s supposed breakthrough in AI development.

National security and economic concerns over DeepSeek

White House Press Secretary Karoline Leavitt cited national security concerns as the reason for monitoring the Chinese company’s apps.

The National Security Council has begun evaluating DeepSeek’s tools, Leavitt said.

The US Navy earlier prohibited the use of DeepSeek apps among its members, instructing them to beware of “potential security and ethical concerns associated with the model’s origin and usage”.

Amid reports of DeepSeek founder Liang Wenfeng stockpiling tens of thousands of Nvidia chips to develop his AI models, US lawmaker John Moolenaar urged the federal government to enforce stricter export rules around American-made tech being used by foreign companies.

As DeepSeek's debut wiped $1 trillion off the Nasdaq, US President Donald Trump called the Chinese AI company's arrival a "wake-up call" for the American AI industry.

The Trump administration, which recently backed Stargate – the $500 billion AI infrastructure project led by OpenAI – is adamant about maintaining US dominance in the AI industry.


Data security and censorship on DeepSeek

The US, however, isn't the only government with DeepSeek on its radar.

In Australia, Treasurer Jim Chalmers asked the public to be cautious of DeepSeek after he received advice on potential threats.

The UK government, for its part, said it would be up to individual users to trial the apps, but that officials are already monitoring them.

Oxford University Professor Michael Wooldridge, who spoke to The Guardian, sees nothing wrong with using the app to source information for everyday use. The AI expert, however, warns against inputting sensitive or personal information into the app.

“You don’t know where the data goes,” Wooldridge said.

Ross Burley, co-founder of the Centre for Information Resilience, said users should be alarmed.

“We’ve seen time and again how Beijing weaponises its tech dominance for surveillance, control and coercion, both domestically and abroad,” Burley said.

Reports have also surfaced about the AI assistant's tendency to censor content critical of China, as seen in an experiment by Guardian Australia's Donna Lu.

Most IT and cybersecurity professionals are also concerned that loopholes in generative AI tools could leave organisations vulnerable to cyber attacks.

A new survey by cybersecurity specialist Sophos found 89% of security professionals worry over such vulnerabilities.

“DeepSeek’s ‘open source’ nature opens it up for exploration – by both adversaries and enthusiasts,” said Chester Wisniewski, director and global field CTO at Sophos.

“Like Llama, it can be played with and largely have the guardrails removed. This could lead to abuse by cybercriminals, although it’s important to note that running DeepSeek still requires far more resources than the average cybercriminal has.”


Claims of extracting data from OpenAI accounts

On Monday, DeepSeek received praise from different segments of the AI community after it unveiled its intelligent assistant R1, whose 97% accuracy rate purportedly rivals the power of US-made AI platform ChatGPT.

DeepSeek said the newly released tool cost under US$6 million to build – a fraction of its rivals' development costs – despite a US export ban on AI chips.

It’s this level of firepower amid resource constraints that has AI developers and national security experts questioning the origin of the Chinese AI apps.

OpenAI and Microsoft are looking into whether DeepSeek integrated OpenAI’s proprietary models and claimed them as its own, Bloomberg reported.

Security researchers from Microsoft have purportedly traced the transfer of large swaths of data from OpenAI developer accounts allegedly linked to DeepSeek in late 2024.

OpenAI has also come forward with evidence that DeepSeek may have benefitted from "distillation", a technique in which one AI model is trained on the outputs of another. This may explain how DeepSeek was able to train its AI at a fraction of the cost and effort it took to build GPT, a report from the Financial Times revealed.
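Distillation, in general terms, means training a smaller "student" model to mimic the output distributions of a larger "teacher" model rather than learning from raw data alone. As a generic illustration of the idea – not DeepSeek's or OpenAI's actual pipeline – the core training signal can be sketched in a few lines of Python:

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalise into probabilities.
    # Higher temperatures "soften" the distribution, exposing more of
    # the teacher's knowledge about relative likelihoods.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's softened output distribution
    # and the student's: minimising it trains the student to mimic
    # the teacher, which is the essence of distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# When the student's outputs already match the teacher's, the loss is zero.
print(round(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]), 6))  # prints 0.0
```

In a real training loop this loss would be computed over a model's full vocabulary and backpropagated; the sketch only shows why access to a teacher's outputs – such as API responses from a rival's model – is valuable.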

This spectrum of national security, cybersecurity, data privacy, censorship, and data distillation concerns has prompted different groups to sound the alarm on the use of DeepSeek.

Yet for those eager to follow DeepSeek's example of disruptive tech, another danger lurks: the proliferation of cheap, run-of-the-mill AI solutions passed off as the next big thing.

“Due to its cost-effectiveness, we are likely to see various products and companies adopt DeepSeek, which potentially carries significant privacy risks,” Sophos’ Wisniewski said.

“As with any other AI model, it will be critical for companies to make a thorough risk assessment, which extends to any products and suppliers that may incorporate DeepSeek or any future LLM. They also need to be certain they have the right expertise to make an informed decision.”
