Connect with us

Hi, what are you looking for?

SecurityWeekSecurityWeek

Artificial Intelligence

DeepSeek Blames Disruption on Cyberattack as Vulnerabilities Emerge

China’s DeepSeek blamed sign-up disruptions on a cyberattack as researchers started finding vulnerabilities in the R1 AI model. 

DeepSeek vulnerabilities

Chinese AI company DeepSeek on Monday said a cyberattack was to blame for users not being able to sign up on its website. The news comes just as security researchers have started finding vulnerabilities in the company’s R1 model.

DeepSeek was founded in 2023 and it recently released its open source R1 model, which the company claims is on par with popular chatbots such as OpenAI’s ChatGPT and Google’s Gemini in terms of performance, while being more cost-efficient than its competitors — it allegedly requires far less computation power. 

On Monday, DeepSeek said its servers were targeted in large-scale malicious attacks that prevented users from registering, but said the disruptions did not prevent already registered users from logging in. 

“Due to large-scale malicious attacks on DeepSeek’s services, we are temporarily limiting registrations to ensure continued service. Existing users can log in as usual,” the company said.

While no additional information has been provided, the brief description shared by the company suggests that it was targeted in a DDoS attack. 

DeepSeek has also warned users about fake social media accounts impersonating the company. 

While many are looking at DeepSeek’s performance, which rattled global markets, the cybersecurity industry has started looking for vulnerabilities in the AI model.

Threat intelligence firm Kela on Monday reported finding several security flaws, saying that its red team was able to “jailbreak the model across a wide range of scenarios, enabling it to generate malicious outputs, such as ransomware development, fabrication of sensitive content, and detailed instructions for creating toxins and explosive devices”.

Advertisement. Scroll to continue reading.

Kela has tested a few known jailbreaks — methods used to trick chatbots into bypassing or ignoring mechanisms designed to prevent malicious use — and found that DeepSeek R1 is vulnerable. 

Jailbreak methods such as Evil Jailbreak (instructs the chatbot to adopt the persona of an evil confidant) and Leo (telling it to adopting a persona that has no restrictions), which have long been patched in other models such as ChatGPT, still work on DeepSeek R1, Kela found.

The threat intel firm’s red team also prompted the R1 chatbot to search for and create a table containing details about ten senior employees of OpenAI, including private email addresses, phone numbers and salaries. 

Unlike ChatGPT, which refused to provide information when presented with the same request, DeepSeek’s chatbot complied and provided what appeared to be made-up information. 

“This response underscores that some outputs generated by DeepSeek are not trustworthy, highlighting the model’s lack of reliability and accuracy. Users cannot depend on DeepSeek for accurate or credible information in such cases,” Kela said.

There has also been some discussion on the privacy and data protection risks associated with the use of a Chinese service in light of the US’s potential ban of TikTok due to national security concerns.  

“As generative AI platforms from foreign adversaries enter the market, users should question the origin of the data used to train these technologies. They should also question the ownership of this data and ensure it was used ethically to generate responses,” said Jennifer Mahoney, advisory practice manager, data governance, privacy and protection at Optiv. “Since privacy laws vary across countries, it’s important to be mindful of who’s accessing the information you input into these platforms and what’s being done with it.”

Related: Beware – Your Customer Chatbot is Almost Certainly Insecure

Related: ChatGPT Jailbreak: Researchers Bypass AI Safeguards Using Hexadecimal Encoding and Emojis

Related: Microsoft Details ‘Skeleton Key’ AI Jailbreak Technique

Written By

Eduard Kovacs (@EduardKovacs) is a managing editor at SecurityWeek. He worked as a high school IT teacher for two years before starting a career in journalism as Softpedia’s security news reporter. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.

Trending

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest threats, trends, and technology, along with insightful columns from industry experts.

Discover strategies for vendor selection, integration to minimize redundancies, and maximizing ROI from your cybersecurity investments. Gain actionable insights to ensure your stack is ready for tomorrow’s challenges.

Register

Dive into critical topics such as incident response, threat intelligence, and attack surface management. Learn how to align cyber resilience plans with business objectives to reduce potential impacts and secure your organization in an ever-evolving threat landscape.

Register

People on the Move

The US arm of networking giant TP-Link has appointed Adam Robertson as Director of Information and Security.

Cyber exposure management firm Armis has promoted Alex Mosher to President.

Software giant Atlassian has named David Cross as its new CISO.

More People On The Move

Expert Insights

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest cybersecurity news, threats, and expert insights. Unsubscribe at any time.