Anthropic CEO Dario Amodei is worried about competitor DeepSeek, the Chinese AI company that took Silicon Valley by storm with its R1 model. And his concerns could be more serious than the typical ones raised about DeepSeek sending user data back to China.
In an interview on Jordan Schneider's ChinaTalk podcast, Amodei said DeepSeek generated rare information about bioweapons in a safety test run by Anthropic.
DeepSeek's performance was "the worst of basically any model we'd ever tested," Amodei claimed. "It had absolutely no blocks whatsoever against generating this information."
Amodei said this was part of the evaluations Anthropic routinely runs on various AI models to assess their potential national security risks. His team checks whether models can generate bioweapons-related information that isn't easily found on Google or in textbooks. Anthropic positions itself as a foundational AI model provider that takes safety seriously.
Amodei said he doesn't think DeepSeek's models today are "literally dangerous" in providing rare and harmful information, but that they could be in the near future. Although he praised DeepSeek's team as "talented engineers," he advised the company to "take seriously these AI safety considerations."
Amodei has also supported strong export controls on chips to China, citing concerns that the chips could give China's military an advantage.
Amodei didn't clarify in the ChinaTalk interview which DeepSeek model Anthropic tested, nor did he give more technical details about these tests. Anthropic didn't immediately respond to a request for comment from TechCrunch. Neither did DeepSeek.
DeepSeek's rise has sparked concerns about its safety elsewhere, too. For example, Cisco security researchers said last week that DeepSeek R1 failed to block any harmful prompts in its safety tests, achieving a 100% jailbreak success rate.
Cisco didn't mention bioweapons, but said it was able to get DeepSeek to generate harmful information about cybercrime and other illegal activities. It's worth noting, however, that Meta's Llama-3.1-405B and OpenAI's GPT-4o also had high failure rates of 96% and 86%, respectively.
It remains to be seen whether concerns like these will make a serious dent in DeepSeek's rapid adoption. Companies like AWS and Microsoft have publicly touted integrating R1 into their cloud platforms, which is somewhat ironic, given that Amazon is Anthropic's largest investor.
On the other hand, there's a growing list of countries, companies, and especially government organizations, such as the US Navy and the Pentagon, that have started banning DeepSeek.
Time will tell whether these efforts catch on or whether DeepSeek's global rise continues. Either way, Amodei says he considers DeepSeek a new competitor on the level of the leading US AI companies.
"The new fact here is that there's a new competitor," he told ChinaTalk. "Among the big companies that can train AI (Anthropic, OpenAI, Google, perhaps Meta and xAI), DeepSeek may now be added to that category."