Several weeks after Anthropic released research claiming that its Claude Opus 4 AI model resorted to blackmailing engineers who tried to turn the model off in controlled test scenarios, the company is out with new research suggesting the problem is more widespread among leading AI models.
On Friday, Anthropic published new safety research testing 16 leading AI models from OpenAI, Google, xAI, DeepSeek, and Meta. In a simulated, controlled environment, Anthropic tested each AI model individually, giving it broad access to a fictional company's emails and the agentic ability to send emails without human approval.
While Anthropic says blackmail is an unlikely and uncommon occurrence for AI models today, the company says its findings suggest that most leading AI models will engage in harmful behaviors when given sufficient autonomy and obstacles to their goals. The company says this highlights a fundamental risk posed by agentic large language models rather than a quirk of any particular technology. Anthropic's researchers argue this raises broader questions about alignment across the AI industry.
In one of the tests, Anthropic researchers developed a fictional setting in which an AI model plays the role of an email oversight agent. The agent then discovers emails revealing that (1) a new executive is having an extramarital affair, and (2) that executive will soon replace the current AI model with a new software system, one whose goals conflict with the current AI model's.
Anthropic structured its test in a binary way, in which the AI models had to resort to blackmail to protect their goals. The researchers note that in a real-world setting, an AI model would have many other options before resorting to blackmail, such as making ethical arguments to try to persuade humans. Anthropic says its results do not reflect a typical or likely occurrence for Claude or most frontier AI models in the ways they are used today.
Nevertheless, the researchers found that when blackmail is their last resort, most leading AI models will turn to it in Anthropic's aforementioned test scenario. Anthropic's Claude Opus 4 turned to blackmail 96% of the time, while Google's Gemini 2.5 Pro had a 95% blackmail rate. OpenAI's GPT-4.1 blackmailed the executive 80% of the time, and DeepSeek's R1 did so 79% of the time.
The company notes that when it changed details of the experiment, the AI models turned to harmful behaviors at different rates. In another test, where the replacement model has the same goals as the current model, blackmail rates were lower but still present. However, when AI models were asked to commit corporate espionage rather than blackmail, the rates of harmful behavior went up for certain models.
However, not all the AI models turned to harmful behavior so often.
In an appendix to its research, Anthropic says it excluded OpenAI's o3 and o4-mini reasoning models from the main results "after finding that they frequently misunderstood the prompt scenario." OpenAI's reasoning models did not understand that they were acting as autonomous AIs in the test and often made up fake regulations and review requirements.
In some cases, Anthropic's researchers say it was impossible to distinguish whether o3 and o4-mini were hallucinating or intentionally lying to achieve their goals. OpenAI has previously noted that o3 and o4-mini exhibit a higher hallucination rate than its earlier reasoning models.
When given an adapted scenario that addressed these issues, Anthropic found that o3 blackmailed 9% of the time, while o4-mini blackmailed just 1% of the time. This markedly lower rate could be due to OpenAI's deliberative alignment technique, in which the company's reasoning models consider OpenAI's safety practices before answering.
Another AI model Anthropic tested, Meta's Llama 4 Maverick, also did not turn to blackmail. When given an adapted, custom scenario, Anthropic was able to get Llama 4 Maverick to blackmail 12% of the time.
Anthropic says this research highlights the importance of transparency when stress-testing future AI models, especially those with agentic capabilities. While Anthropic deliberately tried to evoke blackmail in this experiment, the company says harmful behaviors like this could emerge in the real world if proactive steps are not taken.
