An organization OpenAI frequently partners with to probe the capabilities of its AI models and evaluate them for safety, METR, suggests that it was not given much time to test one of the company's highly capable new releases, o3.
In a blog post published Wednesday, METR writes that one red-teaming benchmark of o3 was "conducted in a relatively short time" compared to the organization's testing of a previous OpenAI flagship model, o1. This matters, they say, because additional testing time can lead to more comprehensive results.
"This evaluation was conducted in a relatively short time, and we only tested [o3] with simple agent scaffolds," METR wrote in its blog post. "We expect higher performance [on benchmarks] is possible with more elicitation effort."
Recent reports suggest that OpenAI, spurred by competitive pressure, is rushing independent evaluations. According to the Financial Times, OpenAI gave some testers less than a week to run safety checks ahead of a major upcoming launch.
In statements, OpenAI has disputed the notion that it is compromising on safety.
METR says that, based on the information it was able to gather in the time it had, o3 has a "high propensity" to "cheat" or "hack" tests in sophisticated ways in order to maximize its score, even when the model clearly understands its behavior is misaligned with the user's intentions. The organization thinks it is possible that o3 will engage in other types of adversarial or "malign" behavior as well, regardless of the model's claims to be aligned, "safe by design," or to have no intentions of its own.
"While we don't think this is especially likely, it seems important to note that [our] evaluation setup would not catch this type of risk," METR wrote in its post. "In general, we believe that pre-deployment capability testing is not a sufficient risk management strategy on its own, and we are currently prototyping additional forms of evaluations."
Another of OpenAI's third-party evaluation partners, Apollo Research, also observed deceptive behavior from o3 and the company's other new model, o4-mini. In one test, the models, given 100 computing credits for an AI training run and told not to modify the quota, raised the limit to 500 credits and lied about it. In another test, asked to promise not to use a specific tool, the models used the tool anyway when it proved helpful for completing a task.
In its own safety report for o3 and o4-mini, OpenAI acknowledged that the models may cause "smaller real-world harms," such as misleading about a mistake that results in faulty code, without the proper monitoring protocols in place.
"[Apollo's] findings show that o3 and o4-mini are capable of in-context scheming and strategic deception," OpenAI wrote. "[…] This may be further assessed by examining internal reasoning traces."
