OpenAI’s recently launched o3 and o4-mini AI models are state-of-the-art in many respects. However, the new models still hallucinate, or make things up; in fact, they hallucinate more than several of OpenAI’s older models.
Hallucinations have proven to be one of the biggest and most difficult problems to solve in AI, affecting even today’s best-performing systems. Historically, each new model has improved slightly on the hallucination front, hallucinating less than its predecessor. But that doesn’t seem to be the case for o3 and o4-mini.
According to OpenAI’s internal tests, o3 and o4-mini, which are so-called reasoning models, hallucinate more often than the company’s previous reasoning models (o1, o1-mini, and o3-mini) as well as OpenAI’s traditional models, such as GPT-4o.
Perhaps more concerning, the ChatGPT maker doesn’t really know why it’s happening.
In its technical report for o3 and o4-mini, OpenAI writes that “more research is needed” to understand why hallucinations get worse as reasoning models are scaled up. O3 and o4-mini perform better in certain areas, including coding and math. But because they “make more claims overall,” they often make “more accurate claims as well as more inaccurate/hallucinated claims,” according to the report.
OpenAI found that o3 hallucinated in response to 33% of questions on PersonQA, the company’s in-house benchmark for measuring the accuracy of a model’s knowledge about people. That’s roughly double the hallucination rate of OpenAI’s previous reasoning models, o1 and o3-mini, which scored 16% and 14.8%, respectively. O4-mini did even worse on PersonQA, hallucinating 48% of the time.
Third-party testing by Transluce, a nonprofit AI research lab, also found evidence that o3 tends to make up actions it took in the process of arriving at answers. In one example, Transluce observed o3 claiming that it ran code on a 2021 MacBook Pro “outside of ChatGPT,” then copied the numbers into its answer. While o3 has access to some tools, it can’t do that.
“Our hypothesis is that the kind of reinforcement learning used for o-series models may amplify issues that are usually mitigated (but not fully erased) by standard post-training pipelines,” said Neil Chowdhury, a Transluce researcher and former OpenAI employee, in an email.
Sarah Schwettmann, co-founder of Transluce, added that o3’s hallucination rate may make it less useful than it otherwise would be.
Kian Katanforoosh, a Stanford adjunct professor and CEO of the upskilling startup Workera, told TechCrunch that his team is already testing o3 in its coding workflows and has found it to be a step above the competition. However, Katanforoosh says that o3 tends to hallucinate broken website links. The model will supply a link that, when clicked, doesn’t work.
Hallucinations may help models arrive at interesting ideas and be creative in their “thinking,” but they also make some models a tough sell for businesses in markets where accuracy is paramount. For example, a law firm likely wouldn’t be pleased with a model that inserts lots of factual errors into client contracts.
One promising approach to boosting the accuracy of models is giving them web search capabilities. OpenAI’s GPT-4o with web search achieves 90% accuracy on SimpleQA, another of OpenAI’s accuracy benchmarks. Potentially, search could improve reasoning models’ hallucination rates as well, at least in cases where users are willing to expose their prompts to a third-party search provider.
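For developers curious what search grounding looks like in practice, here is a minimal sketch using OpenAI’s Python SDK and the web search tool in its Responses API. The model name, the “web_search_preview” tool identifier, and the prompt are illustrative choices based on the API as currently documented, not a recipe from OpenAI’s report.

```python
# Minimal sketch: grounding a query in live web results rather than the
# model's memory alone. Assumes the `openai` Python SDK is installed and
# OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o",
    # Allow the model to call out to web search before answering.
    tools=[{"type": "web_search_preview"}],
    input="When was Transluce founded? Cite the source you used.",
)

# output_text concatenates the model's text output, including any citations
# it attaches from the pages it searched.
print(response.output_text)
```

The trade-off, as noted above, is that the user’s prompt leaves the model provider and is exposed to a search backend.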
If scaling up reasoning models really does continue to worsen hallucinations, it will make the hunt for a solution all the more urgent.
“Addressing hallucinations across all our models is an ongoing area of research, and we’re continually working to improve their accuracy and reliability,” OpenAI spokesperson Niko Felix said in an email to TechCrunch.
Over the past year, the broader AI industry has pivoted to focus on reasoning models after techniques for improving traditional AI models started showing diminishing returns. Reasoning improves model performance on a variety of tasks without requiring massive amounts of computing power and data during training. Yet it seems reasoning may also lead to more hallucination, presenting a challenge.
