Anthropic CEO Dario Amodei believes today’s AI models hallucinate, or make things up and present them as if they’re true, at a lower rate than humans do, he said during a press briefing at Anthropic’s first developer event, Code with Claude, in San Francisco.
Amodei said all this in the midst of a larger point he was making: that AI hallucinations are not a limitation on Anthropic’s path to AGI, meaning AI systems with human-level intelligence or better.
“It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said, responding to TechCrunch’s question.
Anthropic’s chief executive is one of the most bullish leaders in the industry on the prospect of AI models achieving AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday’s press briefing, the Anthropic CEO said he was seeing steady progress toward that goal, noting that “the water is rising everywhere.”
“Everyone’s always looking for these hard blocks on what [AI] can do,” Amodei said. “They’re nowhere to be seen. There’s no such thing.”
Other AI leaders believe hallucination presents a major obstacle to achieving AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today’s AI models have too many “holes” and get too many obvious questions wrong. For example, earlier this month, a lawyer representing Anthropic was forced to apologize in court after using Claude to generate citations in a court filing; the AI chatbot hallucinated, getting names and titles wrong.
Verifying Amodei’s claim is difficult, largely because most hallucination benchmarks pit AI models against each other; they don’t compare models to humans. Certain techniques seem to help lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI’s GPT-4.5, have notably lower hallucination rates on benchmarks compared to earlier generations of systems.
However, there is also evidence that hallucinations are getting worse in advanced AI models. OpenAI’s o3 and o4-mini models have higher hallucination rates than OpenAI’s previous-generation reasoning models, and the company doesn’t really understand why.
Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and people in all types of professions make mistakes all the time. The fact that AI makes mistakes, too, is not a knock against its intelligence, according to Amodei. However, Anthropic’s chief executive acknowledged that the confidence with which AI models present untrue things as facts could be a problem.
In fact, Anthropic has done a fair amount of research on the tendency of AI models to deceive humans, a problem that seemed particularly prevalent in the company’s recently launched Claude Opus 4. Apollo Research, a safety institute that tested an early version of the model, went as far as to suggest that Anthropic shouldn’t have released that early version. Anthropic said it came up with some mitigations that appeared to address the issues Apollo raised.
Amodei’s comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. An AI that hallucinates may fall short of AGI by many people’s definition, however.
