The Arc Prize Foundation, a nonprofit co-founded by prominent AI researcher François Chollet, announced in a blog post on Monday that it has created a new, challenging test to measure the general intelligence of AI models.
So far, the new test, called ARC-AGI-2, has stumped most models.
Reasoning AI models such as OpenAI's o1-pro and DeepSeek's R1 score between 1% and 1.3% on ARC-AGI-2, according to the Arc Prize leaderboard. Powerful non-reasoning models, including GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Flash, score around 1%.
ARC-AGI tests consist of puzzle-like problems in which an AI has to identify visual patterns from a collection of differently colored grids and generate the correct "answer" grid. The problems were designed to force an AI to adapt to new problems it has never seen before.
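To make the format concrete, here is a minimal, hypothetical sketch of an ARC-style task: grids are lists of color indices, and the solver must infer a hidden transformation rule from a training pair. The rule chosen here (mirror the grid left-to-right) is an invented toy example, not an actual ARC-AGI-2 puzzle.

```python
# Hypothetical ARC-style task. Each cell holds a color index (0 = background).
# The hidden rule in this toy example is assumed to be "mirror horizontally".

def solve_mirror(grid):
    """Apply the assumed rule: flip each row left-to-right."""
    return [row[::-1] for row in grid]

# One training input/output pair demonstrating the hidden rule.
train_input = [[1, 0, 0],
               [0, 2, 0],
               [0, 0, 3]]
train_output = [[0, 0, 1],
                [0, 2, 0],
                [3, 0, 0]]

# A solver that has inferred the rule reproduces the training output
# and can then apply it to a previously unseen test grid.
assert solve_mirror(train_input) == train_output
print(solve_mirror([[4, 5], [0, 6]]))  # unseen test grid
```

Real ARC-AGI-2 tasks use larger grids and far less obvious rules, but the structure is the same: a few demonstration pairs, then a novel input the model must answer.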
The Arc Prize Foundation had over 400 people take ARC-AGI-2 to establish a human baseline. On average, these "panels" of people answered 60% of the test questions correctly, much better than any of the models' scores.
In a post on X, Chollet claimed that ARC-AGI-2 is a better measure of an AI model's actual intelligence than the first iteration of the test, ARC-AGI-1. The Arc Prize Foundation's tests aim to evaluate whether an AI system can efficiently acquire new skills outside the data it was trained on.
Chollet said that, unlike ARC-AGI-1, the new test prevents AI models from relying on "brute force" (extensive computing power) to find solutions. Chollet previously acknowledged that this was a major flaw of ARC-AGI-1.
To address the first test's flaws, ARC-AGI-2 introduces a new metric: efficiency. It also requires models to interpret patterns on the fly instead of relying on memorization.
"Intelligence is not solely defined by the ability to solve problems or achieve high scores," wrote Arc Prize Foundation co-founder Greg Kamradt in a blog post. "The efficiency with which those capabilities are acquired and deployed is a critical, defining component. The core question being asked is not just, 'Can AI acquire [the] skill to solve a task?' but also, 'At what efficiency or cost?'"
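The cost dimension Kamradt describes can be made concrete with a back-of-the-envelope metric: dollars of compute spent per percentage point of accuracy. This is a hypothetical illustration, not an official Arc Prize scoring formula; the only figures taken from the article are o3 (low)'s roughly 4% score at about $200 of compute per task.

```python
# Hypothetical efficiency sketch (not the official Arc Prize metric):
# how many dollars of compute does each percentage point of accuracy cost?

def cost_per_point(score_pct, cost_per_task_usd):
    """Compute dollars spent per percentage point of test accuracy."""
    return cost_per_task_usd / score_pct

# o3 (low): ~4% on ARC-AGI-2 at ~$200 of compute per task (per the article).
print(cost_per_point(4.0, 200.0))  # prints 50.0
```

By this rough measure, a model scoring twice as high at the same per-task cost would be twice as efficient, which is the kind of trade-off the new benchmark is meant to surface.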
ARC-AGI-1 went unbeaten for about five years until December 2024, when OpenAI released its advanced reasoning model o3, which outperformed all other AI models and matched human performance on the evaluation. However, as we noted at the time, o3's performance gains on ARC-AGI-1 came at a hefty price.
The version of o3 that first reached new heights on ARC-AGI-1, o3 (low), scored 75.7% on that test, but manages only 4% on ARC-AGI-2 while using $200 worth of computing power per task.
The arrival of ARC-AGI-2 comes as many in the tech industry are calling for new, unsaturated benchmarks to measure AI progress. Hugging Face co-founder Thomas Wolf recently told TechCrunch that the AI industry lacks sufficient tests to measure the key traits of so-called artificial general intelligence, including creativity.
Alongside the new benchmark, the Arc Prize Foundation announced a new Arc Prize 2025 contest, challenging developers to reach 85% accuracy on the ARC-AGI-2 test while spending only $0.42 per task.