Every Sunday, NPR host Will Shortz, The New York Times' crossword puzzle guru, quizzes thousands of listeners in a long-running segment called the Sunday Puzzle. While written to be solvable without too much prior knowledge, the brainteasers are usually challenging even for skilled contestants.
That's why some experts believe they're a promising way to test the limits of AI's problem-solving abilities.
In a recent study, a team of researchers from Wellesley College, Oberlin College, the University of Texas at Austin, Northeastern University, Charles University, and the startup Cursor created an AI benchmark using riddles from Sunday Puzzle episodes. The team says their test has surfaced surprising insights, such as that reasoning models, OpenAI's o1 among them, sometimes "give up" and provide answers they know aren't correct.
"We wanted to develop a benchmark with problems that humans can understand with only general knowledge," Arjun Guha, a computer science faculty member at Northeastern and one of the co-authors of the study, told TechCrunch.
The AI industry is in a bit of a benchmarking quandary at the moment. Most of the tests commonly used to evaluate AI models probe for skills, such as competency on doctorate-level math and science questions, that aren't relevant to the average user. Meanwhile, many benchmarks, even ones released relatively recently, are quickly approaching the saturation point.
The advantage of a public radio quiz game like the Sunday Puzzle is that it doesn't test for esoteric knowledge, and the challenges are phrased so that models can't draw on rote memory to solve them, Guha explained.
"I think what makes these problems hard is that it's really difficult to make meaningful progress on a problem until you solve it; that's when everything clicks together at once," Guha said. "That requires a combination of insight and a process of elimination."
No benchmark is perfect, of course. The Sunday Puzzle is U.S.-centric and English-only. And because the quizzes are publicly available, it's possible that models trained on them can "cheat" in a sense, although Guha says he hasn't seen evidence of this.
"New questions are released every week, and we can expect the latest questions to be truly unseen," he added. "We intend to keep the benchmark fresh and track how model performance changes over time."
On the researchers' benchmark, which consists of around 600 Sunday Puzzle riddles, reasoning models such as o1 and DeepSeek's R1 far outperform the rest. Reasoning models thoroughly check their own work before giving results, which helps them avoid some of the pitfalls that normally trip up AI models. The trade-off is that reasoning models take somewhat longer to arrive at solutions, typically seconds to minutes longer.
At least one model, DeepSeek's R1, gives solutions it knows to be wrong for some of the Sunday Puzzle questions. R1 will state verbatim that it "gives up," followed by an incorrect answer chosen seemingly at random, behavior any human can certainly relate to.
The models make other bizarre choices as well, such as giving a wrong answer only to retract it immediately, attempt to tease out a better one, and fail again. They also get stuck "thinking" forever, give nonsensical explanations for answers, or arrive at a correct answer right away but then go on to consider alternatives for no obvious reason.
"On hard problems, R1 literally says that it's getting 'frustrated,'" Guha said. "It was funny to see how a model emulates what a human might say. It remains to be seen how 'frustration' in reasoning can affect the quality of a model's results."
The current best-performing model on the benchmark is o1 with a score of 59%, followed by the recently released o3-mini set to high "reasoning effort" (47%). (R1 scored 35%.) As a next step, the researchers plan to broaden their testing to additional reasoning models, which they hope will help identify areas where these models could be improved.
"You don't need a doctorate to be good at reasoning, so it should be possible to design benchmarks that don't require doctorate-level knowledge," Guha said. "A benchmark with broader access allows a wider set of researchers to understand and analyze the results, which may in turn lead to better solutions in the future. Furthermore, as state-of-the-art models are increasingly deployed in settings that affect everyone, we believe everyone should be able to intuit what these models are, and aren't, capable of."