A new AI coding challenge has revealed its first winner, and it sets a new bar for AI software engineers.
On Wednesday at 5 p.m. PST, the nonprofit Laude Institute announced the first winner of the K Prize, a multi-round AI coding challenge launched by Databricks and Perplexity co-founder Andy Konwinski. The winner is a Brazilian engineer named Eduardo Rocha de Andrade, who will receive $50,000 for the prize. But more striking than the win itself was his final score: he took the prize with correct answers to just 7.5% of the questions on the test.
“We’re glad we built a benchmark that is actually hard,” Konwinski said. “Benchmarks should be hard if they’re going to matter,” he continued, adding: “Scores would be different if the big labs had entered with their biggest models. But that’s kind of the point.”
Konwinski has pledged $1 million to the first open-source model that can score higher than 90% on the test.
Similar to the well-known SWE-Bench system, the K Prize tests models against flagged issues from GitHub as a measure of how well models can handle real-world programming problems. But while SWE-Bench is based on a fixed set of problems that models can train against, the K Prize is designed as a “contamination-free version of SWE-Bench,” using a timed entry system to guard against any benchmark-specific training. For round one, model submissions were due by March 12th. The K Prize organizers then built the test using only GitHub issues flagged after that date.
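To make that collection step concrete, here is a minimal sketch, not the K Prize’s actual tooling, of how one might gather only GitHub issues created after the round-one cutoff via the public GitHub search API; the repository name, label, and helper function are illustrative assumptions.

import requests

CUTOFF = "2025-03-12"  # round-one deadline; submissions were frozen before this date

def fetch_fresh_issues(repo: str, label: str = "bug", per_page: int = 50) -> list[dict]:
    """Return issues in `repo` created strictly after CUTOFF."""
    query = f"repo:{repo} is:issue label:{label} created:>{CUTOFF}"
    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": query, "per_page": per_page},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["items"]

# Example: list post-cutoff bug reports from an illustrative repository.
for issue in fetch_fresh_issues("pallets/flask"):
    print(issue["number"], issue["title"])

The design point is the same one the organizers rely on: because the cutoff postdates every submission, no model in the competition could have seen these issues during training.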
The 7.5% top score stands in marked contrast to SWE-Bench itself, which currently shows a 75% top score on its easier “Verified” test and 34% on its harder “Full” test. Konwinski still isn’t sure whether the disparity is due to contamination on SWE-Bench or simply to the challenge of collecting new issues from GitHub, but he expects the K Prize to answer the question soon.
“As we get more runs of the thing, we’ll have a better sense,” he told TechCrunch, “because we expect people to adapt to the dynamics of competing every few months.”
It might seem like an odd place to fall short, given the wide range of AI coding tools already publicly available. But with benchmarks becoming too easy, many critics see projects like the K Prize as a necessary step toward solving AI’s growing evaluation problem.
“I’m quite optimistic about building new tests for existing benchmarks,” said Princeton researcher Sayash Kapoor, who put forward a similar idea in a recent paper. “Without such experiments, we can’t actually tell whether the issue is contamination, or even just targeting the leaderboard with a human in the loop.”
For Konwinski, it’s not just a better benchmark but an open challenge to the rest of the industry. “If you listen to the hype, it’s like we should be seeing AI doctors and AI lawyers and AI software engineers, and that’s just not true,” he says. “If we can’t even get more than 10% on a contamination-free SWE-Bench, that’s the reality check for me.”
