AI models from OpenAI, Anthropic, and other top AI labs are increasingly being used to assist with programming tasks. Google CEO Sundar Pichai said in October that 25% of new code at the company is generated by AI, and Meta CEO Mark Zuckerberg has expressed ambitions to widely deploy AI coding models within the social media giant.
Yet even some of today's best models struggle to resolve software bugs that wouldn't trip up experienced developers.
A new study from Microsoft Research, Microsoft's R&D division, reveals that models including Anthropic's Claude 3.7 Sonnet and OpenAI's o3-mini fail to debug many issues in a software development benchmark. The results are a sobering reminder that, despite bold pronouncements from companies like OpenAI, AI still can't match humans in domains such as coding.
The study's co-authors tested nine different models as the backbone of a "single prompt-based agent" that had access to a number of debugging tools, including a Python debugger. They tasked this agent with solving a curated set of 300 software debugging tasks from SWE-bench Lite.
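The study does not publish its harness here, but the basic shape of such a tool-using debug agent is an observe-then-act loop: run the tests, inspect the failure with a debugger, propose a patch, repeat. Below is a minimal, self-contained sketch under that assumption; every name (`Bug`, `run_tests`, `use_debugger`, `propose_fix`, `debug_agent`) is hypothetical, and toy stand-ins replace the real model, test suite, and `pdb` session.

```python
# Hypothetical sketch of a tool-using debug agent loop. All names are
# invented for illustration; the real agent would call an LLM and drive
# an actual Python debugger (e.g. pdb) against a repository's test suite.
from dataclasses import dataclass


@dataclass
class Bug:
    """A debugging task: a description plus whether the patch landed."""
    description: str
    fixed: bool = False


def run_tests(bug: Bug) -> bool:
    """Stand-in for the repo's test suite: passes once the bug is fixed."""
    return bug.fixed


def use_debugger(bug: Bug) -> str:
    """Stand-in for a debugger session: returns a trace to reason over."""
    return f"trace for: {bug.description}"


def propose_fix(trace: str, bug: Bug) -> None:
    """Stand-in for the model proposing a patch based on the trace."""
    if trace:  # a real model would reason over the trace contents
        bug.fixed = True


def debug_agent(bug: Bug, max_steps: int = 3) -> bool:
    """Gather information with tools, then propose a fix, until tests pass."""
    for _ in range(max_steps):
        if run_tests(bug):
            return True
        trace = use_debugger(bug)
        propose_fix(trace, bug)
    return run_tests(bug)


if __name__ == "__main__":
    bug = Bug("off-by-one in pagination")
    print(debug_agent(bug))  # prints True once the toy fix is applied
```

The point of the loop structure is the one the study highlights: success depends on the model choosing the right tool at each step, which is exactly where the evaluated models often stumbled.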
According to the co-authors, even when equipped with stronger and more recent models, their agent rarely completed more than half of the debugging tasks successfully. Claude 3.7 Sonnet had the highest average success rate (48.4%), followed by OpenAI's o1 (30.2%) and o3-mini (22.1%).
Why the underwhelming performance? Some models struggled to use the debugging tools available to them and to understand how different tools might help with different issues. The bigger problem, though, was data scarcity, according to the co-authors. They suggest that there's not enough data representing "sequential decision-making processes" (that is, human debugging traces) in current models' training data.
"We strongly believe that training or fine-tuning [models] can make them better interactive debuggers," wrote the co-authors in their study. "However, this will require specialized data to fulfill such model training, for example, trajectory data that records agents interacting with a debugger to collect necessary information before suggesting a bug fix."
The findings aren't exactly shocking. Many studies have shown that code-generating AI tends to introduce security vulnerabilities and errors, owing to weaknesses in areas like the ability to understand programming logic. One recent evaluation of Devin, a popular AI coding tool, found that it could only complete three out of 20 programming tests.
But Microsoft's work is one of the most detailed looks yet at a persistent problem area for models. It likely won't dampen investor enthusiasm for AI-powered coding tools, but with any luck, it'll make developers, and their higher-ups, think twice about letting AI run the coding show.
For what it's worth, a growing number of tech leaders have disputed the notion that AI will automate away coding jobs. Microsoft co-founder Bill Gates has said he thinks programming as a profession is here to stay. So have Replit CEO Amjad Masad, Okta CEO Todd McKinnon, and IBM CEO Arvind Krishna.