Anthropic is launching a program to fund the development of new types of benchmarks capable of evaluating the performance and impact of AI models, including generative models like its own Claude.
Anthropic’s program, unveiled Monday, will make payments to third-party organizations that can, as the company puts it in a blog post, “effectively measure advanced capabilities in artificial intelligence models.” Interested parties may submit applications for evaluation on a rolling basis.
“Our investment in these evaluations is intended to elevate the entire field of AI safety, providing valuable tools that benefit the whole ecosystem,” Anthropic wrote on its official blog. “Developing high-quality, safety-relevant evaluations remains challenging, and demand is outpacing supply.”
As we’ve pointed out before, AI has a benchmarking problem. The most commonly cited AI benchmarks today do a poor job of capturing how the average person actually uses the systems being tested. There are also questions about whether some benchmarks, particularly those released before the dawn of modern generative AI, even measure what they purport to measure, given their age.
The very high-level, harder-than-it-sounds solution Anthropic proposes is to create challenging benchmarks with an emphasis on AI safety and societal impact, via new tools, infrastructure and methods.
The company specifically calls for tests that assess a model’s ability to perform tasks such as carrying out cyberattacks, “enhancing” weapons of mass destruction (e.g. nuclear weapons), and manipulating or deceiving people (e.g. via deepfakes or disinformation). For AI risks related to national security and defense, Anthropic says it’s committed to developing an “early warning system” of sorts for identifying and assessing risks, though it didn’t reveal in the blog post what such a system might entail.
Anthropic also says it intends its new program to support research into benchmarks and “end-to-end” tasks that probe AI’s potential to aid scientific study, converse in multiple languages, and mitigate ingrained biases, as well as self-censor toxicity.
To achieve all this, Anthropic envisions new platforms that allow subject-matter experts to develop their own evaluations, as well as large-scale trials of models involving “thousands” of users. The company says it has hired a full-time coordinator for the program and may purchase or expand projects it believes have the potential to scale.
“We offer a range of financing options tailored to the needs and stage of each project,” Anthropic writes in the post, though an Anthropic spokesperson declined to elaborate on those options. “Teams will have the opportunity to interact directly with Anthropic’s domain experts from the frontier red team, fine-tuning, trust and safety, and other relevant teams.”
Anthropic’s effort to support new AI benchmarks is commendable — assuming, of course, that there’s enough cash and manpower behind it. But given the company’s commercial ambitions in the AI race, it may be hard to fully trust.
In the blog post, Anthropic is rather transparent about the fact that it wants some of the evaluations it funds to align with the AI safety classifications it developed (with some input from third parties, such as the nonprofit AI research organization METR). That is well within the company’s prerogative. But it may also force applicants to the program to accept definitions of “safe” or “dangerous” AI with which they may not agree.
A portion of the AI community is also likely to take issue with Anthropic’s references to “catastrophic” and “deceptive” AI risks, such as the dangers of nuclear weapons. Many experts say there’s little evidence to suggest that AI as we know it will gain human-surpassing capabilities anytime soon, if ever. Claims of imminent “superintelligence” serve only to draw attention away from the pressing AI regulatory issues of the day, such as AI’s hallucinatory tendencies, these experts add.
In its post, Anthropic writes that it hopes its program will serve as a “catalyst for progress toward a future where comprehensive AI evaluation is an industry standard.” That’s a mission the many open, corporate-unaffiliated efforts to create better AI benchmarks can identify with. But it remains to be seen whether those efforts are willing to join forces with an AI vendor whose loyalty ultimately rests with shareholders.