An organization developing math benchmarks for artificial intelligence didn't disclose until relatively recently that it had received funding from OpenAI, drawing allegations of impropriety from some in the AI community.
Epoch AI, a nonprofit primarily funded by Open Philanthropy, a research and grant-making foundation, revealed on December 20 that OpenAI had supported the creation of FrontierMath, a test of expert-level problems designed to measure an AI's mathematical skill. FrontierMath was one of the benchmarks OpenAI used to showcase o3, its upcoming flagship AI.
In a post on the forum LessWrong, an Epoch AI contributor with the username "Meemi" said that many contributors to the FrontierMath benchmark weren't made aware of OpenAI's involvement until it was made public.
"Communication on this has been opaque," Meemi wrote. "In my view, Epoch AI should have disclosed OpenAI's funding, and contractors should have transparent information about the potential use of their work when choosing whether to work on a benchmark."
On social media, some users raised concerns that the secrecy could erode FrontierMath's reputation as an objective benchmark. Beyond funding FrontierMath, OpenAI had visibility into many of the problems and solutions in the benchmark, something Epoch AI didn't reveal until December 20, when o3 was announced.
In a post on X, Carina Hong, a PhD student in mathematics at Stanford, also claimed that OpenAI had privileged access to FrontierMath thanks to its deal with Epoch AI, and that this didn't sit well with some contributors.
"Six mathematicians who have contributed significantly to the FrontierMath benchmark have confirmed [to me] … that they were unaware OpenAI would have exclusive access to this benchmark (and that others wouldn't)," Hong said. "Most say they are not sure they would have contributed had they known."
In a response to Meemi's post, Tamay Besiroglu, deputy director of Epoch AI and one of the organization's co-founders, maintained that FrontierMath's integrity hadn't been compromised but admitted that Epoch AI "made a mistake" in not being more transparent.
"We were restricted from disclosing the partnership until o3 launched, and in retrospect we should have negotiated harder for the ability to be transparent with the benchmark's contributors as soon as possible," Besiroglu wrote. "Our mathematicians deserved to know who could access their work. Even though we were contractually limited in what we could say, we should have made transparency with our contributors a non-negotiable part of our agreement with OpenAI."
Besiroglu added that while OpenAI has access to FrontierMath, it has a "verbal agreement" with Epoch AI not to use FrontierMath's problem set to train its AI. (Training an AI on FrontierMath would be akin to teaching to the test.) Epoch AI also maintains a "separate holdout set" that serves as an additional safeguard for independently verifying FrontierMath results, Besiroglu said.
"OpenAI … fully supported our decision to maintain a separate, unseen holdout set," Besiroglu wrote.
However, muddying the waters, Epoch AI's chief mathematician, Elliot Glazer, noted in a Reddit post that Epoch AI hasn't yet been able to independently verify OpenAI's o3 FrontierMath results.
"My personal opinion is that [OpenAI's] score is legitimate (i.e., they didn't train on the dataset), and that they have no incentive to lie about internal benchmarking performance," Glazer said. "However, we can't vouch for them until our independent evaluation is complete."
The saga is yet another example of how hard it is to develop empirical benchmarks for evaluating AI, and to secure the resources that benchmark development requires without creating the perception of a conflict of interest.