OpenAI says it wants to implement ideas from the public on how to ensure its future AI models are “aligned with the values of humanity.”
To that end, the AI startup is forming a new Collective Alignment team of researchers and engineers to create a system for collecting and “encoding” public input on the behaviors of its models into OpenAI products and services, the company announced today.
“We will continue to work with external consultants and grant teams, including the pilots that are running to integrate … prototypes into the direction of our models,” OpenAI writes in a blog post. “We are recruiting … research engineers from different technical backgrounds to help build this work with us.”
The Collective Alignment team is an outgrowth of OpenAI’s public grant program, launched last May, which funded experiments in creating a “democratic process” for deciding what rules AI systems should follow. The goal of the program, OpenAI said at its debut, was to fund individuals, groups and organizations developing proofs of concept that could answer questions about guardrails and governance for AI.
In a blog post today, OpenAI recapped the grantees’ work, which ranged from video chat interfaces to platforms for crowdsourced audits of AI models and “approaches to mapping beliefs into dimensions that can be used to improve the behavior of the models.” All of the code used in the grantees’ work was made public this morning, along with brief summaries of each proposal and high-level takeaways.
OpenAI has sought to characterize the program as divorced from its commercial interests. But that’s a bit hard to swallow, given OpenAI CEO Sam Altman’s criticism of regulation in the EU and elsewhere. Altman, along with OpenAI president Greg Brockman and chief scientist Ilya Sutskever, has repeatedly argued that the pace of innovation in AI is so fast that we cannot expect existing authorities to adequately rein in the technology, hence the need to crowdsource the work.
Some OpenAI competitors, including Meta, have accused the company (among others) of trying to secure “regulatory capture of the AI industry” by lobbying against open AI R&D. OpenAI unsurprisingly denies this, and it will likely point to the grant program (and the Collective Alignment team) as an example of its “openness.”
In any case, OpenAI has come under increasing scrutiny from policymakers, facing a UK investigation into its relationship with close partner and investor Microsoft. The startup recently sought to shrink its regulatory risk in the EU around data privacy, leveraging a Dublin-based subsidiary to reduce the ability of some of the bloc’s supervisory authorities to act unilaterally on concerns.
Yesterday, no doubt in part to appease regulators, OpenAI announced that it is working with organizations to try to limit the ways its technology could be used to sway or influence elections through malicious means. The startup’s efforts include making it more obvious when images are AI-generated using its tools, and developing approaches to identify generated content even after images have been modified.