OpenAI on Monday published what it calls an “economic plan” for artificial intelligence: a living document laying out the policies the company believes it can build on with the U.S. government and its allies.
The plan, which includes a pitch from Chris Lehane, OpenAI’s vice president of global affairs, asserts that the U.S. must act to attract billions in funding for the chips, data, power and talent necessary to “win in artificial intelligence.”
“Today, while some countries sideline AI and its economic potential,” Lehane wrote, “the U.S. government can pave the way for its AI industry to continue the nation’s global leadership in innovation while protecting national security.”
OpenAI has repeatedly called on the U.S. government to take more meaningful action on artificial intelligence and the infrastructure needed to support the technology’s development. The federal government has largely left AI regulation to the states, a situation OpenAI describes in the plan as untenable.
In 2024 alone, state legislators introduced nearly 700 AI-related bills, some of which conflict with one another. The Texas Responsible AI Governance Act, for example, imposes onerous liability requirements on developers of open-source AI models.
OpenAI CEO Sam Altman has also criticized existing federal laws such as the CHIPS Act, which aimed to revitalize the U.S. semiconductor industry by attracting domestic investment from the world’s leading chipmakers. In a recent interview with Bloomberg, Altman said that the CHIPS Act “[has not] been as effective as any of us could have hoped,” and that he believes there is “a real opportunity” for the Trump administration to “do something much better as a follow-up.”
“The thing I really deeply agree with [Trump] on is that it’s wild how hard it’s become to build things in the United States,” Altman said in the interview. “Power plants, data centers, anything like that. I understand how red tape piles up, but it’s not helpful for the country as a whole. It’s particularly not helpful when you think about what needs to happen for the U.S. to lead in AI. And the U.S. really needs to lead the way in artificial intelligence.”
To power the data centers needed to develop and run artificial intelligence, OpenAI’s plan recommends “dramatically” increased federal spending on power and data transmission, along with a meaningful build-out of “new energy sources” such as solar, wind farms and nuclear. OpenAI, along with its AI rivals, has previously thrown its support behind nuclear power projects, arguing that nuclear is needed to meet the electricity demands of next-generation server farms.
Tech giants Meta and AWS have hit snags with their nuclear efforts, though for reasons that have nothing to do with nuclear power itself.
In the short term, OpenAI’s plan suggests the government “develop best practices” for model development to protect against misuse, “streamline” the AI industry’s engagement with national security agencies, and develop export controls that allow models to be shared with allies while “limit[ing]” their export to “rival nations.” In addition, the plan encourages the government to share certain national security-related information, such as updates on threats to the AI industry, with vendors, and to help vendors secure the resources to evaluate their models for risks.
“The federal government’s approach to frontier model safety and security should streamline requirements,” the plan states. “Responsibly exporting models to our allies and partners will help them develop their own AI ecosystems, including their developer communities that innovate with AI and distribute its benefits, while building AI on American technology, not technology sponsored by the Chinese Communist Party.”
OpenAI already counts some U.S. government departments as partners and, should its plan gain traction among policymakers, stands to add more. The company has deals with the Pentagon for cybersecurity work and other related projects, and has partnered with defense startup Anduril to supply its AI technology to systems the U.S. military uses to counter drone attacks.
In its plan, OpenAI calls for the development of standards “recognized and respected” by other nations and international bodies on behalf of the U.S. private sector, but it stops short of endorsing mandatory rules or edicts. “[The government can create] a defined, voluntary pathway for companies that develop [AI] to work with the government to establish model assessments, model tests and information sharing to support companies’ safeguards,” the plan states.
The Biden administration took a similar approach with its AI executive order, which sought to establish several high-level, voluntary AI safety and security standards. The executive order created the U.S. AI Safety Institute (AISI), a federal body that studies risks in AI systems and has partnered with companies including OpenAI to evaluate model safety. But Trump and his allies have pledged to repeal Biden’s executive order, putting its codification, and the AISI itself, at risk.
OpenAI’s plan also addresses copyright as it relates to artificial intelligence, a hot-button topic. The company argues that AI developers should be able to use “public information,” including copyrighted content, to develop models.
OpenAI, along with many other AI companies, trains models on public data from around the web. The company has licensing agreements with various platforms and publishers, and offers limited ways for creators to “opt out” of its model development. But OpenAI has also said that it would be “impossible” to train AI models without using copyrighted material, and several creators have sued the company for allegedly training on their works without permission.
“[O]ther actors, including developers in other countries, make no effort to respect or cooperate with intellectual property rights holders,” the plan states. “If the U.S. and like-minded countries do not address this imbalance through sensible measures that help advance AI for the long term, the same content will still be used for AI training elsewhere, but to the benefit of other economies. [The government should ensure] that artificial intelligence has the ability to learn from universal, publicly available information, just as humans do, while protecting creators from unauthorized digital copies.”
It remains to be seen which parts of OpenAI’s plan, if any, make their way into legislation. But the proposals signal that OpenAI intends to remain a key player in the push for a unifying U.S. AI policy.
In the first half of last year, OpenAI more than tripled its lobbying expenditures, spending $800,000 versus $260,000 in all of 2023. The company has also brought former government leaders into its executive ranks, including former Defense Department official Sasha Baker, former NSA chief Paul Nakasone, and Aaron Chatterji, formerly chief economist at the Commerce Department under President Joe Biden.
As it recruits and expands its global affairs arm, OpenAI has been more vocal about which AI laws and rules it favors, for example throwing its weight behind Senate bills that would establish a federal AI rulemaking body and provide federal grants for AI R&D. The company has also opposed bills, notably California’s SB 1047, arguing that it would stifle AI innovation and drive away talent.