While Washington’s rift with Anthropic has exposed a complete lack of coherent rules governing artificial intelligence, a bipartisan coalition of thinkers has put together something the administration has so far refused to produce: a framework for what responsible AI development should actually look like.
The pro-human declaration was finalized before last week’s Pentagon-Anthropic showdown, but the resonance between the two events was not lost on anyone involved.
“There is something very remarkable that has happened in America in just the last four months,” Max Tegmark, the MIT physicist and AI researcher who helped organize the effort, said in conversation with this outlet. “Polls all of a sudden [are showing] that 95% of all Americans oppose an anarchic race to superintelligence.”
The newly published document, signed by hundreds of experts, former officials and public figures, begins with the no-nonsense observation that humanity is at a fork in the road. One path, which the manifesto calls “the struggle for replacement,” leads to people being replaced first as workers, then as decision-makers, as power accumulates in unaccountable institutions and their machines. The other leads to artificial intelligence that massively expands human potential.
The latter scenario hinges on five key pillars: keeping humans in charge, avoiding concentration of power, protecting the human experience, preserving individual freedom, and holding AI companies legally accountable. Among its most muscular provisions are a blanket ban on the development of superintelligence until there is scientific consensus that it can be built safely and with genuine democratic approval; mandatory kill switches in powerful systems; and a ban on architectures capable of self-replication, autonomous self-improvement, or shutdown resistance.
The statement’s publication coincides with a period that makes its urgency much easier to appreciate. Last Friday, Defense Secretary Pete Hegseth branded Anthropic — whose AI already runs on classified military platforms — a “supply chain risk,” a label usually reserved for companies with ties to China, after the company refused to grant the Pentagon unrestricted use of its technology. Hours later, OpenAI cut its own deal with the Department of Defense, an agreement that legal experts say will be difficult to enforce in any meaningful way. The episode laid bare just how costly congressional inaction on AI has become.
As Dean Ball, senior fellow at the Foundation for American Innovation, told the New York Times at the time, “This isn’t just some dispute over a contract. This is the first conversation we’ve had as a country about controlling AI systems.”
When we spoke, Tegmark offered an analogy most people can grasp. “You never have to worry that some drug company is going to release some new drug that causes massive harm before people figure out how to make it safe,” he said, “because the FDA won’t let them release anything until it’s safe enough.”
Washington turf wars rarely generate the kind of public pressure that changes laws. Instead, Tegmark sees child safety as the pressure point most likely to break the current impasse. Indeed, the statement calls for mandatory pre-release testing of AI products — particularly chatbots and companion apps aimed at younger users — covering risks such as increased suicidal ideation, worsening mental health conditions, and emotional manipulation.
“If some creepy old man is texting an 11-year-old pretending to be a young girl and trying to get that kid to kill himself, the guy can go to jail for that,” Tegmark said. “We already have laws. It’s illegal. Why is it any different if a machine does it?”
He believes that once the principle of pre-market testing for children’s products is established, the scope will almost inevitably expand. “People will come and be like — let’s add some other requirements. Maybe we should also test that this can’t help terrorists build bioweapons. Maybe we should test to make sure that superintelligence doesn’t have the ability to subvert the U.S. government.”
It’s no small thing that former Trump adviser Steve Bannon and President Obama’s national security adviser Susan Rice signed the same document — along with former Joint Chiefs Chairman Mike Mullen and progressive religious leaders.
“What they agree on, of course, is that everyone is human,” Tegmark said. “If it’s going to come down to whether we want a future for humans or a future for machines, of course they’re going to be on the same side.”
