Silicon Valley leaders, including White House AI & Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon, caused an uproar online this week with their comments about groups promoting AI safety. In separate instances, they claimed that some AI safety advocates are not as virtuous as they appear and are acting either in their own interest or in the interest of billionaire puppeteers behind the scenes.
AI safety groups who spoke to TechCrunch say the claims by Sacks and OpenAI are Silicon Valley's latest attempt to intimidate its critics, but certainly not the first. In 2024, some venture capital firms spread rumors that a California AI safety bill, SB 1047, would send startup founders to prison. The Brookings Institution characterized that rumor as one of many misrepresentations of the bill, but Gov. Gavin Newsom ultimately vetoed it anyway.
Whether Sacks and OpenAI intended to intimidate critics or not, their actions have rattled AI safety advocates. Many nonprofit leaders contacted by TechCrunch last week spoke on condition of anonymity to protect their groups from retaliation.
The controversy underscores Silicon Valley's growing tension between building artificial intelligence responsibly and building it into a massive consumer product, a topic that my colleagues Kirsten Korosec, Anthony Ha, and I unpack on this week's Equity podcast. We also look at a new AI safety law passed in California to regulate chatbots and OpenAI's approach to flirting in ChatGPT.
On Tuesday, Sacks wrote a post on X claiming that Anthropic, which has raised concerns about AI's potential to contribute to unemployment, cyberattacks, and catastrophic harm to society, is simply fearmongering to pass laws that will benefit itself and suffocate smaller startups in red tape. Anthropic was the only major AI lab to endorse California Senate Bill 53 (SB 53), a bill that sets safety reporting requirements for large AI companies, which was signed into law last month.
Sacks was responding to a viral essay by Anthropic co-founder Jack Clark about his fears about artificial intelligence. Clark had delivered the essay as a talk at the Curve AI safety conference in Berkeley weeks earlier. Sitting in the audience, I found it a genuine account of a technologist's reservations about his products, but Sacks didn't see it that way.
Sacks said Anthropic is pursuing a "sophisticated regulatory capture strategy," though it's worth noting that a truly sophisticated strategy likely wouldn't involve making an enemy of the federal government. In a follow-up post on X, Sacks noted that Anthropic has "firmly positioned itself as an enemy of the Trump administration."
Also this week, OpenAI Chief Strategy Officer Jason Kwon wrote a post on X explaining why the company was subpoenaing AI safety nonprofits like Encode, a nonprofit that advocates for responsible AI policy. (A subpoena is a legal order requiring documents or testimony.) Kwon said that after Elon Musk sued OpenAI, over concerns that the ChatGPT maker had strayed from its nonprofit mission, OpenAI found it suspicious that several organizations also raised objections to its restructuring. Encode filed an amicus brief in support of Musk's lawsuit, and other nonprofits have spoken out publicly against OpenAI's restructuring.
“This raised questions of transparency about who was funding them and whether there was coordination,” Kwon said.
NBC News reported this week that OpenAI sent broad subpoenas to Encode and six other nonprofit organizations that criticized the company, demanding their communications with two of OpenAI's biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also requested Encode's communications regarding its support for SB 53.
A prominent AI safety leader told TechCrunch that there is a growing divide between OpenAI's government affairs team and its research organization. While OpenAI's safety researchers often publish reports on the dangers of AI systems, OpenAI's policy unit lobbied against SB 53, saying it would prefer uniform rules at the federal level.
OpenAI's Head of Mission Alignment, Joshua Achiam, weighed in on his company's subpoenas to nonprofits in a post on X this week.
“At a possible risk to my entire career I will say: this does not look great,” Achiam said.
Brendan Steinhauser, CEO of the nonprofit AI safety organization Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a conspiracy led by Musk. He argues that this is not the case, and that much of the AI safety community is quite critical of xAI's safety practices, or lack thereof.
"On OpenAI's part, this is intended to silence critics, intimidate them, and deter other nonprofits from doing the same," Steinhauser said. "As for Sacks, I think he's concerned that [the AI safety] movement is growing and people want to hold these companies accountable."
Sriram Krishnan, the White House's senior policy adviser on artificial intelligence and a former general partner at a16z, joined the conversation this week with a social media post of his own, calling AI safety advocates out of touch. He urged AI safety organizations to talk to "people in the real world who are using, selling, adopting AI in their homes and organizations."
A recent Pew study found that about half of Americans are more worried than excited about artificial intelligence, but it's not clear what exactly worries them. Another recent study went into more detail and found that American voters care more about job losses and deepfakes than about the catastrophic risks posed by AI, which the AI safety movement is heavily focused on.
Addressing these safety concerns could come at the expense of the AI industry's rapid growth, a trade-off that worries many in Silicon Valley. With AI investment underpinning much of America's economy, the fear of over-regulation is understandable.
But after years of unchecked AI progress, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley's efforts to push back against safety-focused groups may be a sign that those groups are working.