President Donald Trump signed an executive order Thursday afternoon directing federal agencies to challenge state AI laws, arguing that startups need relief from a “mess” of rules. Legal experts and startups, meanwhile, say the order could prolong uncertainty, sparking legal battles that leave startups navigating changing state requirements while waiting to see if Congress can agree on a single national framework.
The order, titled “Ensuring a National Policy Framework for Artificial Intelligence,” directs the Justice Department to form a task force within 30 days to challenge certain state laws on the grounds that artificial intelligence is interstate commerce and should be federally regulated. It gives the Commerce Department 90 days to compile a list of “burdensome” state AI laws, an assessment that could affect states’ eligibility for federal funds, including broadband grants.
The order also asks the Federal Trade Commission and the Federal Communications Commission to explore federal standards that could preempt state rules and directs the administration to work with Congress on a single AI law.
The order lands amid a broader push to loosen state-by-state AI rules, after efforts in Congress to halt state legislation stalled. Lawmakers in both parties argued that without a federal standard, blocking states from acting could leave consumers exposed and companies largely unregulated.
“This executive order under David Sacks is a gift to Silicon Valley oligarchs who use their influence in Washington to shield themselves and their companies from accountability,” said Michael Kleinman, head of US policy at the Future of Life Institute, which focuses on reducing extreme risks from transformative technologies.
Sacks, Trump’s AI and crypto policy czar, has been a leading voice behind the administration’s push to preempt state AI laws.
Even advocates of a national framework acknowledge that the order does not itself create one. With state laws still enforceable unless courts block them or states stop enforcement, startups could face an extended transition period.
Sean Fitzpatrick, CEO of LexisNexis North America, UK and Ireland, told TechCrunch that states will defend their consumer protection laws in court, with cases likely to escalate to the Supreme Court.
While supporters argue the order could reduce uncertainty by centralizing the fight to regulate AI in Washington, critics say the legal battles will create an immediate headwind for startups navigating conflicting state and federal requirements.
“Because startups prioritize innovation, they typically don’t have … robust regulatory governance programs until they reach a scale that requires a program,” Hart Brown, lead author of Oklahoma Gov. Kevin Stitt’s Task Force on AI and Emerging Technology Recommendations, told TechCrunch. “These programs can be expensive and time-consuming to respond to in a very dynamic regulatory environment.”
Arul Nigam, co-founder at Circuit Breaker Labs, a startup red-teaming conversational and mental health AI chatbots, echoed those concerns.
“There is uncertainty about, ‘As [AI companion and chatbot companies], do you have to self-regulate?’” Nigam told TechCrunch, noting that the patchwork of state AI laws is hurting smaller startups in his field. “Are there open source standards that they have to adhere to? Should they keep building?”
He added that he hopes Congress can move more quickly now to pass a stronger federal framework.
Andrew Gamino-Cheong, CTO and co-founder of AI governance company Trustible, told TechCrunch that the EO could backfire on its pro-AI-innovation goals: “Big tech and big AI startups have the capital to hire lawyers to help them figure out what to do, or they can just hedge their bets. Uncertainty hurts startups the most, especially those that can’t raise billions in funding,” he said.
He added that legal ambiguity makes it harder to sell to risk-sensitive customers such as legal groups, financial firms and healthcare organizations, lengthening sales cycles and driving up compliance work and insurance costs. “Even the perception that AI is uncontrollable will reduce trust in AI,” which is already low, threatening adoption, Gamino-Cheong said.
Gary Kibel, a partner at Davis + Gilbert, said businesses would welcome a single national standard, but “an executive order is not necessarily the appropriate vehicle to override laws that states have properly enacted.” He warned that the current uncertainty leaves open two extremes: highly restrictive rules or no action at all, either of which could create a “Wild West” that favors Big Tech’s ability to absorb risk and wait things out.
Meanwhile, Morgan Reed, president of The App Association, urged Congress to quickly enact a “comprehensive, targeted, and risk-based national AI framework. We can’t have a patchwork of state AI laws, and a long court battle over the constitutionality of an Executive Order is no better.”
