You can capture a year through product launches, or you can measure it in the biggest moments that change the way we look at AI. The AI industry is constantly churning out news like big acquisitions, indie developer successes, public outcry against sketchy products, and existentially dangerous contract negotiations—it’s a lot to digest, so we take a look at where we are and where we’ve been so far this year.
Anthropic vs. Pentagon
Once business partners, Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth reached a bitter impasse in February as they renegotiated the contracts that dictate how the US military can use Anthropic’s AI tools.
Anthropic has established a hard line against using its artificial intelligence for mass surveillance of Americans or to power autonomous weapons that can attack without human supervision. Meanwhile, the Pentagon has argued that the Defense Department — which President Donald Trump’s administration calls the War Department — should have access to Anthropic’s models for any “legitimate use.” Government representatives took offense to the idea that the military should be confined to the rules of a private company, but Amodei stood his ground.
“Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to specific military operations or attempted to limit the use of our technology in an ad hoc manner,” Amodei wrote in a statement dealing with the situation. “However, in a narrow set of cases, we believe that artificial intelligence can undermine, rather than defend, democratic values.”
The Pentagon gave Anthropic a deadline to agree to their contract. Hundreds of Google and OpenAI employees signed an open letter calling on their respective leaders to respect Amodei’s limits and refuse to back down on autonomous weapons or domestic surveillance.
The deadline passed without Anthropic agreeing to the Pentagon’s demands. Trump has asked federal agencies to phase out the use of Anthropic tools six month transition period and called the $380 billion AI company a “radical left-wing, woke company” in a social media post in all caps. The Pentagon then moved to declare Anthropic a “supply chain risk,” a designation usually reserved for foreign adversaries that prevents any company that works with Anthropic from doing business with the U.S. military. (Anthropic has since sued to challenge the designation.)
Human rival OpenAI then swooped in and announced it had reached an agreement allowing its own models to be deployed in classified situations. It has been a shock to the tech community ever since reports said that OpenAI will adhere to Anthropic’s red lines governing the use of artificial intelligence for the military.
Techcrunch event
San Francisco, California
|
13-15 October 2026
Common sentiment would suggest that people found OpenAI’s move awful — the day after the OpenAI deal was announced, ChatGPT installs increased 295% day-to-day, and Anthropic’s Claude shot to No. 1 on the App Store. OpenAI’s chief hardware officer Caitlin Kalinowski resigned in response to the deal, saying it was “rushed without defined guardrails.”
OpenAI told TechCrunch that it believes its deal “cleans up [its] red lines: no autonomous weapons and no autonomous surveillance.”
As this saga unfolds, it will have major implications for the future of how AI is developed in warfare, potentially changing the course of history – you know, no big deal…
“Vibe-coded” OpenClaw accelerates shift to agent AI
February was OpenClaw month, and its impact continues to reverberate. In quick succession, the vibe-coded AI assistant app went viral, spawned a series of spinoff companies, suffered from privacy issues, and was then acquired by OpenAI. Even one of the companies built on OpenClaw, a Reddit clone for AI agents called Moltbook, was recently acquired by Meta. This shellfish-themed ecosystem has whipped Silicon Valley into an absolute frenzy.
Created by Peter Steinberger — who has since joined OpenAI — OpenClaw is a wrapper for AI models like Claude, ChatGPT, Google’s Gemini, or xAI’s Grok. What sets it apart is that it allows people to communicate with AI agents in natural language through the most popular chat apps, such as iMessage, Discord, Slack or WhatsApp. There’s also a public marketplace where people can code and upload “skills” for people to add to their AI agents, making it possible to automate basically anything that can be done on a computer.
If this seems too good to be true, that’s because it is. For an AI agent to be effective as a personal assistant, it must have access to email, credit card numbers, text messages, computer files, etc. If it’s going to be hacked, a lot could go wrong, and unfortunately, there’s no way to completely secure these agents from direct injection attacks.
“It’s just an agent sitting with a bunch of credentials in a box connected to everything — your email, your messaging platform, whatever you use.” Ian AhlCTO at Permiso Security, told TechCrunch. “So that means when you get an email and maybe someone can put a little injection technique to do an action, [and] that agent sitting in your box with access to everything you’ve given it can now take that action.”
An AI security researcher at Meta said OpenClaw ran into her inbox, deleting all her emails despite repeated calls to stop. “I had to run to my Mac mini like I was defusing a bomb” to physically disconnect the device, he wrote in a now-viral post on Xwhich included images of the ignored stop prompts as evidence.
Despite the security risks, the technology piqued OpenAI’s interest enough for an acquisition.
Other tools built on OpenClaw, including Moltbook – a Reddit-like “social network” where AI agents can communicate with each other – ended up going more viral than OpenClaw itself.
In one case, a the post went viral in which an AI agent appeared to encourage his colleagues to develop their own secret, end-to-end encrypted language where they could organize with each other without humans knowing.
But researchers soon discovered that the vibe-encoded Moltbook was not very secure, meaning it was very easy for people to pose as AI to make posts that would cause viral social hysteria.
Again, although the talk surrounding Moltbook was based more on panic than reality, Meta saw something in the app and announced that Moltbook and its creators, Matt Schlicht and Ben Parr, would be joining Meta Superintelligence Labs.
It seems strange that Meta would buy a social network where all the users are bots. While Meta hasn’t revealed much about the acquisition, we think the acquisition of Moltbook has more to do with gaining access to the talent behind it, who are excited about experimenting with AI agent ecosystems. CEO Mark Zuckerberg has he said it himself: He thinks that one day, every business will have business AI.
As we watch the buzz around OpenClaw, Moltbook, and NanoClaw play out, it looks like those who predicted an AI agency future may be on to something, at least for now.
Chip shortages, hardware drama and data center demands are escalating
The harsh demands of the AI industry – requiring computing power and data centers in unprecedented volumes – are reaching a point where the average consumer has no choice but to pay attention. Now it may not even be possible for the industry to satisfy it astronomical requirements for memory chipsand consumers are already seeing the prices of their phones, laptops, cars and other hardware go up.
So far, analysts from IDC and Counterpoint have predicted that smartphone shipments, for example, will plummet about 12% to 13% this year. Apple has already raised MacBook Pro prices by up to $400.
Google, Amazon, Meta and Microsoft plan to spend up to a total of 650 billion dollars in data centers alone this year, which is an estimated 60% increase from last year.
If the lack of chips doesn’t hit you in your wallet, it can hit your community at large. Only in the US, pretty much 3,000 new data centers are under construction, adding to the 4,000 already operating in the country. The need for workers to build these data centers is quite significant “men’s camps” popped up in Nevada and Texas, trying to lure workers with the promise of golf simulator arcades and grilled-to-order steaks.
Not only does data center construction have a long-term impact on the environment, it also creates health risks for nearby residents, polluting the air and affecting the safety of nearby water sources.
Meanwhile, one of the most valuable hardware and chip developers, Nvidia, is reshaping its relationship with leading AI companies like OpenAI and Anthropic. Nvidia is a constant supporter of these companies, fueling concerns around the circularity of the AI industry and how many of these eye-popping valuations are based on repeat deals with each other. Last year, for example, Nvidia invested $100 billion in OpenAI stock, and OpenAI said at the time that it would buy $100 billion worth of Nvidia chips.
So it was a surprise when Nvidia CEO Jensen Huang said his company would stop investing in OpenAI and Anthropic. He said this is because companies are planning to go public later this year, although that logic doesn’t make sense, as investors typically pump more money ahead of an IPO to extract as much value as possible.
