Keeping up with an industry as fast-paced as artificial intelligence is a tall order. So, until an AI can do it for you, here’s a helpful roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.
By the way, TechCrunch plans to launch an AI newsletter soon. Stay tuned. In the meantime, we’re increasing the cadence of our semi-regular AI column from twice a month (or so) to weekly, so be on the lookout for more editions.
This week in artificial intelligence, OpenAI once again dominated the news cycle (despite Google’s best efforts) with a product launch, but also some palace intrigue. The company unveiled GPT-4o, its most capable model yet, and a few days later effectively disbanded a team working on the problem of developing controls to prevent “superintelligent” AI systems from going rogue.
The group’s breakup generated plenty of headlines, as expected. Reports, including our own, suggest that OpenAI deprioritized the team’s safety research in favor of launching new products like the aforementioned GPT-4o, ultimately leading to the resignations of the team’s two co-leads, Jan Leike and OpenAI co-founder Ilya Sutskever.
Superintelligent AI is more theory than reality at this point. It’s not clear when — or if — the tech industry will achieve the innovations necessary to create artificial intelligence capable of accomplishing any task a human can. But this week’s coverage seems to confirm one thing: that OpenAI’s leadership—particularly CEO Sam Altman—has increasingly chosen to prioritize products over safeguards.
Altman, for instance, reportedly “enraged” Sutskever by rushing the launch of AI-powered features at OpenAI’s first developer conference last November. And he’s said to have been critical of Helen Toner, a director at Georgetown’s Center for Security and Emerging Technology and a former OpenAI board member, over a paper she co-authored that cast OpenAI’s approach to safety in a critical light, to the point where he attempted to push her off the board.
Over the last year or so, OpenAI has let its chatbot store fill up with spam and (allegedly) scraped data from YouTube against the platform’s terms of service, all while voicing ambitions to let its AI generate depictions of porn and gore. Certainly, safety seems to have taken a back seat at the company, and a growing number of OpenAI safety researchers have concluded that their work would be better supported elsewhere.
Here are some other notable AI stories from the past few days:
- OpenAI + Reddit: In more OpenAI news, the company has reached an agreement with Reddit to use the social networking site’s data to train artificial intelligence models. Wall Street welcomed the deal with open arms — but Reddit users might not be so happy.
- Google AI: Google hosted its annual I/O developer conference this week, during which it debuted a ton of AI products. We’ve rounded them up here, from the video-generating Veo to AI-curated results in Google Search to upgrades to Google’s Gemini chatbot apps.
- Anthropic hires Krieger: Mike Krieger, one of the co-founders of Instagram and, most recently, the co-founder of personalized news app Artifact (which was recently acquired by TechCrunch’s parent company, Yahoo), is joining Anthropic as the company’s first chief product officer. He will oversee both the company’s consumer and business efforts.
- AI for kids: Anthropic announced last week that it will begin allowing developers to create kid-focused apps and tools built on its AI models, as long as they follow certain rules. Notably, rivals such as Google don’t allow their AI to be built into apps aimed at younger users.
- AI Film Festival: AI startup Runway held its second AI film festival earlier this month. The takeaway? Some of the most powerful moments in the showcase came not from the AI, but from the more human elements.
More machine learning
AI safety is obviously top of mind this week in light of the OpenAI departures, but Google DeepMind is plowing ahead with a new “Frontier Safety Framework.” Basically, it’s the organization’s strategy for identifying and hopefully preventing any runaway capabilities; it doesn’t have to be AGI, it could be a malware generator gone mad or the like.
The framework has three steps: 1. Identify potentially harmful capabilities in a model by simulating its paths of development. 2. Evaluate models regularly to detect when they’ve reached known “critical capability levels.” 3. Apply a mitigation plan to prevent exfiltration (by others or by the model itself) or problematic deployment. There are more details here. It might sound like an obvious series of actions, but it’s important to formalize them; otherwise everyone is just winging it. That’s how you get the bad AI.
A rather different danger has been identified by Cambridge researchers, who are rightly concerned about the proliferation of chatbots trained on a dead person’s data in order to provide a superficial simulation of that person. You might (like me) find the whole concept a bit repulsive, but it could be used in grief management and other scenarios if we’re careful. The problem is that we are not being careful.
“This area of artificial intelligence is an ethical minefield,” said lead researcher Katarzyna Nowaczyk-Basińska. “We need to start thinking now about how to mitigate the social and psychological risks of digital immortality, because the technology is already here.” The team identifies numerous scams, potential good and bad outcomes, and discusses the concept generally (including fake services) in a paper published in Philosophy & Technology. Black Mirror predicts the future once again!
In less creepy AI applications, physicists at MIT are exploring machine learning as a useful (to them) tool for predicting the phase or state of a physical system, normally a statistical task that can become onerous with more complex systems. But train a machine learning model on the right data and ground it with some known material characteristics of a system, and you have a considerably more efficient way to go about it. Just another example of how ML is finding niches even in advanced science.
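To make that concrete, here’s a toy sketch (my own illustration, not the MIT group’s actual method) of training a classifier to label the phase of a simple magnet; the data generator, the critical temperature and the two features are invented purely for demonstration:

```python
# Toy illustration (not the MIT work): label the phase of a simple magnet from
# per-sample statistics. The data generator and features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
T_C = 2.27  # critical temperature of the 2D Ising model, in units of J/k_B

def sample_features(temps, n_spins=256):
    """Crudely mimic spin configurations: below T_C spins mostly align,
    above T_C they are disordered. Returns [|mean magnetization|, variance]."""
    feats = []
    for T in temps:
        bias = np.clip((T_C - T) / T_C, 0.0, 1.0)  # alignment strength
        spins = np.where(rng.random(n_spins) < 0.5 + 0.5 * bias, 1, -1)
        feats.append([abs(spins.mean()), spins.var()])
    return np.array(feats)

temps = rng.uniform(1.0, 3.5, size=2000)
X = sample_features(temps)
y = (temps < T_C).astype(int)  # 1 = ordered phase, 0 = disordered

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The point is just that, given labeled examples and a couple of physically meaningful features, an off-the-shelf classifier can stand in for a much more laborious statistical calculation.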
Over at CU Boulder, researchers are talking about how AI can be applied to disaster management. The technology could be useful for quickly predicting where resources will be needed, mapping damage, and even helping train responders, but people are (reasonably) hesitant to apply it to life-and-death scenarios.
Professor Amir Behzadan is trying to move this forward, saying that “Human-centered AI leads to more effective disaster response and recovery practices by promoting collaboration, understanding, and participation among team members, survivors, and stakeholders.” The work is still at the lab stage, but it’s important to think deeply about these things before trying to, say, automate the distribution of aid after a hurricane.
Finally, some interesting work from Disney Research, which looked at how to diversify the output of diffusion image generation models, which can produce similar results over and over again for certain prompts. Their solution? “Our sampling strategy anneals the conditioning signal by adding scheduled, monotonically decreasing Gaussian noise to the conditioning vector during inference to balance diversity and condition alignment.” I simply couldn’t have put it better myself.
The result is a much greater variety in angles, settings and general appearance in the image outputs. Sometimes you want this, sometimes you don’t, but it’s nice to have the option.
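If you’re curious what that looks like mechanically, here’s a rough, hypothetical sketch of conditioning-signal annealing in a sampling loop; denoise_step, the toy dynamics and the linear noise schedule are stand-ins of my own, not the paper’s actual method:

```python
# Toy sketch of conditioning-signal annealing (illustration only, not Disney
# Research's code). The conditioning embedding is perturbed with Gaussian noise
# whose scale decreases monotonically across denoising steps, trading early
# diversity for late prompt fidelity. denoise_step stands in for one
# reverse-diffusion step of a real model.
import numpy as np

rng = np.random.default_rng(42)

def denoise_step(x, cond, t):
    """Placeholder for a real model's reverse-diffusion update."""
    return x - 0.1 * (x - cond) / (t + 1)  # toy dynamics, for illustration only

num_steps = 50
sigma_start = 1.0                      # initial noise scale on the condition
cond_embedding = rng.normal(size=4)    # pretend prompt embedding
x = rng.normal(size=4)                 # start from pure noise

for t in range(num_steps):
    # Monotonically decreasing schedule: full noise at t = 0, none at the final step.
    sigma_t = sigma_start * (1.0 - t / (num_steps - 1))
    noisy_cond = cond_embedding + sigma_t * rng.normal(size=cond_embedding.shape)
    x = denoise_step(x, noisy_cond, t)

print("final sample (toy):", np.round(x, 3))
```

The intuition: a noisier conditioning vector early in sampling lets the model wander into different compositions, while the shrinking noise later on keeps the final image faithful to the prompt.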