Billionaire media mogul Barry Diller doesn’t think OpenAI CEO Sam Altman is untrustworthy, despite recent reports to the contrary. On stage at the Wall Street Journal’s “Future of Everything” conference this week, Diller vouched for the AI executive, who has been accused by some former colleagues and board members of being manipulative and deceptive at times.
Diller, who is friendly with Altman, was responding to a question about whether or not people should trust Altman to ensure that AI benefits humanity.
Specifically, he was asked about the theoretical form of artificial intelligence known as artificial general intelligence, or AGI, which could one day outperform humans at any task.
The media executive, co-founder of Fox Broadcasting and chairman of IAC and Expedia Group, said that while he believes Altman is sincere in his pursuits, sincerity isn’t really what people should be concerned about. Rather, it is the unknown consequences that artificial intelligence will bring.
“One of the big problems with artificial intelligence is that it goes beyond trust,” Diller said. “Maybe trust is irrelevant because the things that happen are a surprise to the people who make those things happen. And I’ve spent a lot of time with various people who have been in AI creation mode, and they have a sense of wonder. So… it’s the great unknown. We don’t know. They don’t know,” he explained.
“We’ve started something that’s going to change almost everything. It’s very little reported. Now if these huge investments are going to happen — I couldn’t care less. I’m not invested in it, but progress is going to be made,” Diller added.
But the media mogul said he believes most of the people leading the charge are good managers, saying he believes Altman is honest and “a decent person with good values.” (Diller wouldn’t say which of the AI leaders he thinks is being disingenuous, we should note.)
“But it’s not about managing them. It’s about … really dealing with the unknown. They don’t know what can happen when you get AGI, and we’re close to that. We’re not there yet, but we’re getting closer and closer, faster and faster. And we have to think about guardrails,” Diller noted.
Furthermore, he warned, if people don’t think about guardrails, then the alternative is that “another force, an AGI force, will do it on its own. And once that happens, once you release it, there’s no going back,” Diller said.
