From the moment OpenAI chief executive Sam Altman walked on stage, it was clear this would not be a typical interview.
Altman and OpenAI COO Brad Lightcap stood awkwardly toward the back of the stage at a packed venue in San Francisco that usually hosts jazz concerts. Hundreds of people had filled the steep theater on Tuesday night to watch Kevin Roose, a New York Times columnist, and Casey Newton of Platformer record a live episode of their popular technology podcast, Hard Fork.
Altman and Lightcap were the main event, but they had come out too early. Roose explained that he and Newton had planned to discuss, ideally before the OpenAI executives were supposed to come out, a list of headlines written about OpenAI in the weeks leading up to the event.
“This is more fun that we’re here for it,” Altman said. Seconds later, OpenAI’s chief executive asked, “Are you going to talk about where you sue us because you don’t like user privacy?”
Within minutes of the show starting, Altman steered the conversation toward the New York Times’ lawsuit against OpenAI and its largest investor, Microsoft, in which the publisher alleges that Altman’s company improperly used its articles to train large language models. Altman was particularly peeved about a recent development in the lawsuit, in which lawyers representing the New York Times asked OpenAI to retain consumer ChatGPT and API customer data.
“The New York Times, one of the great institutions, truly, for a long time, is taking a position that we should have to preserve our users’ logs even if they’re chatting in private mode, even if they’ve asked us to delete them,” Altman said. “Still love the New York Times, but that’s one we feel strongly about.”
For several minutes, OpenAI’s chief executive pressed the podcasters to share their personal opinions about the New York Times lawsuit; they demurred, noting that as journalists whose work appears in the New York Times, they are not involved in the suit.
Altman and Lightcap’s early entrance lasted only a few minutes, and the rest of the interview proceeded, seemingly, as planned. But the flare-up felt indicative of a larger turning point Silicon Valley seems to be approaching in its relationship with the media industry.
In recent years, multiple publishers have filed lawsuits against OpenAI, Anthropic, Google, and Meta for training AI models on copyrighted works. At a high level, these lawsuits argue that AI models have the potential to devalue, and even replace, the copyrighted works produced by media outlets.
But the tide may be turning in favor of the tech companies. Earlier this week, OpenAI competitor Anthropic scored a major victory in its legal battle with publishers: a federal judge ruled that Anthropic’s use of books to train AI models was legal in some circumstances, a decision that could have broad implications for other publishers’ lawsuits against OpenAI, Google, and Meta.
Perhaps Altman and Lightcap felt emboldened by that industry win heading into their live interview with New York Times journalists. But these days, OpenAI is fending off threats from every direction, and that became clear throughout the night.
Mark Zuckerberg has recently been trying to hire away OpenAI’s top talent, offering $100 million compensation packages to join Meta’s superintelligence lab, Altman revealed weeks ago on his brother’s podcast.
When asked whether Meta’s CEO genuinely believes in superintelligent AI systems, or whether the pitch is just a recruiting strategy, Lightcap quipped: “I think [Zuckerberg] believes he is superintelligent.”
Later, Roose asked Altman about OpenAI’s relationship with Microsoft, which has reportedly reached a boiling point in recent months as the partners negotiate a new contract. While Microsoft was once a major accelerant for OpenAI, the two now compete in enterprise software and other areas.
“In any deep partnership, there are points of tension, and we certainly have those,” Altman said. “We’re both ambitious companies, so we do find some flashpoints, but I would expect that it is something that we find deep value in for both sides for a very long time to come.”
OpenAI’s leadership today seems to spend a lot of its time swatting down competitors and lawsuits. That could distract from the company’s ability to tackle broader issues around AI, such as how to safely deploy highly intelligent AI systems at scale.
At one point, Newton asked OpenAI’s leaders what they made of his recent stories about mentally unstable people using ChatGPT to spiral down dangerous rabbit holes, including discussing conspiracy theories or suicide with the chatbot.
Altman said OpenAI takes a number of steps to prevent these conversations, such as cutting them off early or directing users to professional services where they can get help.
“We don’t want to slide into the mistakes that I think the previous generation of tech companies made by not reacting quickly enough,” Altman said. In response to a follow-up question, OpenAI’s chief executive added: “However, to users that are in a fragile enough mental place, that are on the edge of a psychotic break, we haven’t yet figured out how a warning gets through.”
