Chris Lehane is one of the best in the business at making bad news disappear. Al Gore's press secretary during the Clinton years, Airbnb's chief policy fixer through every regulatory nightmare from here to Brussels: Lehane knows how to spin. Now, two years into what may be his toughest gig yet, as OpenAI's VP of global policy, his job is to convince the world that OpenAI really does care about democratizing artificial intelligence, even as the company increasingly behaves like every other tech giant before it.
I had 20 minutes with him onstage at the Elevate conference in Toronto earlier this week, 20 minutes to get past the talking points and at the real contradictions eating away at OpenAI's carefully constructed image. It wasn't easy, and I didn't entirely succeed. Lehane is genuinely good at his job. He's likable. He sounds reasonable. He admits uncertainty. He even talks about waking up at 3 a.m. worried about whether any of this will actually benefit humanity.
But good intentions don't mean much when your company is subpoenaing its critics, draining water and electricity from economically depressed towns, and bringing dead celebrities back to life to cement its market dominance.
The company's Sora problem is really at the root of everything else. The video-generation tool launched last week with copyrighted material seemingly baked right in. It was a bold move for a company already being sued by The New York Times, the Toronto Star, and half the publishing industry. As business and marketing, it was also brilliant. The invite-only app shot to the top of the App Store as people created digital versions of themselves; of OpenAI CEO Sam Altman; of characters like Pikachu, Mario, and Cartman from "South Park"; and of dead celebrities like Tupac Shakur.
Asked what drove OpenAI's decision to launch this newest version of Sora with these characters, Lehane gave me the standard pitch: Sora is a "general purpose technology," like electricity or the printing press, one that democratizes creativity for people without talent or resources. Even he, a self-described creative zero, can make videos now, he said onstage.
What he danced around is that OpenAI initially "let" rights holders opt out of having their work appear in Sora, which is not how copyright usually works. Then, after OpenAI noticed that people really liked using copyrighted images (of course they did), it "evolved" toward an opt-in model. That isn't really iterating. It's testing how much you can get away with. (And by the way, although the Motion Picture Association made some noise last week about legal threats, OpenAI seems to have gotten away with plenty.)
Of course, the situation evokes the broader grievance of publishers, who accuse OpenAI of training on their work without sharing the financial spoils. When I pressed Lehane about publishers being cut out of the economics, he invoked fair use, the American legal doctrine that's supposed to balance creators' rights against the public's access to knowledge. He called it the secret weapon of U.S. tech dominance.
Maybe so. But I recently interviewed Al Gore, Lehane's old boss, and realized that someone could simply ask ChatGPT about it instead of reading my piece on TechCrunch. "That's 'iterative,'" I said, "but it's also a replacement."
For the first time, Lehane dropped the spiel. "We're all going to have to figure this out," he said. "It's really glib and easy to sit here onstage and say we need to figure out new economic models. But I think we will." (We're making it up as we go, in short.)
Then there's the infrastructure question that nobody wants to answer honestly. OpenAI already operates a data center campus in Abilene, Texas, and recently broke ground on a massive data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane has likened access to AI to electricity, saying that those who got access to power last are still playing catch-up, yet OpenAI's Stargate project is seemingly targeting some of those same struggling places as sites for its facilities.
Asked during our sit-down whether these communities will benefit or just foot the bill, Lehane reached for gigawatts and geopolitics. OpenAI needs about a gigawatt of energy per week, he noted. China brought 450 gigawatts online last year, along with 33 nuclear facilities. If democracies want democratic AI, they have to compete. "The optimist in me says this will modernize our energy systems," he said, painting a picture of a re-industrialized America with transformed power grids.
It was stirring. But it wasn't an answer to whether people in Lordstown and Abilene will watch their utility bills climb while OpenAI generates videos of John F. Kennedy and the Notorious B.I.G. (Video generation is the most energy-intensive AI there is.)
That brought me to my most uncomfortable example. Zelda Williams had spent the day before our interview begging strangers on Instagram to stop sending her AI-generated videos of her late father, Robin Williams. "You're not making art," she wrote. "You're making disgusting, over-processed hotdogs out of the lives of human beings."
When I asked how the company reconciles this kind of intimate harm with its mission, Lehane answered by talking about process: responsible design, testing frameworks, and working with governments. "There's no playbook for these things, right?"
Lehane did show vulnerability in a few moments, saying that he wakes up at 3 a.m. every night, worried about democratization, geopolitics, and infrastructure. "There are enormous responsibilities that come with this," he said.
Whether or not those moments were engineered for the audience, I believed him. Indeed, I left Toronto thinking I had watched a master class in political messaging: Lehane threading an impossible needle while sidestepping questions about company decisions that, for all I know, he doesn't even agree with. Then Friday happened.
Nathan Calvin, a lawyer who works on AI policy at the nonprofit advocacy organization Encode AI, revealed that at the very moment I was talking with Lehane in Toronto, OpenAI had sent a sheriff's deputy to his home in Washington, D.C., during dinner to serve him a subpoena. It wanted his private messages with California legislators, college students, and former OpenAI employees.
Calvin accuses OpenAI of intimidation tactics around a new piece of California AI regulation, SB 53. He says the company weaponized its legal battle with Elon Musk as a pretext to target critics, implying that Encode was secretly funded by Musk. In fact, Calvin says, he fought OpenAI's opposition to SB 53, an AI safety bill, and when he saw the company claim it had "worked to improve the bill," he "literally laughed out loud." In a social media thread, he went on to call Lehane, specifically, the "master of the political dark arts."
In Washington, that might be a compliment. At a company whose stated mission is to "build AI that benefits all of humanity," it sounds like an indictment.
What matters even more is that OpenAI's own people are conflicted about what the company is becoming.
As my colleague Max reported last week, several current and former employees took to social media after Sora 2's release to air their concerns, among them Boaz Barak, an OpenAI researcher and Harvard professor, who wrote of Sora 2 that it is "technically amazing but it's premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes."
Then on Friday, Josh Achiam, OpenAI's head of mission alignment, tweeted something even more remarkable about the Calvin episode. Prefacing his comments by acknowledging that they were "possibly a risk to my whole career," Achiam went on to write of OpenAI: "We can't be doing things that make us into a frightening power instead of a virtuous one."
That's . . . something. An OpenAI executive publicly questioning whether his company is becoming "a frightening power instead of a virtuous one" isn't on the same level as a competitor taking shots or a reporter asking questions. This is someone who chose to work at OpenAI, who believes in its mission, and who is now acknowledging a crisis of conscience despite the professional risk.
It's a crystallizing moment. You can be the best political operative in tech, a maestro at navigating impossible situations, and still wind up working for a company whose actions increasingly conflict with its stated values, contradictions that will likely only intensify as OpenAI races toward artificial general intelligence.
It has me thinking that the real question isn't whether Chris Lehane can sell OpenAI's mission. It's whether others, including, critically, the other people who work there, still believe it.
