Chris Lehane is one of the best in the business at making bad news disappear. As Al Gore's press secretary during the Clinton years and Airbnb's in-house crisis manager through every regulatory nightmare from here to Brussels, he knows how to spin. Now he is two years into what may be his most impossible job yet: as OpenAI's vice president of global policy, his task is to convince the world that OpenAI is serious about democratizing artificial intelligence, even as the company increasingly behaves like every other tech giant that has long claimed to be different.
I spent 20 minutes with him on stage at the Elevate conference in Toronto earlier this week, trying to get past the talking points and into the real contradictions undermining OpenAI's carefully constructed image. It wasn't easy, and I didn't entirely succeed. Lehane is genuinely good at his job. He's likable. He sounds reasonable. He concedes uncertainty. He even talks about waking up at 3 a.m. wondering whether any of this will actually benefit humanity.
But good intentions don't mean much when your company subpoenas its critics, drains water and electricity from economically depressed towns, and resurrects dead celebrities to assert market dominance.
The company's Sora problem sits at the root of it all. The video-generation tool, released last week, appears to include copyrighted characters essentially intact. That was a bold move for a company already being sued by The New York Times, the Toronto Star, and half the publishing industry. It was also great business and marketing. OpenAI CEO Sam Altman said the invite-only app has rocketed to the top of the App Store as people create digital versions of themselves, of copyrighted characters like Pikachu and Cartman from "South Park," and of dead celebrities like Tupac Shakur.
When I asked what motivated OpenAI's decision to release a version of Sora that includes these characters, Lehane said Sora is a "general purpose technology," like the printing press, that democratizes creativity for people who lack talent or resources. Lehane, a self-described creative zero, said on stage that even he can now make videos.
Sora originally "allowed" rights holders to opt out of having their work used, which is not how copyright typically works. Then, after OpenAI noticed how much people loved using copyrighted images, it "evolved" toward an opt-in model. That's not iteration; that's testing how much you can get away with. (The Motion Picture Association, for its part, made noises last week about legal action, but so far OpenAI appears to have gotten away with quite a lot.)
Unsurprisingly, the whole situation is a galling reminder for publishers, who accuse OpenAI of training on their work without sharing the financial spoils. When I pressed Lehane about publishers being cut out of the economics, he reached for fair use, the American legal doctrine that balances creators' rights against the public's access to knowledge. He called it the secret weapon of America's technological supremacy.
Perhaps. But I recently interviewed Lehane's old boss, Al Gore, and it occurred to me that instead of reading my article on TechCrunch, anyone could simply ask ChatGPT about it. "It's 'iterative,'" I said, "but it's also replacement."
Lehane listened and, for a moment, dropped the script. "We're all going to have to figure this out," he said. "It's really glib and easy to sit here on stage and say we need to come up with new economic revenue models, but I think we will." (We're making it up as we go, in other words.)
Then there are the infrastructure questions no one wants to answer honestly. OpenAI already operates a data center campus in Abilene, Texas, and recently broke ground on a massive data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane likens the arrival of AI to the advent of electricity, saying the communities that got it last are still playing catch-up; yet OpenAI's Stargate project appears to be targeting some of those same economically challenged regions, siting facilities that consume enormous amounts of water and electricity.
On stage, when I asked whether these communities would benefit or just foot the bill, Lehane talked gigawatts and geopolitics. OpenAI needs about a gigawatt of energy every week, he noted, while China brought 450 gigawatts and 33 nuclear facilities online last year. If democracies want democratic AI, he said, they have to compete. "The optimist in me says this will modernize our energy system," he said, painting a picture of a reindustrialized America with a transformed power grid.
It was a stirring vision, but it wasn't an answer to whether people in Lordstown and Abilene will watch their utility bills climb while OpenAI generates videos of The Notorious B.I.G. (Video generation, for what it's worth, is the most energy-intensive form of AI.)
There are human costs, too. They came into sharper focus the day before our interview, when Zelda Williams went on Instagram and begged strangers to stop sending her AI-generated videos of her late father, Robin Williams. "You're not making art," she wrote. "You're making disgusting, over-processed hot dogs out of human lives."
When I asked how the company reconciles its mission with this kind of intimate harm, Lehane answered by talking about process: responsible design, testing frameworks, government partnerships. "There's no playbook for this, right?" he said.
Lehane showed vulnerability at one point, saying he recognizes the "significant responsibility that comes with" everything OpenAI does.
Whether or not that moment was engineered for the audience, I took him at his word. In fact, I left Toronto thinking I'd witnessed a masterclass in political messaging: Lehane threading an impossible needle while dodging questions about company decisions that, for all I know, he doesn't even agree with. Then news broke that made an already complicated picture even more so.
Nathan Calvin, an AI policy lawyer at the nonprofit advocacy group Encode AI, revealed that around the same time I was speaking with Lehane in Toronto, OpenAI had sent a sheriff's deputy to his home in Washington, D.C., to serve him a subpoena during dinner. OpenAI wanted his personal messages with California state legislators, college students, and former OpenAI employees.
Calvin said the move was part of OpenAI's intimidation tactics around California's new AI safety law, SB 53. He said the company weaponized its ongoing legal battle with Elon Musk as a pretext to target its critics, insinuating that Encode was secretly funded by Musk. Calvin, who fought OpenAI's opposition to the bill, added that he "literally laughed out loud" when he saw the company claim it had "worked to improve" SB 53. In a social media thread, he went further, calling Lehane specifically the "master of the political dark arts."
In Washington, that might count as a compliment. At a company whose stated mission is "to build AI that benefits all of humanity," it reads like an indictment.
More important, even OpenAI's own employees seem conflicted about what the company is becoming.
As my colleague Max reported last week, a number of current and former employees took to social media to voice concerns after Sora 2's release. Among them was Boaz Barak, an OpenAI researcher and Harvard professor, who wrote that Sora 2 is "technically amazing, but it's premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes."
Then on Friday, Josh Achiam, OpenAI's head of mission alignment, tweeted something even more remarkable about Calvin's accusations. Prefacing his comments with the acknowledgment that speaking up was "possibly a risk to my whole career," Achiam wrote of OpenAI: "We can't be doing things that make us into a frightening power instead of a virtuous one. We have a duty to and a mission for all of humanity, and the bar to pursue that duty is remarkably high."
That's worth sitting with. An OpenAI executive publicly questioning whether his company is becoming a frightening power rather than a virtuous one is not the same as a competitor taking shots or a reporter raising questions. This is someone who chose to work at OpenAI, who believes in its mission, and who is reckoning with a crisis of conscience despite the career risk.
It's a crystallizing moment, and its contradictions are only likely to intensify as OpenAI races toward artificial general intelligence. It also has me thinking that the real question isn't whether Chris Lehane can sell OpenAI's mission. It's whether other people (including, critically, the people who work there) still believe it.