The race for high-quality AI-generated videos is heating up.
On Monday, Runway, a company that builds generative AI tools aimed at film and image content creators, introduced Gen-3 Alpha. The company’s latest AI model generates video clips from text descriptions and still images. Runway says the model offers a “significant” improvement in generation speed and fidelity over Runway’s previous video model, Gen-2, as well as fine-grained control over the structure, style and movement of the videos it creates.
Gen-3 will be available in the coming days to Runway subscribers, including corporate customers and creators in Runway’s Creative Partner Program.
“Gen-3 Alpha excels at creating expressive human characters with a wide range of actions, gestures and emotions,” Runway wrote in a post on its blog. “It was designed to interpret a wide range of styles and cinematic terminology [and enable] imaginative transitions and precise framing of elements in the scene.”
Gen-3 Alpha has its limitations, including the fact that its clips max out at 10 seconds in length. However, Runway co-founder Anastasis Germanidis promises that Gen-3 is just the first, and the smallest, of several video generation models to come in a family of next-generation models trained on upgraded infrastructure.
“The model can struggle with complex character and object interactions, and generations don’t always follow the laws of physics precisely,” Germanidis told TechCrunch in an interview this morning. “This initial rollout will support 5- and 10-second high-resolution generations, with noticeably faster generation times than Gen-2. A 5-second clip takes 45 seconds to generate, and a 10-second clip takes 90 seconds.”
Gen-3 Alpha, like all video generation models, was trained on a vast number of example videos and images so that it could “learn” the patterns in those examples and generate new clips. Where did the training data come from? Runway wouldn’t say. Few generative AI vendors volunteer such information these days, in part because they see training data as a competitive advantage and so keep it, and the details around it, close to the chest.
“We have an in-house research team that oversees all of our training, and we use curated, in-house datasets to train our models,” Germanidis said. He left it at that.
Training data details are also a potential source of IP-related lawsuits if a vendor trained on public data, including copyrighted material from the web, which is another disincentive to disclose much. Several cases making their way through the courts reject vendors’ fair use defenses for training data, arguing that generative AI tools reproduce artists’ styles without the artists’ permission and let users create new works that resemble artists’ originals, for which the artists receive no payment.
Runway somewhat addressed the copyright issue, saying it consulted with artists in developing the model. (Which artists? It’s not clear.) This echoes what Germanidis told me during a fireside at TechCrunch’s Disrupt 2023 conference:
“We’re working closely with artists to figure out the best approaches to address this,” he said. “We are exploring various data partnerships to be able to develop further … and create the next generation of models.”
Runway also says it plans to release Gen-3 with a new set of safeguards, including a moderation system to block attempts to generate videos from copyrighted images and content that conflicts with Runway’s terms of service. Also in the works is a provenance system, compliant with the C2PA standard backed by Microsoft, Adobe, OpenAI and others, to identify videos as coming from Gen-3.
“Our new and improved in-house visual and text moderation system uses automatic moderation to filter out inappropriate or harmful content,” Germanidis said. “C2PA authentication verifies the provenance and authenticity of media created with all Gen-3 models. As the model’s capabilities and ability to generate high-fidelity content grow, we will continue to invest significantly in our alignment and safety efforts.”


Runway also revealed that it is partnering with “leading entertainment and media organizations” to create custom versions of Gen-3 that allow for more “stylistically controlled” and consistent characters, targeting “specific artistic and narrative requirements.” The company adds: “This means characters, backgrounds and generated assets can maintain a consistent look and behavior across different scenes.”
A major unsolved problem with video generation models is control: getting a model to generate consistent video that aligns with a creator’s artistic intentions. As my colleague Devin Coldewey recently wrote, matters that are trivial in traditional filmmaking, such as choosing a color for a character’s clothing, require workarounds with generative models because each shot is created independently of the others. Sometimes even the workarounds don’t do the trick, leaving extensive manual work for editors.
Runway has raised over $236.5 million from investors including Google (with which it has cloud compute credits) and Nvidia, as well as VCs such as Amplify Partners, Felicis and Coatue. The company has aligned itself closely with the creative industry as its investment in generative AI technology grows. Runway operates Runway Studios, an entertainment division that acts as a production partner for enterprise customers, and it hosts the AI Film Festival, one of the first events dedicated to showcasing films produced entirely, or in part, by AI.
But the competition is getting tougher.


Generative AI startup Luma last week announced Dream Machine, a video generation model that has gone viral for its ability to create memes. And just a few months ago, Adobe revealed that it’s developing its own video generation model, trained on content in the Adobe Stock media library.
Elsewhere, there are incumbents like OpenAI’s Sora, which remains tightly gated, though OpenAI has partnered with marketing agencies and indie and Hollywood filmmakers. (OpenAI CTO Mira Murati was in attendance at the 2024 Cannes Film Festival.) This year’s Tribeca Festival, which also has a partnership with Runway to curate films made with AI tools, featured short films produced with Sora by directors who were given early access.
Google has also put its video generation model, Veo, in the hands of select creators, including Donald Glover (aka Childish Gambino) and his creative agency Gilga, as it works to bring Veo to products like YouTube Shorts.
However the various partnerships shake out, one thing is becoming clear: AI-powered video creation tools threaten to upend the film and television industry as we know it.


Director Tyler Perry recently said that he put a planned $800 million expansion of his production studio on hold after seeing what Sora could do. Joe Russo, the director of Marvel films such as “Avengers: Endgame,” predicts that within a year, artificial intelligence will be able to create a complete movie.
A 2024 study commissioned by the Animation Guild, a union representing Hollywood animators and cartoonists, found that 75% of film production companies that have adopted artificial intelligence reduced, consolidated or eliminated jobs after incorporating the technology. The study also estimates that by 2026, more than 100,000 U.S. entertainment jobs will be disrupted by generative AI.
Strong labor protections will be sorely needed to ensure that video generation tools don’t follow in the footsteps of other generative AI technology and lead to steep declines in demand for creative work.