The race to launch world models is on, as AI-powered image and video production company Runway joins a growing number of startups and big tech companies by launching its first. Called GWM-1, the model works through frame-by-frame prediction, creating a simulation with an understanding of physics and how the world actually behaves over time, the company said.
A world model is an artificial intelligence system that learns an internal simulation of how the world works, so that it can reason, plan, and act without having to be trained on every possible real-life scenario.
Runway, which earlier this month released its Gen-4.5 video model, which beat both Google and OpenAI on the Video Arena leaderboard, said the GWM-1 world model is more “general” than Google’s Genie 3 and other competitors. The company pitches it as a model that can create simulations to train agents in fields as diverse as robotics and life sciences.
“To create a world model, we first needed to create a really great video model. We believe that teaching models to directly predict pixels is the right way to achieve general-purpose simulation. At a sufficient scale and with the right data, you can create a model that has a sufficient understanding of how the world works,” said the company’s CTO, Anastasis Germanidis, during the launch livestream.
Runway released three variants of the new world model, called GWM-Worlds, GWM-Robotics, and GWM-Avatars.
GWM-Worlds is an application of the model that lets users create interactive environments. Users define a scene via a prompt or a reference image, and as they explore the space, the model generates the world with an understanding of geometry, physics, and lighting. The company said the simulation runs at 24 fps and 720p resolution. Runway said that while Worlds could be useful for games, it’s also well positioned to teach agents how to navigate and behave in the physical world.
With GWM-Robotics, the company aims to train robots on synthetic data enriched with new parameters, such as changing weather conditions or obstacles. Runway says this method could also reveal when and how robots might violate policies and guidelines in different scenarios.
Under GWM-Avatars, Runway is also building realistic avatars to simulate human behavior. Companies like D-ID, Synthesia, Soul Machines, and even Google have worked to create human avatars that look real and work in areas like communication and education.
The company noted that Worlds, Robotics, and Avatars are technically separate models, but said it eventually plans to merge them all into one.
In addition to launching a new world model, the company is also updating its foundational Gen-4.5 model, released earlier in the month. The new update brings native audio and long-form, multi-shot capabilities to the model. The company said that with this update, users can create one-minute videos with character consistency, native dialogue, background audio, and composite shots from various angles. Users can also edit existing audio, add dialogue, and edit multi-shot videos of any length.
The Gen-4.5 update pushes Runway closer to competitor Kling’s all-in-one video suite, which was also released earlier this month, particularly around native audio and multi-shot storytelling. It also signals that video generation models are moving from prototypes to production-ready tools. Runway’s updated Gen-4.5 model is available to all paid-plan users.


The company said it will make GWM-Robotics available through an SDK, and added that it is in active conversations with several robotics companies and businesses about using GWM-Robotics and GWM-Avatars.
