Forget Sora, Runway is the AI video maker coming to blow your mind

Runway Gen-3 Alpha (Image credit: RunwayML)

Artificial intelligence-powered video maker Runway has officially launched its new Gen-3 Alpha model after teasing its debut a few weeks ago. The Gen-3 Alpha video creator offers major upgrades in creating hyper-realistic videos from user prompts. It's a significant advancement over the Gen-2 model released early last year. 

Runway's Gen-3 Alpha is aimed at a range of content creators, including marketing and advertising groups. The startup claims to outdo the competition at handling complex transitions, key-framing, and human characters with expressive faces. The model was trained on a large video and image dataset annotated with descriptive captions, enabling it to generate highly realistic video clips. As of this writing, the company has not revealed the sources of its video and image datasets.

The new model is accessible to all users signed up on the RunwayML platform, but unlike Gen-1 and Gen-2, Gen-3 Alpha is not free. Users must upgrade to a paid plan, with prices starting at $12 per month per editor. This move suggests Runway is ready to professionalize its products now that it has had the chance to refine them with feedback from everyone who experimented with the free models.

Initially, Gen-3 Alpha will power Runway's text-to-video mode, allowing users to create videos using natural language prompts. In the coming days, the model's capabilities will expand to include image-to-video and video-to-video modes. Additionally, Gen-3 Alpha will integrate with Runway's control features, such as Motion Brush, Advanced Camera Controls, and Director Mode.

Runway stated that Gen-3 Alpha is only the first in a new line of models built for large-scale multimodal training. The end goal is what the company calls "General World Models," which will be capable of representing and simulating a wide range of real-world situations and interactions.

Video: Gen-3 Alpha: Available Now | Runway (YouTube)

AI Video Race

The immediate question is whether Runway's advancements can meet or exceed what OpenAI is doing with its attention-grabbing Sora model. While Sora promises one-minute-long videos, Runway's Gen-3 Alpha currently supports video clips of only up to 10 seconds. Despite this limitation, Runway is betting on Gen-3 Alpha's speed and quality to set it apart from Sora, at least until the company can extend the model, as planned, to produce longer videos.

The race isn't just about Sora. Stability AI, Pika, Luma Labs, and others are all eager to claim the title of best AI video creator. As the competition heats up, Runway's release of Gen-3 Alpha is a strategic move to assert a leading position in the market.


Eric Hal Schwartz
Contributor

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on generative AI products such as OpenAI's ChatGPT, Anthropic's Claude, Google Gemini, and other synthetic media tools. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.