The Wizard of Oz is coming to the Las Vegas Sphere in 16K thanks to the power of Google DeepMind AI

A screenshot from the 1939 Wizard of Oz movie
(Image credit: MGM)

  • 1939 classic The Wizard of Oz is coming to the Las Vegas Sphere
  • Using the power of AI, Google is reimagining the film for the 16K spherical screen
  • The Wizard of Oz at The Sphere opens on August 28

The Wizard of Oz is coming to the Las Vegas Sphere, and it's all thanks to Google's incredible AI technology.

Following last week's announcement that the 1939 classic The Wizard of Oz is being reimagined for the 16K LED screen of Las Vegas' iconic spherical theater, with the show set to open on August 28, Google is now giving us a behind-the-scenes look at the magic behind the production.

While The Wizard of Oz was not the first film to be shot in color, it's often cited as one of the first movies to use color effectively, thanks to its vivid Technicolor palette and the contrast with the black-and-white of the film's Kansas scenes.

In its blog post, Google says: "Likewise, 'The Wizard of Oz' may not be the first film to be reconceptualized with AI, but it may soon be known for that, too."

This is a massive project that brings together teams from Google DeepMind, Google Cloud, Sphere Studios, Magnopus, and Warner Bros. Discovery, and it arrives off the back of the success of Wicked, which is set in the same world as The Wizard of Oz.

With the launch of Wicked: For Good set for November 2025, it's the perfect time to put eyes on the movie that inspired Elphaba and Glinda's epic two-part musical.

The production will use AI to showcase The Wizard of Oz in the "venue's 17,600-seat spherical space to create an immersive sensory experience," and Google says "generative AI will take center stage, alongside Dorothy, Toto and more munchkins than could ever fit in a multiplex."

The Wizard of Oz at The Sphere

(Image credit: Google)

How to turn a classic into a modern epic

Elphaba and Glinda looking at something magical off-camera in Universal's Wicked Part One movie

(Image credit: Universal Pictures)

Google's blog post on the work that has gone into bringing The Wizard of Oz to The Sphere is nothing short of mind-blowing.

The man behind the project, Buzz Hays, is the global lead for entertainment industry solutions at Google Cloud and a veteran Hollywood producer.

"We're starting with the original four-by-three image on a 35mm piece of celluloid — it's actually three separate, grainy film negatives; that's how they shot Technicolor," Hays says. "That obviously won't work on a screen that is 160,000 square feet. So we're working with Sphere Studios, Magnopus and visual effects artists around the world, alongside our AI models, to effectively bring the original characters and environments to life on a whole new canvas — creating an immersive entertainment experience that still respects the original in every way."

The Sphere has the highest-resolution screen in the world, which means The Wizard of Oz's grainy 1939 imagery would have been a serious problem for the experience. Luckily, the teams found a solution in Veo, Imagen, and Gemini, using an "AI-based 'super resolution' tool to turn those tiny celluloid frames from 1939 into ultra-ultra-high definition imagery that will pop inside Sphere."
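Google's Veo and Imagen tooling for this project isn't publicly available, so as a rough illustration of what AI super-resolution looks like in practice, here's a minimal sketch using an open diffusion-based upscaler from Hugging Face's diffusers library. The model ID, file path, and prompt are illustrative assumptions, not part of Google's actual pipeline.

```python
# Minimal sketch of AI super-resolution on a single scanned film frame.
# This uses an open diffusion upscaler as a stand-in; the model, path,
# and prompt are illustrative, not from the Sphere/Google pipeline.
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler",
    torch_dtype=torch.float16,
).to("cuda")

# A low-resolution scan of a single frame (hypothetical file).
frame = Image.open("oz_frame_scan.png").convert("RGB")

# A short text prompt steers the model toward film-like detail
# while it upscales the frame by 4x.
upscaled = pipe(
    prompt="1939 Technicolor film frame, fine grain, sharp detail",
    image=frame,
).images[0]

upscaled.save("oz_frame_4x.png")
```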

Following the upscaling, the teams perform a process called AI outpainting, which expands each scene of The Wizard of Oz to fill the far larger canvas of the massive screen. AI then generates elements of the performances and environments to fill out the new space, so the shots look and feel seamless.
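To give a sense of the idea, outpainting can be approximated with an open inpainting model: pad the frame onto a wider canvas, then mask only the new borders so the model fills them in while leaving the original pixels untouched. This is a conceptual sketch under those assumptions, not the method Sphere Studios or Google actually uses; the model ID, canvas size, and prompt are hypothetical.

```python
# Minimal sketch of AI outpainting: pad a frame to a wider canvas,
# then ask an inpainting model to generate only the new border region.
# Open-source stand-in; model, sizes, and prompt are illustrative.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

frame = Image.open("oz_frame_4x.png").convert("RGB").resize((512, 512))

# Place the original frame in the centre of a wider canvas.
canvas = Image.new("RGB", (768, 512), "black")
canvas.paste(frame, (128, 0))

# Mask: white where the model should generate (the new side borders),
# black where the original frame must be preserved.
mask = Image.new("L", (768, 512), 255)
mask.paste(Image.new("L", (512, 512), 0), (128, 0))

outpainted = pipe(
    prompt="Emerald City set extension, 1939 Technicolor matte painting style",
    image=canvas,
    mask_image=mask,
    width=768,
    height=512,
).images[0]

outpainted.save("oz_frame_wide.png")
```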

Keeping the soul of the original

While I wouldn't blame you for thinking this sounds like an AI-generated catastrophe, ruining a classic that shouldn't be messed with, Google emphasises that the team has kept the traditions of cinema at the forefront of every decision.

"In addition to old footage, the team scoured archives to build a vast collection of supplementary material, such as the shooting script, production illustrations, photographs, set plans and scores."

These materials were then used to train Veo and Gemini on the "specific details of the original characters, their environments and even elements of the production, like camera focal lengths for specific scenes."

"With far more source material than just the 102-minute film to work with, the quality of the outputs dramatically improved. Now, Dorothy’s freckles snap into focus, and Toto can scamper more seamlessly through more scenes."

John-Anthony Disotto
Senior Writer, AI

John-Anthony Disotto is TechRadar's Senior Writer, AI, bringing you the latest news on, and comprehensive coverage of, tech's biggest buzzword. An expert on all things Apple, he was previously iMore's How To Editor, and has a monthly column in MacFormat. He's based in Edinburgh, Scotland, where he worked for Apple as a technician focused on iOS and iPhone repairs at the Genius Bar. John-Anthony has used the Apple ecosystem for over a decade, and is an award-winning journalist with years of experience in editorial.
