Google’s amazing new photo AI brings light to darkness and much, much more
Google MultiNeRF adds new tools to outperform other deep raw denoisers
Photographers may soon be able to effectively ‘see in the dark’ after Google Research added a new AI noise reduction tool to its MultiNeRF project.
The RawNeRF tool works directly from raw image files, using artificial intelligence to recover far more detail (and far fewer unsightly artifacts) from photos taken in darker conditions and low-light settings. According to the team behind the project, it outperforms dedicated raw denoising tools run on the same images.
“When optimized over many noisy raw inputs, NeRF produces a scene representation so accurate that its rendered novel views outperform dedicated single and multi-image deep raw denoisers run on the same wide baseline input images,” the researchers explained in a paper published on the arXiv preprint server.
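To get a feel for why optimizing one scene representation over many noisy raw inputs behaves like a denoiser, here is a toy NumPy sketch - not from the paper, with made-up numbers - showing that fitting a single value to N independent noisy captures of the same point shrinks the noise by roughly a factor of the square root of N:

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 0.02            # dim scene: a small linear raw intensity (hypothetical)
read_noise_sigma = 0.01      # per-capture sensor noise (hypothetical)
num_views = 64               # number of noisy raw captures of the same scene point

# Simulate N noisy raw observations of the same scene point.
observations = true_value + rng.normal(0.0, read_noise_sigma, size=num_views)

# The least-squares fit over all observations is simply their mean.
estimate = observations.mean()

print(f"single-capture noise ~ {read_noise_sigma:.4f}")
print(f"multi-view estimate error = {abs(estimate - true_value):.4f}")
print(f"expected improvement ~ 1/sqrt(N) = {1 / np.sqrt(num_views):.2f}x")
```

RawNeRF's actual training is far more involved, but the same averaging effect across many viewpoints is what lets it pull a clean scene out of very noisy captures.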
What is NeRF?
NeRF is a view synthesizer - a tool that analyzes a collection of photographs taken from many different viewpoints and reconstructs an accurate 3D render of the scene, which can then be viewed from angles that were never photographed.
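In very rough terms, a NeRF is a small neural network that maps a 3D position and viewing direction to a colour and a density, and new views are produced by compositing those predictions along camera rays. The NumPy sketch below is purely illustrative - the network, parameters, and sample counts are placeholders, not Google's MultiNeRF code:

```python
import numpy as np

def nerf_mlp(positions, directions, params):
    """Toy stand-in for the NeRF network: maps 3D points and view
    directions to (RGB colour, volume density). A real NeRF uses a
    positionally encoded MLP trained on the input photographs."""
    hidden = np.tanh(np.concatenate([positions, directions], axis=-1) @ params["w1"])
    rgb = 1.0 / (1.0 + np.exp(-(hidden @ params["w_rgb"])))   # colours in [0, 1]
    density = np.log1p(np.exp(hidden @ params["w_sigma"]))    # non-negative density
    return rgb, density.squeeze(-1)

def render_ray(origin, direction, params, num_samples=64, near=0.1, far=4.0):
    """Volume-render one camera ray by compositing colour and density
    at sample points along the ray - the core of view synthesis."""
    t = np.linspace(near, far, num_samples)
    points = origin + t[:, None] * direction
    dirs = np.broadcast_to(direction, points.shape)
    rgb, sigma = nerf_mlp(points, dirs, params)

    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))        # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                      # opacity per sample
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * transmittance                           # compositing weights
    return (weights[:, None] * rgb).sum(axis=0)               # final pixel colour

# Random, untrained parameters purely to make the sketch runnable.
rng = np.random.default_rng(0)
params = {
    "w1": rng.normal(size=(6, 32)),
    "w_rgb": rng.normal(size=(32, 3)),
    "w_sigma": rng.normal(size=(32, 1)),
}
pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), params)
print(pixel)  # one rendered pixel colour
```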
According to Ben Mildenhall, one of the project researchers, NeRF works best with well-lit, low-noise photographs. In other words, it’s built for daytime shooting.
Low-light and night shoots proved problematic: details disappear into shadow, and raising the brightness in post only amplifies the noise. Mildenhall and the team found that existing denoising tools can reduce that noise somewhat, but at the cost of image quality.
With the advent of RawNeRF, artificial intelligence is set to quieten the noise without stripping away the detail - effectively letting shutterbugs ‘see in the dark’.
In a video demonstration, NeRF in the Dark - originally published in May 2022 and largely unnoticed at the time - Mildenhall shows a scene captured on a cell phone and lit only by candlelight. RawNeRF is “able to combine images taken from many different camera viewpoints to jointly denoise and reconstruct the scene,” the Google researcher explains.
Reconstructed images are rendered in a linear HDR color space, letting users further manipulate angles, exposures, tonemapping, and focus. In his video, Mildenhall notes how varying each of these together “creates an atmospheric effect that can bring attention to different regions of the scene.”
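Because the output lives in a linear HDR colour space, exposure and tonemapping become simple operations applied after the fact. A hedged Python/NumPy sketch of what that kind of manipulation might look like - the function names, tone curve, and test image are illustrative assumptions, not RawNeRF's actual code:

```python
import numpy as np

def adjust_exposure(linear_hdr, stops):
    """Change exposure in linear HDR space: each stop doubles or halves the light."""
    return linear_hdr * (2.0 ** stops)

def tonemap_srgb(linear_hdr):
    """Map linear values to a displayable image using the standard sRGB curve."""
    x = np.clip(linear_hdr, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1.0 / 2.4) - 0.055)

# Illustrative linear HDR render (stand-in for the output of a trained model).
rng = np.random.default_rng(0)
hdr_render = rng.random((4, 4, 3)) * 0.05   # a very dark, candlelit-style image

brightened = tonemap_srgb(adjust_exposure(hdr_render, stops=+3))  # brighten by 3 stops
print(brightened.min(), brightened.max())
```

Working in linear HDR is what makes this possible: because the reconstruction preserves the full range of light values, brightening an image after the fact reveals detail rather than just amplifying baked-in noise.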
While still in the research phase and not an officially supported Google product (yet), RawNeRF offers a tantalizing glimpse of how AI could help creatives better reflect the world around them.