Google is trying to delete and bury explicit deepfakes from search results

A person holding an iPhone close to the camera with the Google search homepage displayed onscreen
(Image credit: Unsplash/Solen Feyissa)

Google is dramatically upping its efforts to combat the appearance of explicit images and videos created with AI in search results. The company wants to make it clear that AI-produced non-consensual deepfakes are not welcome in its search engine.

The images may be prurient or offensive in other ways, but regardless of the details, Google has a new approach to removing this kind of material, and to burying it far from page-one results when outright removal isn't possible. Notably, Google has experimented with using its own AI to generate images for search results, but those pictures don't include real people, let alone anything racy. Google partnered with experts on the issue, as well as people who have been targets of non-consensual deepfakes, to make its response system more robust.

Google has allowed individuals to request the removal of explicit deepfakes for a while, but the proliferation and improvement of generative AI image creators means there's a need to do more. The removal request system has been streamlined to make submitting requests easier and responses faster. When a request is received and confirmed as valid, Google's algorithms will also work to filter out similar explicit results related to the same person.

The victim won't have to manually comb through every variation of a search request that might pull up the content, either. Google's systems will automatically scan for and remove any duplicates of that image. And it won't be limited to one specific image file. Google will proactively put a lid on related content. This is particularly important given the nature of the internet, where content can be duplicated and spread across multiple platforms and websites. This is something Google already does when it comes to real but non-consensual imagery, but the system will now cover deepfakes, too.
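Google hasn't published how its duplicate scan works, but systems like this often rely on perceptual hashing, where near-identical images (resaves, slight crops, re-encodes) produce nearly identical bit strings, so duplicates can be caught without an exact byte match. A minimal average-hash sketch, purely illustrative and not Google's actual method, using toy grayscale grids in place of real image data:

```python
def average_hash(pixels):
    """Simple average-hash: one bit per pixel, set when the pixel is
    brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming_distance(h1, h2):
    """Count the bits that differ between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def is_near_duplicate(pixels_a, pixels_b, threshold=2):
    """Treat two images as duplicates if their hashes differ in at most
    `threshold` bits -- tolerant of small edits and re-encodings."""
    return hamming_distance(average_hash(pixels_a),
                            average_hash(pixels_b)) <= threshold

# A 3x3 grayscale "image", a lightly perturbed copy, and an unrelated one.
original  = [[10, 200, 10], [200, 10, 200], [10, 200, 10]]
tweaked   = [[12, 198, 11], [201, 9, 199], [10, 202, 12]]
different = [[200, 10, 200], [10, 200, 10], [200, 10, 200]]

print(is_near_duplicate(original, tweaked))    # small perturbation: match
print(is_near_duplicate(original, different))  # unrelated image: no match
```

Production systems use far more robust fingerprints, but the principle is the same: flag content whose fingerprint falls within a small distance of a known-bad image.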

The method also shares some similarities with recent efforts by Google to combat unauthorized deepfakes, explicit or otherwise, on YouTube. Previously, YouTube would simply label such content as AI-generated or potentially misleading; now, the person depicted (or their lawyer) can submit a privacy complaint, and YouTube gives the video's owner a couple of days to remove it before reviewing the complaint on its merits.

Deepfakes Buried Deep

Content removal isn’t 100% perfect, as Google well knows. That’s why the crackdown on explicit deepfakes in search results also includes an updated ranking system that pushes back against search terms likely to pull up explicit deepfakes. Google Search will now try to lower the visibility of explicit fake content, and of websites associated with spreading it, especially when a search includes someone’s name.

For instance, say you were searching for a news article about a specific celebrity whose deepfakes went viral and who is now testifying to lawmakers about the need for regulation. Google Search will attempt to surface those news stories and related coverage of the issue, not the deepfakes under discussion.

Google's not alone

Given the complex and evolving nature of generative AI and its potential for abuse, addressing the spread of harmful content requires a multifaceted approach, and Google is hardly unique in facing the issue or working on solutions. Explicit deepfakes have appeared on Facebook, Instagram, and other Meta platforms, and Meta has updated its policies in response, with its Oversight Board recently recommending that the company’s guidelines directly cover AI-generated explicit content and that its appeals process be improved.

Lawmakers are responding to the issue as well, with New York State’s legislature passing a bill that adds AI-generated non-consensual pornography to its “revenge porn” laws. At the national level this week, the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024 (NO FAKES Act) was introduced in the U.S. Senate to address both explicit content and the non-consensual use of deepfake visuals and voices. Similarly, Australia’s legislature is working on a bill to criminalize the creation and distribution of non-consensual explicit deepfakes.

Still, Google can already point to some success in combating explicit deepfakes. The company claims early tests of these changes have reduced the appearance of explicit deepfake images by more than 70%. Google hasn’t declared victory over explicit deepfakes quite yet, however.

“These changes are major updates to our protections on Search, but there’s more work to do to address this issue, and we’ll keep developing new solutions to help people affected by this content,” Google product manager Emma Higham explained in a blog post. “And given that this challenge goes beyond search engines, we’ll continue investing in industry-wide partnerships and expert engagement to tackle it as a society.”

Eric Hal Schwartz
Contributor

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.