Google has a plan to save us from AI deepfakes during the US presidential race
Google to enforce AI transparency policy
Amid the rise in AI's popularity, Google has decided that political ads using artificial intelligence must clearly disclose when imagery or audio has been synthetically manipulated.
Campaigns that run AI-generated ads on YouTube and any other Google platform will have to display a prominent disclaimer that users are unlikely to miss, as reported by the Associated Press.
Experts have already been sounding the alarm about the need for widespread regulation and greater public awareness ahead of elections, and it seems they're not the only ones with concerns.
When and where the new policy will kick in
Google made this political ads policy update last week, and it will officially take effect in mid-November. Google also announced that it will adopt similar policies for campaign ads in time for elections in the European Union, India, South Africa, and other regions where it has a verification process in place.
AI-generated and falsified media clips have become an everyday occurrence in political media, and generative AI tools only accelerate the trend. Not only do these tools make it easier and faster to produce misinformation, they also enable bad actors to mimic a person's speech or appearance in photos and videos more realistically.
AI-generated video has already been used by the campaign of one of the current frontrunners for the Republican nomination in the US, Gov. Ron DeSantis of Florida.
DeSantis’ campaign put out an ad depicting his GOP opponent and Republican frontrunner, Donald Trump, warmly embracing Dr. Anthony Fauci, one of the chief medical experts who advised Trump during the COVID-19 pandemic. In a similar vein, the Republican National Committee (RNC) released a wholly AI-generated ad depicting what it imagines the future would look like under Joe Biden.
Looking at AI and deepfakes on a federal level
In an effort echoing Google’s new policies, the Federal Election Commission (FEC) has begun looking at regulating AI-generated ads such as ‘deepfakes’ (doctored videos and images of real people). Advocates say this should help steer voters away from misinformation. It’s easy to see how regulation of this sort could help - deepfakes can show political figures saying or doing things they never actually said or did.
Democratic Senator Amy Klobuchar is co-sponsoring legislation that would impose requirements similar to Google’s policy: potentially deceptive AI-generated political ads would have to include disclaimers acknowledging that fact. Sen. Klobuchar praised the company’s move in a statement on Google’s policy, but added that “we can’t solely rely on voluntary commitments.”
Multiple states have already passed or have begun discussing legislation to address deepfake technology.
This new policy does not ban all use of AI by political campaigns - there are notable exceptions for alterations that don’t change the substance of the advert, such as using AI tools for media editing and quality improvements. The policy will apply largely to YouTube, along with the rest of Google’s platforms and the third-party sites within Google’s ad display network.
What are other tech giants' policies?
As of this week, Google is still the only platform to put a policy like this in place, in what is probably a proactive effort. I expect other social media platforms will have to follow if their existing policies prove insufficient, especially if more widespread legislation comes into force.
Meta, parent company of Instagram and Facebook, doesn’t have an AI-specific policy but does have a general blanket policy against “faked, manipulated or transformed” audio and imagery used for misinformation purposes. TikTok bans political ads altogether. The Associated Press reached out to X (formerly Twitter) last week for comment on the issue, but it seems the X team is a little busy just keeping the platform from falling apart, and didn’t issue a comment.
This is concerning. Right now, it’s still very much a wild west when it comes to the use of AI for political gain. I very much appreciate any proactive efforts, even by tech companies, because to me, it shows they’re thinking about the future - and not just capturing audiences in the present.
Kristina is a UK-based Computing Writer interested in all things computing, software, tech, mathematics and science. Previously, she has written articles about popular culture, economics, and other miscellaneous topics.
She has a personal interest in the history of mathematics, science, and technology; in particular, she closely follows AI and philosophically-motivated discussions.