Google thinks a US Supreme Court case could radically change the internet

Equal Justice Under Law engraving above the entrance to the US Supreme Court Building (Image credit: Shutterstock / Bob Korn)

Google has warned that a ruling against it in an ongoing Supreme Court (SC) case could put the entire internet at risk by removing a key protection against lawsuits over content moderation decisions that involve artificial intelligence (AI).

Section 230 of the Communications Decency Act of 1996 currently offers a blanket ‘liability shield’ in regards to how companies moderate content on their platforms. 

However, as reported by CNN, Google argued in a legal filing that, should the SC rule in favour of the petitioners in Gonzalez v. Google, a case that revolves around YouTube’s algorithms recommending pro-ISIS content to users, the internet could become overrun with dangerous, offensive, and extremist content.

Automation in moderation

As part of an almost 27-year-old law, one already targeted for reform by US President Joe Biden, Section 230 wasn’t written with modern developments such as artificially intelligent algorithms in mind, and that’s where the problems start.

The crux of Google’s argument is that the internet has grown so much since 1996 that incorporating artificial intelligence into content moderation solutions has become a necessity. “Virtually no modern website would function if users had to sort through content themselves,” it said in the filing. 

“An abundance of content” means that tech companies have to use algorithms to present it to users in a manageable way, from search engine results to flight deals to job recommendations on employment websites. 

Google also noted that, under existing law, tech companies simply refusing to moderate their platforms is a perfectly legal way to avoid liability, but that this would put the internet at risk of becoming a “virtual cesspool”. 

The tech giant also pointed out that YouTube’s community guidelines expressly disavow terrorism, adult content, violence and “other dangerous or offensive content” and that it is continually tweaking its algorithms to pre-emptively block prohibited content. 

It also claimed that “approximately” 95% of videos violating YouTube’s violent extremism policy were automatically detected in Q2 2022.

Nevertheless, the petitioners in the case maintain that YouTube has failed to remove all ISIS-related content and, in doing so, has assisted “the rise of ISIS” to prominence. 

In an attempt to further distance itself from liability on this point, Google responded by saying that YouTube’s algorithms recommend content to users based on similarities between a piece of content and the content a user is already interested in.
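The filing doesn’t describe YouTube’s internals, but the general idea of similarity-based recommendation can be sketched in a few lines of Python. Everything here – the tag vectors, the cosine-similarity scoring, the example videos – is a hypothetical illustration, not Google’s actual system.

```python
# Hypothetical sketch: rank candidate videos by how similar their tag
# vectors are to videos the user has already watched (cosine similarity).
from math import sqrt


def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse tag-weight vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def recommend(watched: list[dict[str, float]],
              candidates: dict[str, dict[str, float]],
              k: int = 3) -> list[str]:
    """Return the k candidate titles most similar to anything already watched."""
    scored = {title: max(cosine(tags, w) for w in watched)
              for title, tags in candidates.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]


if __name__ == "__main__":
    watched = [{"cooking": 1.0, "baking": 0.5}]          # illustrative watch history
    candidates = {
        "Sourdough basics": {"baking": 1.0, "cooking": 0.3},
        "Car review": {"cars": 1.0},
        "Knife skills": {"cooking": 0.8},
    }
    print(recommend(watched, candidates, k=2))  # cooking/baking videos rank first
```

The point of the sketch is simply that such a system surfaces more of whatever a user already engages with, which is exactly the behaviour at issue in the case.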

This is a complicated case and, although it’s easy to subscribe to the idea that the internet has gotten too big for manual moderation, it’s just as convincing to suggest that companies should be held accountable when their automated solutions fall short. 

After all, if even tech giants can’t guarantee what’s on their own websites, users of filters and parental controls can’t be sure that they’re taking effective action to block offensive content.

Luke Hughes
Staff Writer

 Luke Hughes holds the role of Staff Writer at TechRadar Pro, producing news, features and deals content across topics ranging from computing to cloud services, cybersecurity, data privacy and business software.
