YouTube bans videos that promote superiority and discrimination
Finally looking to define and stamp out hate speech
Google is engaged in an ongoing battle against misuse of its YouTube video platform, revising its policies on what content is allowed on the service on a semi-regular basis.
The latest amendment to what’s known as YouTube’s “hate speech policy” specifically prohibits “videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status.”
The YouTube blog post announcing this policy change points to “videos that promote or glorify Nazi ideology” as a specific example of content that will be banned, calling such material “inherently discriminatory”.
In addition, videos featuring “content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place” will also be removed.
Here’s how Google will penalize users found to have content on their channels that violates the hate speech policy:
“If your content violates this policy, we’ll remove the content and send you an email to let you know. If this is the first time you’ve posted content that violates our Community Guidelines, you’ll get a warning with no penalty to your channel.”

“If it’s not, we’ll issue a strike against your channel. Your channel will be terminated if you receive 3 strikes. If we think your content comes close to hate speech, we may limit YouTube features available for that content.”
Ongoing struggle
Google first announced a tougher stance on terrorist, hateful and discriminatory content in 2017, when the internet giant grappled with the reality of operating a free and open platform while still monitoring it for harmful activity.
Since then, Google has steadily tightened its rules on the type of content allowed on its platforms, such as YouTube, and stepped up its efforts to moderate such media.
Today, the moderation strategy rests on four measures: removing content that explicitly breaches its policies; reducing the spread of “borderline” content (which could contain harmful misinformation, for instance) by demoting it; raising authoritative voices by promoting them in recommended videos; and rewarding trusted creators with monetization.
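To make those four prongs concrete, here is a minimal Python sketch that maps hypothetical classification labels to the actions described above; the labels, actions and function are assumptions for illustration, not YouTube's actual system or API.

```python
# Illustrative only: maps the four moderation prongs described above to
# actions. The labels are assumptions, not YouTube's real categories.

MODERATION_ACTIONS = {
    "violates_policy": "remove the video outright",
    "borderline":      "demote it in recommendations to reduce its spread",
    "authoritative":   "promote it in recommendations to raise its reach",
    "trusted_creator": "keep the channel eligible for monetization",
}

def moderate(label: str) -> str:
    """Look up the moderation action for a hypothetical classifier label."""
    return MODERATION_ACTIONS.get(label, "no action")

for label in MODERATION_ACTIONS:
    print(f"{label}: {moderate(label)}")
```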