Meta, Snapchat, and TikTok launch the new Thrive mental health initiative, and it's about time


Meta, Snapchat and TikTok are finally banding together to do something about the harmful effects of some of the content hosted on their platforms – and it’s about time.  

In partnership with the Mental Health Coalition, the three companies are using a program called Thrive, which is designed to flag and securely share information about harmful content – specifically content relating to suicide and self-harm.

A Meta blog post reads: “Like many other types of potentially problematic content, suicide and self-harm content is not limited to any one platform… That’s why we’ve worked with the Mental Health Coalition to establish Thrive, the first signal-sharing program to share signals about violating suicide and self-harm content. 

“Through Thrive, participating tech companies will be able to share signals about violating suicide or self-harm content so that other companies can investigate and take action if the same or similar content is being shared on their platforms. Meta is providing the technical infrastructure that underpins Thrive… which enables signals to be shared securely.”

When a participating company like Meta discovers harmful content on its app, it shares hashes – anonymized digital fingerprints of the pieces of content relating to self-harm or suicide – with the other participating companies, so they can check their own databases for the same content, which tends to spread across platforms. 
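To illustrate the idea, here's a minimal sketch of how a receiving platform might check shared hashes against its own content. This is purely hypothetical – it isn't Meta's Thrive code, and the exact-match SHA-256 hashing shown here is an assumption; real systems likely use perceptual hashing so that near-duplicates of an image or video also match.

```python
import hashlib
from typing import Iterable

def fingerprint(content: bytes) -> str:
    """Return an anonymized fingerprint of a piece of content (SHA-256, purely for illustration)."""
    return hashlib.sha256(content).hexdigest()

def find_matches(local_items: Iterable[tuple[str, bytes]], shared_hashes: set[str]) -> list[str]:
    """Return IDs of locally hosted items whose fingerprints match hashes shared by a partner platform."""
    flagged = []
    for item_id, data in local_items:
        if fingerprint(data) in shared_hashes:
            flagged.append(item_id)  # queue for review/enforcement rather than acting blindly
    return flagged

# Hashes received from a partner platform (hypothetical values for this sketch)
received = {fingerprint(b"known violating post")}

# This platform's own content database (also hypothetical)
local = [("post-123", b"known violating post"), ("post-456", b"harmless post")]

print(find_matches(local, received))  # ['post-123']
```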

Analysis: A good start


As long as there are platforms that rely on users uploading their own content, there will be users who break the rules and spread harmful messages online. That could mean grifters attempting to sell bogus courses, inappropriate content on channels aimed at kids, or content relating to suicide or self-harm. Accounts posting this kind of content are generally very good at skirting the rules and flying under the radar to reach their target audience, and it's often taken down too late. 

It’s good to see social media platforms – which use comprehensive algorithms and casino-like architecture to keep their users addicted and automatically serve up content they’ll engage with – actually taking some responsibility and working together. This sort of ethical cooperation between the most popular social media apps is sorely needed. However, this should just be the first step on the road to success. 

The problem with user-generated content is that it needs to be policed constantly. Artificial intelligence can certainly help to flag harmful content automatically, but some will still slip through – much of this content is nuanced, containing subtext that a human somewhere in the chain will need to view and flag up as harmful. I’ll certainly be keeping an eye on Meta, TikTok and other companies when it comes to their evolving policies on harmful content. 
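As a purely illustrative sketch of that human-in-the-loop idea (the classifier, scores, and thresholds below are all assumptions, not any platform's real moderation pipeline), automated systems might only act on clear-cut cases and route nuanced posts to human reviewers:

```python
def route_post(post_id: str, harm_score: float) -> str:
    """Route a post based on an assumed classifier score between 0.0 and 1.0."""
    if harm_score >= 0.95:
        return f"{post_id}: removed automatically"    # high-confidence violation
    if harm_score >= 0.40:
        return f"{post_id}: queued for human review"  # nuanced – a person needs to look
    return f"{post_id}: no action"

for pid, score in [("a1", 0.97), ("b2", 0.55), ("c3", 0.05)]:
    print(route_post(pid, score))
```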


Matt Evans
Fitness, Wellness, and Wearables Editor

Matt is TechRadar's expert on all things fitness, wellness and wearable tech. A former staffer at Men's Health, he holds a Master's degree in journalism from Cardiff and has written for brands like Runner's World, Women's Health, Men's Fitness, LiveScience and Fit&Well on everything from fitness tech and exercise to nutrition and mental wellbeing.

Matt's a keen runner, ex-kickboxer, not averse to the odd yoga flow, and insists everyone should stretch every morning. When he’s not training or writing about health and fitness, he can be found reading doorstop-thick fantasy books with lots of fictional maps in them.