LinkedIn says if you share fake or false AI-generated content, that's on you

In this photo illustration, the logo of LinkedIn, the business and employment-oriented network and platform owned by Microsoft, is displayed on a smartphone with an artificial intelligence (AI) chip and symbol in the background.
(Image credit: Photo Illustration by Budrul Chukrut/SOPA Images/LightRocket via Getty Images)

LinkedIn is placing the responsibility for misleading or inaccurate information generated by its own AI tools on the users who share it, rather than on the tools themselves.

A November 2024 update to its Service Agreement will hold users accountable for sharing any AI-generated misinformation that violates its Professional Community Policies.

Since no one can guarantee that the content generative AI produces is truthful or correct, companies are covering themselves by putting the onus on users to moderate the content they share.

Inaccurate, misleading, or not fit for purpose

The update follows in the footsteps of LinkedIn's parent company Microsoft, which earlier in 2024 updated its terms of service to remind users not to rely too heavily on AI services and to address the AI's limitations, advising that it is ‘not designed, intended, or to be used as substitutes for professional advice’.

LinkedIn will continue to provide features which can generate automated content, but with the caveat that it may not be trustworthy.

“Generative AI Features: By using the Services, you may interact with features we offer that automate content generation for you. The content that is generated might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes," the updated passage will read.

The new policy reminds users to double-check any information and make edits where necessary to adhere to community guidelines:

“Please review and edit such content before sharing with others. Like all content you share on our Services, you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information.”

The social network site is probably expecting its genAI models to improve in future, especially since it now uses user data to train its models by default, requiring users to opt out if they don’t want their data used.

There was significant backlash against this move, as GDPR concerns clash with generative AI models across the board, but the recent policy update suggests the models still need a fair bit of training.

Via The Register

Ellen Jennings-Trace
Staff Writer

Ellen has been writing for almost four years, with a focus on post-COVID policy, whilst studying for a BA in Politics and International Relations at the University of Cardiff, followed by an MA in Political Communication. Before joining TechRadar Pro as a Junior Writer, she worked for Future Publishing’s MVC content team, working with merchants and retailers to upload content.
