Exclusive: Lots of us still really don't trust content written by AI
Many of us still find it hard to tell what is AI and what isn't
A majority of people remain wary of AI-generated content, are often unsure when they are seeing it, and want to know when it is presented to them.
OnePulse surveyed 1,000 people for TechRadar Pro regarding AI-produced content to see how often they are exposed to it and how they feel about it.
Over half of our respondents said that they wanted content written by AI - such as news, reviews and features - to be clearly labelled as such, and when asked how often they had come across AI-produced content, a third said they weren't sure.
Exposure and transparency
Another third said that they were exposed to such content every day, and a fifth encountered it every month. Only 8.7% said that they hadn't come across AI-produced content recently.
In considering what constituted AI-produced content, most felt it had to be the work of artificial intelligence either entirely or to a large degree (43.1% and 47.4% respectively). Only 9.5% thought it had to have only minimal input from AI.
Given how much AI-generated content people are exposed to, it's no wonder they want it confirmed, especially when popular AI tools such as ChatGPT can write content in various styles that can be indistinguishable from that crafted by humans.
What's more, AI has been known to get things wrong. This is yet another reason why people want to know when the content they read is AI-generated. Their judgment and trust in it will alter depending on that fact, just as people naturally judge a piece of content's validity based on its source and who wrote it.
Popular tech site CNET recently started experimenting with an AI engine to write certain articles on its site, causing outrage as it didn't disclose the fact transparently. The articles it produced also contained some pretty basic errors.
The popular chatbot ChatGPT fares no better when it comes to factual accuracy. Its fumbles so far include getting basic geographical facts wrong and giving coding advice so erroneous that its answers were banned altogether from Stack Overflow.
Of the other opinions concerning AI content, a fifth of respondents said they were fine with it and didn't care whether or not its creator was disclosed, and only 7.5% thought it was superior to human writing and should be encouraged. 17%, on the other hand, were firmly against it, saying autogenerated content should be banned as they felt it crossed an acceptable line.
- If you fancy having a go, here are our best AI writers to use right now
Lewis Maddison is a Reviews Writer for TechRadar. He previously worked as a Staff Writer for our business section, TechRadar Pro, where he had experience with productivity-enhancing hardware, ranging from keyboards to standing desks. His area of expertise lies in computer peripherals and audio hardware, having spent over a decade exploring the murky depths of both PC building and music production. He also revels in picking up on the finest details and niggles that ultimately make a big difference to the user experience.