ChatGPT and other AI chatbots aren't the problem - we are

A robot hand reaching towards a yellow sign that reads "Caution" in black letters
(Image credit: Getty Images)

I don't live in fear of AI. Instead, I fear people and what they will do with chatbots like ChatGPT and the best AI image-generators like Stable Diffusion, just as I fear what they'll do with any other powerful technology or object. Case in point, how some are using artificial intelligence and chatbots to generate fake images and stories and, naturally, present them as fact.

It's horrible, shameless, and dangerous.

Recently, the German magazine Die Aktuelle splashed across its front page what appeared to be a new and exclusive interview with former Formula 1 driver Michael Schumacher, who was injured in a 2013 skiing accident. It didn't take long for people to figure out that the interview was a total fabrication, one that may have fooled more readers than usual because Schumacher's "answers" were generated by an AI chatbot called Character.ai.

Character.ai is a ChatGPT rival that lets you talk to famous figures both dead and alive. While not quite in the same Large Language Model class as the Bards and Bing AIs of the world, it's smart enough to recreate conversations with a wide swath of historical people, and to do a pretty convincing job of it. 

What it isn't meant to be is accurate: it's available largely for entertainment, and its creators are careful to state that "Everything Characters say is made up! Don't trust everything they say or take them too seriously."

Naturally, someone had to abuse it.

In recent months, I've spent a lot of time with all the major chatbots, including ChatGPT, Google Bard, and Bing AI, asking them to create all manner of creative, silly, and mostly useless content. I take nothing at face value, knowing that there is still a tendency toward AI chatbot hallucinations (presenting fiction as fact).

What I didn't imagine is that anyone would start using content spit out by these AIs to share as original stories, articles, videos, and even music from existing and long-deceased artists. It's not just distasteful, but dangerous.

Again, the idea that this is somehow the fault of the AIs (which, unlike people, aren't capable of being at fault) is ludicrous. The fault lies, as always, in our humanity. People don't know how to responsibly use a thing, whether it's good, bad, or indifferent.

As a species, we are prone to excess, and whatever pleasurable thing we're offered, be it sex, money, drugs, or AI, we treat it as a clarion call to do more of it – a lot more of it.

We're currently so obsessed with the incredible capabilities of these chatbots that, like any addiction, we cannot stop using them. I admit, I have a habit of loading up ChatGPT (the GPT-3 version) and asking it a mix of important and ridiculous questions.

The other day I asked it to create a logline for a new TV series about a time-traveling mailman. It did a nice job, though I do wonder how much was sourced from other people's work. You just never know how much ChatGPT is inadvertently plagiarizing. At least this isn't important stuff (unless the show gets picked up 😁).

This week, I asked it to solve the climate crisis. ChatGPT's ideas about moving to a plant-based diet were interesting but, as people noted on Twitter, ChatGPT left out a lot of other real climate-altering factors. As always, I was careful to present this as ChatGPT's response and not use it, say, in my reporting.

Sure, I'm not a climate reporter, but what if I were? It would be insane for me to blend what I 'learned' from ChatGPT about climate change into a news story that is intended to inform millions of people.

That is my point. ChatGPT and other chatbots like it are not a substitute for real, original research and reporting. In journalism, we often say, "Go to the source." That means you find where the information or interview originated and/or conduct the interview yourself and use that as the foundation of your story.

Simply put, nothing an AI chatbot produces can be used in a serious way. Honestly, I do not care if the German magazine was yellow journalism at its finest and "no one cares." We all should care, because some journalist, and not necessarily a bad one, will eventually stumble on a story that stupidly re-reported those AI-generated quotes and may present them as fact.

Decades from now, Schumacher might be quoted as saying things he never actually said. Will we know the difference?

The point is, we have to draw a hard line now. AI chatbots are recreational tools that can help us build products, maybe write code, and offer insights and direction, but they cannot be trusted as sources. They just can't. So don't. Please.

Lance Ulanoff
Editor At Large

A 38-year industry veteran and award-winning journalist, Lance has covered technology since PCs were the size of suitcases and “on line” meant “waiting.” He’s a former Lifewire Editor-in-Chief, Mashable Editor-in-Chief, and, before that, Editor in Chief of PCMag.com and Senior Vice President of Content for Ziff Davis, Inc. He also wrote a popular, weekly tech column for Medium called The Upgrade.

Lance Ulanoff makes frequent appearances on national, international, and local news programs including Live with Kelly and Mark, the Today Show, Good Morning America, CNBC, CNN, and the BBC. 
