Google Bard content should be fact-checked, recommends current Google VP

Google Bard on phone
(Image credit: Mojahid Mottakin/Unsplash)

If you need any more reason to be skeptical of generative AI, look no further than a recent BBC interview with Debbie Weinstein, Vice President of Google UK. She recommends people use Google Search to fact-check content generated by the Bard AI.

Weinstein says in the interview that Bard should be considered more of an “experiment” better suited for “collaboration around problem solving” and “creating new ideas”. It seems Google didn’t really intend for the AI to be used as a resource for “specific information”. Besides fact-checking any information offered by Bard, she suggests using the thumbs up and thumbs down buttons at the bottom of generated content to give feedback and help improve the chatbot. As the BBC points out, Bard’s homepage states that “it has limitations and won’t always get it right”, but it doesn’t repeat Ms. Weinstein’s advice to double-check results via Google Search.

On one hand, Debbie Weinstein is giving some sound advice. Generative AIs have a massive problem when it comes to getting things right. They hallucinate, meaning a chatbot may come up with totally false information when generating text to fit a prompt. This issue even got two lawyers from New York in trouble after they used ChatGPT in a case and presented “fictitious legal research” that the AI had cited.

So it's certainly not a bad idea to double-check whatever Bard says. However, considering these comments are coming from a vice president of the company, it's a little concerning.

Analysis: So, what's the point?

The thing is, Bard is essentially a fancy search engine. One of its main functions is to be “a launchpad for curiosity”: a resource for factual information. The main difference between Bard and Google Search is that the former is easier to use. It’s a lot more conversational, plus the AI offers important context. Whether Google likes it or not, people are going to use Bard to look things up.

What’s particularly strange about Weinstein’s comments is that they contradict the company’s own plans for Bard. During I/O 2023, we saw all the different ways the AI model could enhance Google Search, from providing in-depth results on a topic to even creating a fitness plan. Both of these use cases, and more, require factual information to work. Is Weinstein saying this update is all for naught since it uses Google’s AI tech?

While it's just one person from Google asserting this on the record (so far), she is a vice president. If you’re not supposed to use the chatbot for important information, then why is it being added to the search engine as a way to further enhance it? Why implement something that's apparently untrustworthy?

It’s a strange statement, and one that we hope is not echoed throughout the company. Generative AI is here to stay, after all, and it’s important that we can trust it to output accurate information. We reached out to the tech giant for comment and will update this story when we hear back.

Cesar Cadenas
Contributor

Cesar Cadenas has been writing about the tech industry for several years now, specializing in consumer electronics, entertainment devices, Windows, and the gaming industry. He’s also passionate about smartphones, GPUs, and cybersecurity.
