Generative AI bias may be far worse than we thought. Here's what it'll take to fix it

a person in a suit with a digital scale in front of them
(Image credit: Shutterstock / Sansoen Saengsakaorat)

While bias in generative AI is a well-known phenomenon, the forms it takes can still surprise. TechCrunch recently tested Meta’s AI chatbot, which launched in April 2024 in over a dozen countries including India, and found an odd and disturbing trend.

When prompted to generate images of “Indian men,” the chatbot returned results in which the vast majority of the men wore turbans. Many Indian men do wear turbans (mainly practicing Sikhs), but the proportion is nowhere near what the AI depicts: according to the 2011 census, Sikhs make up about 3.4% of the population of Delhi, India’s capital, yet three to four out of every five generated men wore one.
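To put that mismatch in perspective, a quick back-of-the-envelope calculation using the article's own figures (the generated-image share of "three to four out of five" is a rough estimate, not a measured rate) shows the scale of the over-representation:

```python
# Rough comparison of the turban rate in generated images vs. the
# 2011 census Sikh share for Delhi cited above.
delhi_sikh_share = 0.034                      # ~3.4% of Delhi's population
generated_low, generated_high = 3 / 5, 4 / 5  # 60-80% of generated images

# How many times more often turbans appear in the output than in the
# cited population statistic.
factor_low = generated_low / delhi_sikh_share
factor_high = generated_high / delhi_sikh_share
print(f"Over-represented by roughly {factor_low:.0f}x to {factor_high:.0f}x")
```

Even granting that Delhi's Sikh share understates the nationwide number of turban-wearers, a gap of this size points to skewed training data rather than demographic reality.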

Unfortunately, this isn’t the first time generative AI has been caught up in a controversy related to race and other sensitive topics, and this is far from the worst example either. 

How far does the rabbit hole go?

In August 2023, Google’s SGE and Bard AI (the latter now called Gemini) were caught with their pants down arguing the ‘benefits’ of genocide, slavery, fascism, and more. SGE also named Hitler, Stalin, and Mussolini among the "greatest" leaders, with Hitler also making its list of "most effective leaders."

Later that year, in December 2023, there were multiple incidents involving AI, the most awful of which was Stanford researchers finding CSAM (child sexual abuse material) in the popular LAION-5B image dataset that many image-generation models train on. That study found more than 3,000 known or suspected CSAM images in the dataset. Stable Diffusion maker Stability AI, which uses that set, claims that it filters out any harmful images. But that claim is hard to verify: those images could easily have been surfaced by more benign queries for ‘child’ or ‘children.’

There’s also the danger of AI being used in facial recognition, especially by law enforcement. Countless studies have already shown clear bias in which races and ethnicities are arrested at the highest rates, regardless of whether any wrongdoing has occurred. Combine that with the human biases baked into AI training data and you have technology that could produce even more false and unjust arrests. It’s gotten to the point that Microsoft doesn’t want its Azure AI used by police forces.

It’s rather unsettling how quickly AI has taken over the tech landscape, and how many hurdles remain before it advances enough to be rid of these issues. But one could argue that these issues only arose in the first place because AI trains on virtually any dataset it can access without properly filtering the content. If we’re to address AI's massive bias, we need to start properly vetting its datasets, not only for copyrighted sources but for actively harmful material that poisons the information well.
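One common vetting technique is screening training images against a blocklist of known-harmful content hashes. The sketch below is hypothetical and simplified: production systems use perceptual hashes (such as Microsoft's PhotoDNA) that survive resizing and re-encoding, whereas a plain SHA-256 digest, used here for illustration, only catches byte-exact duplicates.

```python
import hashlib

# Hypothetical blocklist of hashes of known-harmful images, as would be
# supplied by a child-safety organization. Empty here for illustration.
KNOWN_BAD_HASHES: set[str] = set()

def is_allowed(image_bytes: bytes) -> bool:
    """Return False if the image's digest matches the blocklist."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest not in KNOWN_BAD_HASHES

def filter_dataset(images: list[bytes]) -> list[bytes]:
    """Keep only images that pass the blocklist check."""
    return [img for img in images if is_allowed(img)]
```

Exact-match hashing is cheap enough to run over billions of images, which is why it is usually the first line of defense; the harder problem, as the LAION-5B case shows, is that harmful material nobody has hashed yet passes straight through.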


Allisa James
Computing Staff Writer

Named by the CTA as a CES 2023 Media Trailblazer, Allisa is a Computing Staff Writer who covers breaking news and rumors in the computing industry, as well as reviews, hands-on previews, featured articles, and the latest deals and trends. In her spare time you can find her chatting it up on her two podcasts, Megaten Marathon and Combo Chain, as well as playing any JRPGs she can get her hands on.
