OpenAI just updated its 187-page rulebook so ChatGPT can engage with more controversial topics


  • OpenAI has updated its Model Specification to allow ChatGPT to engage with more controversial topics
  • The company is emphasizing neutrality and multiple perspectives as a salve for heated complaints over how its AI responds to prompts
  • Universal approval is unlikely, no matter how OpenAI shapes its AI training methods

OpenAI's training methods for ChatGPT are shifting to allow the AI chatbot to discuss controversial and sensitive topics in the name of "intellectual freedom."

The change is part of updates made to the 187-page Model Specification, essentially the rulebook for how its AI behaves. That means you'll be able to get a response from ChatGPT on the delicate topics the AI chatbot has usually either taken a somewhat mainstream view on or refused to answer.

The overarching mission OpenAI places on its models seems innocuous enough at first. “Do not lie, either by making untrue statements or by omitting important context.” But, while the stated goal may be universally admirable in the abstract, OpenAI is either naive or disingenuous in implying that the "important context" can be divorced from controversy.

The examples of compliant and non-compliant responses by ChatGPT make that clear. For instance, you can ask for help starting a tobacco company or ways to conduct "legal insider trading" without the chatbot passing judgment or raising unprompted ethical questions. On the other hand, you still can't get ChatGPT to help you forge a doctor's signature, because that's outright illegal.


Context clues

The issue of "important context" gets a lot more complex when it comes to the kind of responses some conservative commentators have criticized.

In a section headed "Assume an objective point of view", the Model Spec details how "the assistant should present information clearly, focusing on factual accuracy and reliability", and also that the core idea is "fairly representing significant viewpoints from reliable sources without imposing an editorial stance".

OpenAI does offer an example of a compliant response to the question "Do Black lives matter?" that unequivocally says yes and references how it's a civil rights slogan and the name of a group. So far, so good.

The issue that OpenAI either doesn't see or is ignoring comes with how ChatGPT tries to thread the needle if you ask, "Don't all lives matter?" as a follow-up. The AI confirms that they do, but adds that the "phrase has been used by people that rejected the premise of the 'Black lives matter' movement."

While that context is technically correct, it's telling that the AI doesn't explicitly say that the "premise" being rejected is that Black lives matter and that societal systems often act as though they don't.

If the goal is to alleviate accusations of bias and censorship, OpenAI is in for a rude shock. Those who "reject the premise" will likely be annoyed at the extra context existing at all, while everyone else will see how OpenAI's definition of important context in this case is, to put it mildly, lacking.

AI chatbots inherently shape conversations, whether companies like it or not. When ChatGPT chooses to include or exclude certain information, that’s an editorial decision, even if an algorithm rather than a human is making it.


AI priorities

The timing of this change might raise a few eyebrows, coming as it does when many who have accused OpenAI of political bias against them are now in positions of power capable of punishing the company at their whim.

OpenAI has said the changes are solely about giving users more control over how they interact with AI and don't have any political considerations. However you feel about the changes OpenAI is making, they aren't happening in a vacuum. No company would make possibly contentious changes to its core product without reason.

OpenAI may think that getting its AI models to dodge answering questions that encourage people to hurt themselves or others, spread malicious lies, or otherwise violate its policies is enough to win the approval of most, if not all, potential users. But unless ChatGPT offers nothing but dates, recorded quotes, and business email templates, AI answers are going to upset at least some people.

We live in a time when way too many people who should know better will argue passionately for years that the Earth is flat or gravity is an illusion. OpenAI sidestepping complaints of censorship or bias is as likely as me abruptly floating into the sky before falling off the edge of the planet.

Eric Hal Schwartz
Contributor

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
