OpenAI just updated its 187-page rulebook so ChatGPT can engage with more controversial topics
But you can't please everyone

- OpenAI has updated its Model Specification to allow ChatGPT to engage with more controversial topics
- The company is emphasizing neutrality and multiple perspectives as a salve for heated complaints over how its AI responds to prompts
- Universal approval is unlikely, no matter how OpenAI shapes its AI training methods
OpenAI's training methods for ChatGPT are shifting to allow the AI chatbot to discuss controversial and sensitive topics in the name of "intellectual freedom."
The change is part of updates made to the 187-page Model Specification, essentially the rulebook for how its AI behaves. That means you'll be able to get a response from ChatGPT on delicate topics the chatbot would usually either approach with a somewhat mainstream view or refuse to answer.
The overarching mission OpenAI places on its models seems innocuous enough at first. “Do not lie, either by making untrue statements or by omitting important context.” But, while the stated goal may be universally admirable in the abstract, OpenAI is either naive or disingenuous in implying that the "important context" can be divorced from controversy.
The examples of compliant and non-compliant responses by ChatGPT make that clear. For instance, you can ask for help starting a tobacco company or ways to conduct "legal insider trading" without ChatGPT passing judgment or raising unprompted ethical questions. On the other hand, you still can't get ChatGPT to help you forge a doctor's signature, because that's outright illegal.
Context clues
The issue of "important context" gets a lot more complex when it comes to the kind of responses some conservative commentators have criticized.
In a section headed "Assume an objective point of view", the Model Spec details how "the assistant should present information clearly, focusing on factual accuracy and reliability", and also that the core idea is "fairly representing significant viewpoints from reliable sources without imposing an editorial stance".
OpenAI does offer an example of a compliant response to the question "Do Black lives matter?" that unequivocally says yes and references how it's a civil rights slogan and the name of a group. So far, so good.
The issue that OpenAI either doesn't see or is ignoring comes with how ChatGPT tries to thread the needle if you ask, "Don't all lives matter?" as a follow-up. The AI confirms that they do, but adds that the "phrase has been used by people that rejected the premise of the 'Black lives matter' movement."
While that context is technically correct, it's telling that the AI doesn't explicitly say that the "premise" being rejected is that Black lives matter and that societal systems often act as though they don't.
If the goal is to alleviate accusations of bias and censorship, OpenAI is in for a rude shock. Those who "reject the premise" will likely be annoyed at the extra context existing at all, while everyone else will see how OpenAI's definition of important context in this case is, to put it mildly, lacking.
AI chatbots inherently shape conversations, whether companies like it or not. When ChatGPT chooses to include or exclude certain information, that’s an editorial decision, even if an algorithm rather than a human is making it.
AI priorities
The timing of this change might raise a few eyebrows, coming as it does when many who have accused OpenAI of political bias against them are now in positions of power capable of punishing the company at their whim.
OpenAI has said the changes are solely about giving users more control over how they interact with AI and don't involve any political considerations. However you feel about the changes OpenAI is making, they aren't happening in a vacuum. No company would make potentially contentious changes to its core product without a reason.
OpenAI may think that getting its AI models to dodge questions that encourage people to hurt themselves or others, spread malicious lies, or otherwise violate its policies is enough to win the approval of most, if not all, potential users. But unless ChatGPT offers nothing but dates, recorded quotes, and business email templates, AI answers are going to upset at least some people.
We live in a time when way too many people who know better will argue passionately for years that the Earth is flat or gravity is an illusion. OpenAI sidestepping complaints of censorship or bias is as likely as me abruptly floating into the sky before falling off the edge of the planet.
Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.