We could say goodbye to ChatGPT weirdness thanks to Nvidia

A robot hand reaching towards a yellow sign that reads "Caution" in black letters
(Image credit: Getty Images)

Nvidia is the tech giant behind the GPUs that power our games, run our creative suites, and - as of late - play a crucial role in training the generative AI models behind chatbots like ChatGPT. The company has dived deeper into the world of AI with the announcement of new software that could solve a big problem chatbots have - going off the rails and being a little…strange.

The newly announced ‘NeMo Guardrails’ is a piece of software designed to ensure that smart applications powered by large language models (LLMs), like AI chatbots, are “accurate, appropriate, on topic and secure”. Essentially, the guardrails are there to weed out inappropriate or inaccurate information generated by the chatbot, stop it from reaching the user, and inform the bot that the specific output was bad. It adds an extra layer of accuracy and security - without the need for user correction.

The open-source software can be used by AI developers to set up three types of boundaries for AI models: topical, safety and security guardrails. We’ll break down the details of each - and why this sort of software is both a necessity and a liability.
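For readers curious what a guardrail actually looks like in practice, Nvidia's NeMo Guardrails project defines rails in Colang, a simple modeling language bundled with the toolkit. The snippet below is a minimal sketch of a topical rail that steers a bot away from politics; the example utterances, flow name, and bot response are illustrative, not taken from any shipped configuration.

```
# A hypothetical topical rail in Colang: the example utterances help the
# system recognise the topic, and the flow defines how the bot responds.

define user ask about politics
  "what do you think about the election?"
  "which party should I vote for?"

define bot refuse to discuss politics
  "I'm sorry, I can't comment on political topics."

define flow politics rail
  user ask about politics
  bot refuse to discuss politics
```

A config like this sits alongside the chatbot application; at runtime the library matches user input against the defined flows, so an off-limits question triggers the canned refusal instead of whatever the underlying LLM might have generated.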

What are the guardrails?

Morality Police 

Nvidia says that virtually all software developers can use NeMo Guardrails, since it is simple to use and works with a broad range of LLM-enabled applications, so we should hopefully start seeing it appear in more chatbots in the near future.

This is not only an integral ‘update’ on the AI front, it’s also incredibly impressive. Software dedicated to monitoring and correcting models like ChatGPT, governed by strict guidelines from developers, is the best way to keep things in check without users having to do it themselves.

That being said, as there are no firm governing guidelines, we are beholden to the morality and priorities of developers rather than being driven by actual wellness concerns. Nvidia, as it stands, seems to have users’ safety and protection at the heart of the software, but there is no guarantee those priorities won’t change, or that developers using the software won’t have different moral guidelines or concerns.

Muskaan Saxena
Computing Staff Writer

Muskaan is TechRadar’s UK-based Computing writer. She has always been a passionate writer and has had her creative work published in several literary journals and magazines. Her debut into the writing world was a poem published in The Times of Zambia, on the subject of sunflowers and the insignificance of human existence in comparison.

Growing up in Zambia, Muskaan was fascinated with technology, especially computers, and she's joined TechRadar to write about the latest GPUs, laptops and, more recently, anything AI-related. If you've got questions, moral concerns or just an interest in anything ChatGPT or general AI, you're in the right place.

Muskaan also somehow managed to install a game on her work MacBook's Touch Bar, without the IT department finding out (yet).