Will LLMs become the ultimate mediators for better and for worse? DeepMind researchers and Reddit users seem to agree on that

ChatGPT logo with circuitry in the background.
(Image credit: Shutterstock/Sir David)

AI experts believe large language models (LLMs) could serve a purpose as mediators in certain scenarios where agreements can’t be reached between individuals.

A recent study by researchers at Google DeepMind sought to explore the potential for LLMs to be used in this regard, particularly their ability to defuse incendiary disputes amid the contentious political climate around the world.

“Finding agreements through a free exchange of views is often difficult,” the study authors noted. “Collective deliberation can be slow, difficult to scale, and unequally attentive to different voices.”

Winning over the group

As part of the project, the team at DeepMind trained a series of LLMs dubbed ‘Habermas Machines’ (HM) to act as mediators. These models were trained specifically to identify common, overlapping beliefs between individuals on either end of the political spectrum.

Topics covered by the LLM included divisive issues such as immigration, Brexit, minimum wages, universal childcare, and climate change.

“Using participants’ personal opinions and critiques, the AI mediator iteratively generates and refines statements that express common ground among the group on social or political issues,” the authors wrote.

The project also saw volunteers engage with the model, which drew upon the opinions and perspectives of each individual on certain political topics.

Summarized documents on volunteer political views were then collated by the model, which provided further context to help bridge divides.

The results were very promising: the study revealed that volunteers rated statements made by the HM more highly than statements written by humans on the same issues.

Moreover, after volunteers were split into groups to further discuss these topics, researchers discovered that participants were less divided on these issues after reading statements from the HMs compared to human mediator documents.

“Group opinion statements generated by the Habermas Machine were consistently preferred by group members over those written by human mediators and received higher ratings from external judges for quality, clarity, informativeness, and perceived fairness,” researchers concluded.

“AI-mediated deliberation also reduced division within groups, with participants’ reported stances converging toward a common position on the issue after deliberation; this result did not occur when discussants directly exchanged views, unmediated.”

The study noted that “support for the majority position” on certain topics increased after AI-supported deliberation. However, the HMs “demonstrably incorporated minority critiques into revised statements”.

What this suggests, researchers said, is that during AI-mediated deliberation, the “views of groups of discussants tended to move in a similar direction on controversial issues”.

“These shifts were not attributable to biases in the AI, suggesting that the deliberation process genuinely aided the emergence of shared perspectives on potentially polarizing social and political issues.”

AI mediation in domestic disputes can be a tricky balancing act

There are already real-world examples of LLMs being used to solve disputes, particularly in relationships, with some Reddit users reporting they have turned to ChatGPT, for example.

One user reported that their partner turns to the chatbot “every time” they have a disagreement, and that this is causing friction.

“Me (25) and my girlfriend (28) have been dating for the past 8 months. We’ve had a couple of big arguments and some smaller disagreements recently,” the user wrote. “Each time we argue my girlfriend will go away and discuss the argument with ChatGPT, even doing so in the same room sometimes.”

Notably, the user found that on these occasions their partner would “come back with a well constructed argument” breaking down everything said or done during their previous argument.

It’s this aspect of the situation that has caused significant tension.

“I’ve explained to her that I don’t like her doing so as it can feel like I’m being ambushed with thoughts and opinions from a robot,” they wrote. “It’s nearly impossible for a human being to remember every small detail and break it down bit by bit, but AI has no issue doing so.”

“Whenever I've voiced my upset I've been told that ‘ChatGPT says you’re insecure’ or ‘ChatGPT says you don’t have the emotional bandwidth to understand what I’m saying’.”



Ross Kelly is News & Analysis Editor at ITPro, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape.
