Trust in AI and the future of information gathering


Companies today operate in a world defined by information overload. The figures are mind-boggling. In 2010, the global datasphere totaled 2 trillion gigabytes (2 zettabytes). By 2020, it had expanded to 64 zettabytes, and by 2026, the International Data Corporation (IDC) expects it to reach 221 zettabytes.

With approximately 252,000 new websites being created every day, it is becoming increasingly difficult for knowledge workers to locate the right resources and information needed to make informed decisions.

Paul Teather

CEO of AMPLYFI.

Solving the information overload problem

To solve this problem, many knowledge workers have begun leveraging generative AI platforms as a new information-finding resource. It’s easy to see why – when provided with a simple query, AI can quickly provide answers, removing the need for knowledge workers to endlessly scour search engine results to no avail. And so it’s no surprise that the launch of OpenAI’s new search platform, ChatGPT Search, was highly anticipated, touted by many as a competitor that could truly take on Google as the primary resource for knowledge workers.

However, while the potential is there, significant concerns about the use of generative AI as an information gathering tool remain. Yes, these new tools are fast, scalable and cheap. Yet, in a very human way, they can also lie. In their eagerness to respond, many AI platforms hallucinate information, which has already caused several notable, high-profile blunders.

In multiple incidents, lawyers using ChatGPT have been found to cite non-existent legal cases – something that can have significant consequences, from case dismissals and fines to a broader erosion of trust in the legal system. Elsewhere, an NYC chatbot was found to be providing incorrect and illegal information and advice to business owners.

As a result, skepticism over the use of AI rightly remains. In fact, in a survey of 1,000 business decision makers, we found that over three quarters (78%) of knowledge workers say popular generative AI models like ChatGPT are eroding people’s trust in AI.

While these are powerful tools for consumers, they’re simply not built to drive effective decision making in the business world, as these high-profile blunders have shown.

Instead, many business leaders continue to put their faith in the trusty search engine, the second most trusted information-gathering method behind official and third-party reports. In fact, almost three quarters of decision makers (72%) never or rarely go past the first page of a search engine when seeking information, showing just how big an influence search engines have on decision making.

Four steps to improving trust in AI as an information gathering tool

Such is the degree of trust in Google that the Department of Justice is considering breaking up the tech giant’s monopoly as an antitrust remedy, driving debate over where people can and should source information. However, if AI is to rival Google, we need to build trust in it, finding ways to eliminate key issues such as hallucinations.

I personally see a future in which generative AI will augment our knowledge, advise on potential choices, interrogate our thoughts to expose weaknesses in our thinking, and even make decisions autonomously. But for it to do any of these things, we first need to trust it wholly – from the content that trains it, to the references it uses and analyses it applies.

Can we bridge the gap that currently exists, and turn AI into a viable tool for supporting effective, trusted decision making? To even begin to do so, it is critical that several steps are taken:

1 – Craft effective inputs to guide AI responses

The first step is to ensure we are guiding AI in the right way. By providing clear context and specific instructions, using examples to demonstrate desired output formats and implementing constraints to limit unwanted responses, we can reduce the scope for ambiguity and misinterpretation and boost output relevance and accuracy.
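In practice, this structuring can be as simple as assembling the context, examples and constraints into an explicit template before anything is sent to a model. A minimal sketch, assuming a hypothetical analyst scenario (the question, report and example answers here are purely illustrative):

```python
# Minimal sketch of structured prompt construction. The scenario, context
# and example Q&A pairs are hypothetical placeholders, not a real API call.
def build_prompt(question, context, examples, constraints):
    """Assemble a prompt with explicit context, few-shot examples,
    and constraints to reduce ambiguity in the model's response."""
    parts = [f"Context: {context}", "Examples:"]
    for q, a in examples:
        parts.append(f"Q: {q}\nA: {a}")
    parts.append("Constraints: " + "; ".join(constraints))
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

prompt = build_prompt(
    question="What was our Q3 churn rate?",
    context="You are an analyst answering from the attached report only.",
    examples=[("What was Q2 revenue?", "Q2 revenue was $4.1M (report, p.3).")],
    constraints=["cite the report page", "answer 'unknown' if not in the report"],
)
```

The constraints line is doing the real work here: telling the model it may answer "unknown" is one of the simplest ways to shrink the space in which it is tempted to invent an answer.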

2 – Retrieve relevant information from external knowledge bases

Second, it’s also important to leverage relevant information from external sources to guide more effective outputs. By integrating up-to-date, curated information sources into the input process, ideally through efficient retrieval mechanisms, we can both increase factual accuracy and benefit from verifiable sources for generated content.
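The retrieval step can be sketched in a few lines. Real systems use vector search over embeddings; the word-overlap scoring and the tiny document store below are stand-ins purely for illustration:

```python
# Toy retrieval-augmented sketch: rank documents by word overlap with the
# query. Word overlap is a stand-in for real vector search, and the
# document store below is illustrative only.
def retrieve(query, documents, k=2):
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

docs = [
    "The 2024 audit found no compliance issues.",
    "Quarterly revenue grew 12% year on year.",
    "Headcount remained flat across 2024.",
]
top = retrieve("What did the 2024 audit find?", docs, k=1)
# The retrieved snippets would then be prepended to the prompt, so the
# model answers from citable sources rather than from memory alone.
```

Because the answer is grounded in retrieved snippets, the output can carry a verifiable citation back to the source document, which is exactly the audit trail that business decision making requires.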

3 – Guide AI to break down complex problems with reasoning processes

Third, it’s possible to assist AI in solving complex problems with the right processes. By prompting the AI to show its work or explain its reasoning, encouraging intermediate steps in problem solving, and implementing self-correction mechanisms, we can improve logical consistency.
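One way to picture this is a prompt that demands numbered steps plus a final answer, paired with a simple self-correction check. The prompt wording and the mocked model reply below are assumptions for illustration, not output from a real model:

```python
import re

# Sketch of prompting for intermediate reasoning steps plus a simple
# self-correction check. model_reply is a mocked response for illustration.
COT_PROMPT = (
    "Solve the problem step by step, numbering each step, "
    "then give a line 'Final answer: <number>'.\n\nProblem: {problem}"
)

def check_final_answer(reply):
    """Self-correction hook: confirm the stated final answer restates
    the result of the last numbered step."""
    steps = re.findall(r"Step \d+: .*?= (\d+)", reply)
    final = re.search(r"Final answer: (\d+)", reply)
    return bool(steps and final) and steps[-1] == final.group(1)

model_reply = (
    "Step 1: 12 units * 3 crates = 36\n"
    "Step 2: 36 + 4 spares = 40\n"
    "Final answer: 40"
)
```

A reply whose final answer contradicts its own working, or that skips the working entirely, fails the check and can be regenerated, which is the self-correction loop in its simplest form.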

4 – Implement self-awareness and self-evaluation capabilities

We can also develop mechanisms for the AI to assess its own confidence levels and recognize where knowledge gaps exist. Doing so can help encourage the AI to provide caveats or qualifications with its outputs, serving to enhance transparency into AI certainty and limitations.
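At its simplest, this is a wrapper that asks for a self-reported confidence alongside each answer and attaches a caveat when confidence falls below a threshold. The model call is stubbed with a fixed answer and score here; a real system would elicit the confidence from the model itself:

```python
# Sketch of a self-evaluation wrapper. The model call is stubbed with a
# fixed (answer, confidence) pair for illustration; the threshold is an
# assumed, tunable value.
def answer_with_caveat(ask_model, question, threshold=0.75):
    """Return the model's answer, flagged with a caveat whenever its
    self-reported confidence falls below the threshold."""
    answer, confidence = ask_model(question)
    if confidence < threshold:
        answer += " [Low confidence: please verify against a primary source.]"
    return answer

def stub_model(question):
    # Placeholder standing in for a real model that returns a confidence score.
    return "The merger closed in 2019.", 0.4

result = answer_with_caveat(stub_model, "When did the merger close?")
```

Self-reported confidence is imperfect, but even a rough signal lets low-certainty answers arrive pre-flagged, which is precisely the transparency into AI certainty and limitations described above.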

If trust can be achieved, then the opportunity is massive

For AI to become an effective information gathering tool, it is vital that guardrails such as these are put in place to ensure that it can be trusted. To reiterate, decision makers are right to be wary of AI right now. Indeed, our survey shows that 80% have knowingly made a business decision based on information they were not sure about, with 88% of decision makers having discovered inaccuracies in information used for business decisions after the fact.

However, if current issues can be addressed, and the trust gap that currently exists can be bridged, then the opportunity for AI to excel in supporting knowledge workers is significant.

We’re talking about a powerful tool that can quickly answer queries. If the right mechanisms can be put in place to ensure those answers are credible, logical and accurate, then users will be able to source exactly the information they need at speed. Critically, 95% of decision makers believe that better access to information will improve decision making. By taking the right steps to ensure that AI becomes a trustworthy information gathering asset, the decision making process can be vastly accelerated for knowledge workers.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Paul Teather is the CEO of AMPLYFI.