Approach to AI models needs to be specialized


AI has been all the rage in 2023. Online, at conferences, in articles like this, you can’t get away from the subject. But AI has been around for a while. So, going beyond the hype and the headlines, what is behind the sudden emergence of AI as the concern for businesses around the world?

We’ve reached a critical mass of global connectivity, and the computing power now available has driven the rise of massive datasets. With extreme computing power, extreme networking, and large datasets (such as those used to train the best large language models, or LLMs), AI has moved into the mainstream. It is now both more available and more necessary, which is why there’s so much hubbub around it.

And the hubbub seems to go beyond the normal clamor when a new technology arrives on the scene. AI looks set to shape all aspects of the future, not just what it means to do business but also what it means to be human.

These are the big, esoteric questions behind AI. But what does it all mean in practice, in the day-to-day?

Underpinning AI are, as I said, vast amounts of data. Managing this constant downpour of data has become one of the biggest information challenges for businesses to overcome. And while interacting with AI may seem simple from the user’s perspective, it involves many sophisticated technologies working together behind the scenes: big data, natural language processing (NLP), machine learning (ML) and more. But integrating this componentry, ethically and effectively, requires expertise, strategy, and insight.


Specialized vs generalized: Making the most of AI

The most high-profile AI tools, such as ChatGPT or Bard, are examples of generalized AI. These work by ingesting datasets from publicly available sources – i.e., the entirety of the internet – and processing that data to turn it into output that appears plausible to humans.

But the problem with using generalized AI models in business is that they are subject to the same inaccuracies and biases that we’ve become accustomed to with the internet more broadly.

That’s why, for maximum impact, businesses should not rely on generalized AI models. Instead, leveraging specialized AI models is the most effective way to manage the data deluge that comes with AI. Specialized AI tools are like generalized ones in that they’re also LLMs. The big difference is that they are trained on specialized data, which is verified by subject matter experts before it’s fed into the LLM.
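As a rough, hypothetical sketch of that verification step (the record fields and the supply-chain examples below are my own illustration, not a description of any particular product), the idea is simply that nothing reaches the training corpus unless a subject matter expert has signed it off:

```python
# Hypothetical sketch: only records explicitly approved by a subject matter
# expert (SME) make it into the corpus used to train a specialized LLM.
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    domain: str
    approved_by_sme: bool  # set during expert review, not by the pipeline

def build_training_corpus(records: list[Record], domain: str) -> list[str]:
    """Keep only expert-verified records for the target domain."""
    return [r.text for r in records if r.domain == domain and r.approved_by_sme]

corpus = build_training_corpus(
    [
        Record("Advance ship notices confirm shipment contents...", "supply-chain", True),
        Record("Unverified forum post about shipping delays...", "supply-chain", False),
    ],
    domain="supply-chain",
)
print(len(corpus))  # 1 -- only the expert-verified record survives
```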

Specialized AI algorithms can, therefore, analyze, understand, and output content that can be trusted for specialist accuracy. This kind of capability is crucial to avoiding the kind of pitfalls we’ve seen so far with generalized AI, such as lawyers including inaccurate, ChatGPT-supplied information in legal filings. But the question remains: how can companies best manage the huge amounts of data created when taking a specialized approach to AI?

Managing the data deluge with specialized AI models

Any successful approach will involve effective strategies for data collection, storage, processing, and analysis. As with any technology project, defining clear objectives and governance policies is key. But the quality of data is arguably even more important. The old adage of ‘garbage in, garbage out’ applies here; the success of any specialized AI model relies on the quality of data, so companies must implement data validation and cleaning processes.
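To make ‘garbage in, garbage out’ concrete, a minimal validation and cleaning pass might look like the following sketch in Python with pandas; the column names and rules are illustrative assumptions rather than a prescribed schema:

```python
# Illustrative validation and cleaning pass run before data reaches the model.
import pandas as pd

REQUIRED_COLUMNS = {"document_id", "text", "source"}  # assumed schema

def validate_and_clean(df: pd.DataFrame) -> pd.DataFrame:
    """Fail fast on schema problems, then remove obvious 'garbage' rows."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"Dataset is missing required columns: {missing}")

    cleaned = (
        df.drop_duplicates(subset="document_id")  # no duplicate records
          .dropna(subset=["text"])                # no empty content fields
          .assign(text=lambda d: d["text"].str.strip())
    )
    # Drop documents that end up empty after trimming whitespace.
    return cleaned[cleaned["text"].str.len() > 0].reset_index(drop=True)
```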

Data storage infrastructure, lifecycle management, integration across systems and version control must also be considered and planned for prior to deployment of a specialized AI model. Ensuring all of this is in place will help companies better handle the large volumes of data generated at the other end, with continuous monitoring also required to assess the performance of the model.
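As a simple illustration of what that continuous monitoring can mean in practice (the metric, threshold, and scores below are assumed values, not recommendations), a team might track an evaluation score over time and flag when it drifts below an agreed baseline:

```python
# Illustrative monitoring check: flag when evaluation accuracy drifts below a
# baseline agreed at deployment time. The numbers here are assumed, not advice.
from statistics import mean

BASELINE_ACCURACY = 0.90   # target agreed before deployment
ALERT_MARGIN = 0.05        # tolerated dip before a human review is triggered

def needs_review(recent_scores: list[float]) -> bool:
    """True when the rolling average falls too far below the baseline."""
    return mean(recent_scores) < BASELINE_ACCURACY - ALERT_MARGIN

weekly_scores = [0.91, 0.86, 0.82, 0.78]  # e.g. from a periodic evaluation run
if needs_review(weekly_scores):
    print("Model performance has drifted: trigger human review or retraining.")
```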

But companies must also consider AI ethics here, just as they would with generalized AI. Specialized AI models can be prone to domain-specific biases, and what is considered ethical in one industry may not be in another, requiring judicious use of any specialized AI output. Specialized LLMs may also find it hard to understand nuanced or context-specific aspects of language, which could lead to misinterpretation of input and the generation of inappropriate or inaccurate outputs.

This complexity of course dictates that human input and continuous monitoring are key. But it also reinforces the importance of both departmental and industry collaboration in ensuring any use of AI is both ethical and effective. Data and knowledge sharing can be a key step in improving the quality of underlying data and, when done right, can also help to keep that data secure.

Ultimately, as AI becomes more and more integrated into our daily work and lives, we are going to need to develop processes to deal with its output in a scalable and ethical way. Partnership and collaboration lie at the heart of doing so, especially with a technology that impacts so many of us simultaneously.




Mark Morley, Senior Director, Product Marketing, OpenText.