Why you almost certainly have a shadow AI problem
Battling the rising Shadow AI epidemic

Even the next-door neighbor's dog knows not to click a link in an unsolicited email, but how many of us really understand how to use AI safely?
In short, shadow AI is the use of unapproved AI in an organization, a close cousin of shadow IT, which covers unapproved devices and services. Where shadow IT might mean an employee using a personal email address or laptop for work, shadow AI refers to the use of AI technology that hasn’t been approved for a business use case, particularly where it may constitute a risk to the business.
These risks include the leakage of sensitive or proprietary data, a common issue when employees upload documents to an AI service such as ChatGPT and their contents become available to users outside the company. It can also lead to serious data quality problems, where incorrect information retrieved from an unapproved AI source goes on to drive bad business decisions.
Generative AI is well known for its potential to hallucinate, giving plausible but ultimately incorrect information. Google’s AI summaries at the top of search results are a familiar example of this going wrong. A reader with the contextual knowledge to recognize that a summary may be wrong will catch it; the uninitiated won’t.
Analysts at Datactics have, on several occasions, seen a leading AI tool produce a fictitious LEI (legal entity identifier, required for regulatory reporting) and fictitious annual revenue figures for a corporate entity. The potential consequences of this kind of hallucination should be obvious, but because the ‘bad data’ is so plausible, it can easily slip into systems and cause further unexpected downstream problems, highlighting the need for robust data quality controls and data provenance.
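To make that concrete, here is a minimal sketch of the kind of rule a data quality pipeline could apply to catch malformed identifiers before they enter a system. LEIs carry two check digits validated with the ISO/IEC 7064 MOD 97-10 scheme, so a basic structural check is straightforward; the function name and usage are illustrative rather than a prescribed implementation.

```python
import re

def is_valid_lei(lei: str) -> bool:
    """Check an LEI's structure and ISO/IEC 7064 MOD 97-10 check digits."""
    lei = lei.strip().upper()
    # 18 alphanumeric characters followed by 2 numeric check digits
    if not re.fullmatch(r"[A-Z0-9]{18}[0-9]{2}", lei):
        return False
    # Convert letters to numbers (A=10 ... Z=35); the whole value mod 97 must be 1
    digits = "".join(str(int(c, 36)) for c in lei)
    return int(digits) % 97 == 1

print(is_valid_lei("HELLO"))  # False: wrong length and format
```

A check like this catches malformed or mistyped identifiers, but a hallucinated LEI can still be syntactically valid, which is why provenance, such as verification against the GLEIF register, matters as much as format validation.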
There are technical, economic and cultural reasons for the rise of shadow AI, from cultural normalization and easy accessibility to pressure to perform, information overload and the aggressive embedding of AI into everything. There is very little resistance to these drivers, and most organizations have neither comprehensive AI governance in place nor AI awareness training.
What is AI governance, and doesn’t this solve the problem?
Part of the remit of AI governance is to address the problem of shadow AI. There is a plethora of governance policy frameworks and tech platforms that can help with this, and perhaps this governance and risk mitigation is partly to blame for slowing the adoption of AI, as businesses cautiously vet third-party solutions.
But in the race between AI capability and AI governance, capability is accelerating and shows no signs of fatigue, and its benefits are obvious to end users. AI governance, by comparison, is still putting on its running shoes, and users aren’t always clear on what does and does not constitute risk.
AI governance covers a broad spectrum, from the ad-hoc mandate of “please do not upload corporate or client information to public AI services” to governance tools and strict policies prohibiting AI usage without prior approval. Many vendors now offer AI governance tools and frameworks to enable this, and the trick is to implement something that provides a high degree of protection without stifling innovation or productivity, calibrated to the size and type of the business.
How to address the problem of shadow AI
Looking at the dimensions of people, process and technology gives a holistic way to address shadow AI that minimizes the risks to organizations.
Many companies are now addressing the information leakage issue by implementing a technical architecture called RAG (retrieval augmented generation), in which a language model, large or small, is augmented with proprietary data in a way that keeps that data securely within the organization, with the added benefit of reducing AI hallucinations.
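As a rough illustration of the retrieval step, the sketch below keeps proprietary documents in a local store, finds the passages most relevant to a question and assembles them into a prompt for an internally hosted model. The sample documents, the TF-IDF stand-in for embedding search and the prompt wording are all assumptions for illustration; production RAG systems typically use embedding models and a vector database.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Proprietary documents stay inside the organization's own infrastructure.
documents = [
    "Internal note: Acme Ltd's 2023 revenue figures are held in the finance data mart.",
    "Client onboarding requires a verified LEI before trading can begin.",
    "Internal policy: customer data must not leave the approved hosting region.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query
    (TF-IDF here stands in for a production embedding search)."""
    matrix = TfidfVectorizer().fit_transform(documents + [query])
    scores = cosine_similarity(matrix[len(documents)], matrix[:len(documents)]).flatten()
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(question: str) -> str:
    """Augment the question with retrieved context before sending it
    to a model hosted within the organization."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using only the context below. If the answer is not in the "
        f"context, say so.\n\nContext:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("What is required before a client can start trading?"))
```

Because the documents and the assembled prompt never leave the organization’s boundary, the proprietary data stays secure while still grounding the model’s answers.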
Specific to shadow AI, businesses can implement controls and detection, usually by extending existing cybersecurity controls, for example firewall or proxy server rules, or single sign-on for approved third-party AI services. Furthermore, if these controls are integrated with governance, a much clearer picture of risk exposure can be achieved.
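As a simple illustration of the detection side, the sketch below scans a proxy log export for traffic to well-known AI endpoints and summarizes it per user. The CSV column names, the file name and the domain watchlist are assumptions; in practice this would be adapted to whatever proxy, firewall or SIEM tooling the organization already runs.

```python
import csv
from collections import Counter

# Illustrative watchlist; a real deployment would maintain this centrally
# and feed matches into existing governance and risk reporting.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per (user, domain) for domains on the AI watchlist.
    Assumes a CSV export with 'user' and 'host' columns."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in scan_proxy_log("proxy_export.csv").most_common():
        print(f"{user} -> {host}: {count} requests")
```

A report like this turns anecdotal suspicion of shadow AI into measurable exposure that governance teams can act on, without blocking legitimate, approved usage.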
Perhaps most importantly, there needs to be greater cultural awareness of the risks of AI. In the same way that we run cybersecurity training for all staff, we need to strive for a reality where even the next-door neighbor's dog understands that AI is now embedded in a wide range of software, and understands the risks that come with it: the danger of divulging sensitive data to AI services, the possibility of hallucination and censorship in AI responses, and the importance of treating an AI response as data that informs an answer rather than as an infallible answer.
Data quality awareness is crucial. The information that goes into an AI model and the information that comes out of it must both be validated, and this is a mindset we need to adopt sooner rather than later.

















