GenAI and Shadow IT combine for serious security concerns


The explosive popularity of generative artificial intelligence is disrupting the business world as enterprises race to apply the transformative power of GenAI chatbots to supercharge their business processes.

Yet as more employees adopt generative AI tools such as ChatGPT and Copilot in their daily roles, they usually do so without a second thought for the wider security implications. IT teams struggle to monitor each new software instance, with limited visibility across sprawling networks of SaaS tools. Many AI projects that employees spin up go undetected by IT, exposing their organizations to the risks of shadow IT.

Shadow IT is the use of IT systems, devices, software, and services without explicit approval from the IT department. Most shadow IT is not introduced into an organization with malicious intent. Workers are burdened with a growing list of responsibilities in an ever-accelerating business market, so many of them turn to shadow IT to get their jobs done. Shadow IT is often easier to use than internal alternatives, comes with less red tape, or is a better fit for their style of work.

However, many IT teams are not prepared for the risks that these programs pose to network management and data security. Consider that 90% of employees who use insecure practices do so despite knowing that their actions will increase risks for their organizations, according to Gartner. And fully 70% of employees who use ChatGPT hide that use from their employers, according to a survey by Fishbowl.


Risky climate

In addition, 9% of workers have admitted to pasting their company data into ChatGPT, and an average company leaks confidential information to the chatbot hundreds of times each week, according to Cyberhaven. Data submitted to ChatGPT can then be used to train the underlying model, meaning confidential information could resurface in responses to other users.

In this risky climate, budgets for generative AI projects are expected to almost triple between 2023 and 2025, rising from an average of 1.5% of IT budgets to 4.3% within two years, according to survey data from Glean and ISG. Larger companies will allocate still more for AI, with 26% of firms over $5 billion in revenue budgeting more than 10% toward generative AI by 2025. And more than one-third of survey respondents (34%) said they were willing to implement generative AI quickly despite the risks of negative outcomes.

SaaS shadow IT is one of the biggest hidden risk factors that IT leaders face today. Most people who use shadow IT think they are simply using a productivity tool, yet organizations have found repeatedly that shadow IT adoption carries high risk.

Detecting Shadow IT and protecting data security

Every cyber program is built around defending data, but if that data exists within shadow IT tools, then it remains unprotected. That’s why it is so important to discover what shadow IT exists in your environment, build a plan for when it happens – not if – and foster a culture that still promotes employee problem-solving while adhering to IT policy.

IT teams can apply several important precautions to maintain control over AI tools and protect their organizations from potential risks. The most effective place to detect shadow IT is on the device, at the source, which is the user; network- or log-based detection alone can miss critical information.
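The on-device approach can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's actual detection logic: it assumes a small catalog of known AI-tool domains (the names and domains below are examples) and matches them against destinations observed in a local DNS or browser-history export.

```python
# Minimal on-device detection sketch (hypothetical catalog and data).
# Flags GenAI usage by matching locally observed network destinations
# against a small catalog of known AI-tool domains.

KNOWN_AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "copilot.microsoft.com": "Microsoft Copilot",
    "gemini.google.com": "Google Gemini",
}

def detect_ai_usage(observed_domains):
    """Return the names of AI tools whose domains appear in on-device logs."""
    return sorted(
        {KNOWN_AI_DOMAINS[d] for d in observed_domains if d in KNOWN_AI_DOMAINS}
    )

if __name__ == "__main__":
    # Example input: domains pulled from a local DNS or browser-history export.
    logs = ["chat.openai.com", "intranet.example.com", "gemini.google.com"]
    print(detect_ai_usage(logs))  # ['ChatGPT', 'Google Gemini']
```

A real deployment would feed this from an endpoint agent rather than a static list, but the principle is the same: detection happens where the user is, not only at the network edge.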

After developing an inventory of shadow IT, organizations can compare the anomalies to sanctioned IT tools, survey the anomalous users, and use this information to better understand work trends, problems, and solutions. It is important to approach shadow IT users with an open mind rather than simply shutting the adoption down. These tools are solving real business problems, and IT teams need to understand what that need is and work collaboratively with users to ensure they have the tools they require, while keeping data secure.
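The triage step above can also be sketched simply. In this hypothetical example (the sanctioned list, users, and tools are placeholders), anything discovered on devices that is not on the sanctioned list is flagged as an anomaly and grouped by user, giving IT a ready-made list of people to survey about the problem each tool is solving.

```python
# Sketch of shadow IT inventory triage (hypothetical data).
# Tools not on the sanctioned list are anomalies, grouped by user
# so IT can follow up and understand the underlying business need.

from collections import defaultdict

SANCTIONED = {"Slack", "Microsoft 365", "Zoom"}

def triage_inventory(inventory):
    """inventory: list of (user, tool) pairs discovered on devices.
    Returns {tool: [users]} for every unsanctioned tool."""
    anomalies = defaultdict(list)
    for user, tool in inventory:
        if tool not in SANCTIONED:
            anomalies[tool].append(user)
    return dict(anomalies)

if __name__ == "__main__":
    discovered = [
        ("alice", "Slack"),
        ("bob", "ChatGPT"),
        ("carol", "ChatGPT"),
        ("dave", "Notion"),
    ]
    print(triage_inventory(discovered))
    # {'ChatGPT': ['bob', 'carol'], 'Notion': ['dave']}
```

The output doubles as a prioritization signal: a tool adopted by many users points to a widespread unmet need, not just an individual workaround.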

Remember, shadow IT tools are only “shadowy” until they’re not. Once discovered and brought out of the shadows, the next step is to move these IT tools through procurement and internal processes for sanctioned purchases to ensure visibility and compliance.

All new AI tools should be properly managed, as shadow IT within an organization can introduce serious compliance, security, and business risks. However, recognize that shadow IT users are really just "intrapreneurs" seeking new solutions to existing problems. By understanding the reasons behind their adoption of shadow IT, organizations can identify opportunities to solve business problems they may not yet be aware of.

Of course, you may find that some of these shadow IT tools don’t fit within the proper IT framework of rigorous controls. But once you’ve discovered the underlying user problems being solved along the way, the users and central IT can develop plans to solve these issues in a more formal and productive way.

We've featured the best online cybersecurity courses.

This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro


John Harden is Auvik's Senior Product Marketing Manager. He has spent 15+ years in the IT/MSP industry, with experience in MSP NOC as well as software engineering and operations.