Employees in most industries are using ChatGPT in their day to day work, but they could be putting businesses at risk

generative ai business use
(Image credit: Shutterstock / thanmano)

New research by Indusface shows that ChatGPT is seeing increased usage across industries despite its usage in the workplace being heavily questioned in recent months.

ChatGPT can be a very useful productivity tool, helping to gather, summarize, and simplify information - but there are a number of issues that could land workers in hot water.

The advertising industry came out on top, with 39% of respondents stating that they actively use ChatGPT at work.

Jack of all trades, but master of none

In the rankings, the legal sector came in a close second, with 38% of respondents using ChatGPT in their work. This is followed by the Arts & Media industry at 33%, both the Information & Communication Technology industry and Construction industry ranking at 30%, with Real Estate & Property, Manufacturing and Call Centers & Customer service all seeing around 29% of respondents using ChatGPT in the workplace.

The Healthcare & Medical industry matched the Government & Defence usage at 28%. Among all industries, the most common use of the generative AI was to write up reports (27%), closely followed by using ChatGPT to translate information (25%), with research purposes not fair behind (17%).

Venky Sundar, Founder and President of Indusface points out that there are a number of troubling issues in the usage of ChatGPT within the workplace stating that, “Specific to business documents the risks are: legal clauses have a lot of subjectivity, and it is always better to get these vetted by an expert.

“The second risk is when you share proprietary information into chatGPT and there’s always a risk that this data is available for the general public, and you may lose your IP. So never ask chatGPT for documentation on proprietary documents including product roadmaps, patents and so on.

Sundar also points out that the use of generative AI and large language models (LLM) have shortened development times across industries, allowing an idea to become a product in a very short amount of time.

“The risk though is that proof of concept (POC) should just be used for that purpose. If you go to market with the POC, there could be serious consequences around application security and data privacy. The other risk is with just using LLMs as an input interface for the products and there could be prompt injections and the risk is unknown there.”

Interestingly, over half (55%) of respondents stated that they would not trust working with another business that used ChatGPT or a similar AI in their day to day work.

More from TechRadar Pro

TOPICS
Benedict Collins
Staff Writer (Security)

Benedict has been writing about security issues for over 7 years, first focusing on geopolitics and international relations while at the University of Buckingham. During this time he studied BA Politics with Journalism, for which he received a second-class honours (upper division),  then continuing his studies at a postgraduate level, achieving a distinction in MA Security, Intelligence and Diplomacy. Upon joining TechRadar Pro as a Staff Writer, Benedict transitioned his focus towards cybersecurity, exploring state-sponsored threat actors, malware, social engineering, and national security. Benedict is also an expert on B2B security products, including firewalls, antivirus, endpoint security, and password management.