ChatGPT is exciting, but Microsoft’s influence is cause for concern

Microsoft headquarters in France
(Image credit: HJBC / Shutterstock)

The artificial intelligence dream has landed in our everyday lives, and the ethical discussion around AI has ramped up as a consequence, particularly over how much data these AI services collect from users. After all, where potentially sensitive information is stored en masse, cybersecurity and privacy concerns follow.

Microsoft’s Bing search engine, newly equipped with OpenAI’s ChatGPT and currently rolling out to users, brings its own set of concerns, as Microsoft hasn’t had the best track record when it comes to respecting its customers’ privacy.

Microsoft has occasionally been challenged over its management of, and access to, user data, though notably less so than contemporaries like Apple, Google, and Facebook - even though it handles a great deal of user information, including when it sells targeted ads.

It has been targeted by certain government regulatory bodies and organizations, such as when France demanded that Microsoft cease tracking users through Windows 10, and the company responded with a set of comprehensive measures.

Jennifer King, director of consumer privacy at Stanford Law School’s Center for Internet and Society, speculated that this is partly due to Microsoft’s long-standing position in its market and the long-time relationships with governments that its legacy affords it. With more experience in dealing with regulators, it may have avoided the level of scrutiny faced by its competitors.

An influx of data

Microsoft, like other companies, now finds itself reacting to a mass influx of user chat data driven by the popularity of chatbots like ChatGPT. According to the Telegraph, Microsoft has reviewers who analyze user submissions to limit harm and respond to potentially dangerous inputs, combing through users’ conversation logs with the chatbot and stepping in to moderate “inappropriate behavior.”

The company claims that it strips submissions of personal information, that users’ chat texts are only accessible to certain reviewers, and that these safeguards protect users even when their conversations with the chatbot are under review.

A Microsoft spokesperson elaborated that the company employs both automated reviews (as there is a great deal of data to comb through) and manual reviewers, adding that this is standard practice for search engines and is also covered in Microsoft’s privacy statement.

The spokesperson was at pains to reassure those concerned that Microsoft employs industry-standard user privacy measures such as “pseudonymization, encryption at rest, secured and approved data access management, and data retention procedures.”

Additionally, reviewers can only view user data on the basis of “a verified business need only, and not any third parties.” Microsoft has since updated its privacy statement to summarize and clarify the above: user information is being collected, and human employees at Microsoft may be able to see it.
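
To give a sense of what a measure like pseudonymization involves, here is a minimal sketch in Python. It is purely illustrative - Microsoft has not published how its pipeline works - and assumes the common approach of replacing a direct identifier with a keyed hash so reviewers see a stable token rather than the person behind it.

```python
import hashlib
import hmac
import os

# Hypothetical illustration only: this is not Microsoft's actual pipeline.
# Pseudonymization commonly swaps a direct identifier (e.g. an account email)
# for a keyed, non-reversible token before a record reaches human reviewers.

SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "example-key").encode()

def pseudonymize(user_id: str) -> str:
    """Return a stable, non-reversible token for a user identifier."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def redact_for_review(user_id: str, chat_text: str) -> dict:
    """Prepare a chat record for review: keyed token instead of raw identity."""
    return {
        "user_token": pseudonymize(user_id),
        "text": chat_text,  # a real system would also scrub names, emails, etc.
    }

if __name__ == "__main__":
    record = redact_for_review("alice@example.com", "How do I reset my router?")
    print(record)
```

The point of the keyed hash is that the same user always maps to the same token, so reviewers can spot repeated abuse, while the token cannot be reversed into an identity without the key.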

Under the spotlight

Microsoft isn’t the only company under scrutiny over how it collects and handles user data when it comes to AI chatbots. OpenAI, the company that created ChatGPT, also disclosed that it reviews user conversations. 

Recently, Snap, the company behind Snapchat, announced that it was introducing a ChatGPT-powered chatbot that will resemble its already-familiar messenger chat format. It has warned users not to submit sensitive personal information, possibly for similar reasons.

These concerns multiply when you consider the use of ChatGPT and ChatGPT-equipped bots by people working at companies with their own sensitive and confidential information. Many of those companies have warned employees not to feed confidential company information into these chatbots, and some, such as JP Morgan and Amazon, have restricted or banned their use at work altogether.

Personal user data has been, and continues to be, a key issue in tech in general. Misuse of data, or even malicious use of it, can have dire consequences both for individuals and for organizations. With every new technology these risks increase – but so does the potential reward.

Tech companies would do well to pay close attention to keeping our personal data as secure as possible - or risk losing the trust of their customers and potentially killing off their fledgling AI ambitions.

