Microsoft's latest layoffs could be the beginning of the end for 'ethical AI'

Image of a traffic light with the text 'ChatGPT Ahead'
(Image credit: Urban Images via Shutterstock)

The entire ethics and society team responsible for guiding the artificial intelligence organization's product development has been cut as part of Microsoft's recent layoffs. The move comes at a time when AI ethics discourse is at an all-time high, fueled by the global popularity of bots like ChatGPT, and could spell trouble in the very near future. 

Microsoft still has an active 'Office of Responsible AI', which creates the rules and principles that guide its AI initiatives, and the company says it remains committed to investing in ethical development despite the considerable downsizing of staff in the area. 

The ethics and society team was at its largest in 2020, with around 30 employees including engineers, designers, and philosophers. That variety of expertise gave the team a broader bank of knowledge to draw from when deciding on the 'rules' and principles that would be reflected in future AI products. As part of a reorganization, the team was cut down to just seven people, who have now been laid off. 

As we have seen in the short time AI chatbots have been available to the public, a lot can go wrong. Microsoft already had to rein in the new Bing AI almost as soon as it launched. We've touched on many shortcomings and oddities that have come out of ChatGPT and other chatbots, including blatant misinformation, emotional spirals, and, of course, an incredibly convincing tool for scammers of all kinds. As more companies rush to churn out AI-enhanced products, axing the team in charge of ethics during development seems like a very strange decision, one that suggests the company is prioritizing speed over safety.

According to The Verge, the terminated employees said that "People would look at the principles coming out of our offices and say, 'I don't know how this applies'", and that it was their job to "show and create rules in areas where there were none". The team had recently worked on a larger 'responsible innovation toolkit' that included a roleplaying game called 'Judgement Call', which helped designers think through potential issues or harms that could arise during product development. 

Before the final cuts, the remaining ethics and society members said the smaller crew had made it difficult to implement their plans. 

Microsoft became focused on shipping AI tools more quickly than its rivals

Microsoft Employees in their open memo

The forecast calls for clear skies and moral droughts 

Last year the ethics and society team put out a memo outlining the brand risks associated with the Bing Image Creator, which uses OpenAI's DALL-E (OpenAI is also the company behind ChatGPT). The image generator has become incredibly popular and has proved to have a plethora of uses - we made an interesting Valentine's Day card with it last month. The team accurately pointed out that the tech could damage artists' livelihoods and creative integrity by allowing anyone not only to copy their work but to produce unauthorized duplicates of it without permission. 

Clearly, Microsoft is rushing to put unstable technology into the hands of the general public, and it doesn't take a team of experts to point out what kind of damage can be done with that mindset. By not taking into consideration the problems ChatGPT and DALL-E have already presented, both for direct users and for people in surrounding communities (think artists, but also writers, examiners, and journalists), Microsoft is opening the floodgates for a lot of bad to come very soon.

We can turn to Elon Musk's vision for his own 'anti-woke' ChatGPT competitor, which will apparently be stripped of safeguarding, anti-hate, and anti-discrimination protections, to get a glimpse of Microsoft's possible future. We have already seen pornographic 'deepfakes' of streamers, celebrities, and members of the general public, and with the possibility of video coming to ChatGPT, we can absolutely expect more to come. 

If the company is not even going to pretend to care about these issues by keeping a semblance of an ethics team on board, who exactly is to blame when people face real-world consequences? If not Microsoft, who are we supposed to look to for control over such sensitive technology? 

The risk of brand damage ... is real and significant enough to require redress

Microsoft Employees in their open memo

The layoffs and their consequences do allow the mind to wander to many dark and scary places. While we don't want to fearmonger, we have to accept that people will use everyday tools for malicious reasons regardless of the product's original intention. Scammers, misogynists, racists, and cheaters have already made a home with AI-generated text, and if Microsoft and the other companies gunning for the AI market continue on this path of reckless abandon, we will inevitably be caught in a horror movie none of us can control. How many sci-fi movies have been made - and likely will continue to be made - that start off just like this: a new, impressive, and frightening technology released to the public without any foresight? Isn't that the plot of the first Jurassic Park movie?

It is already hard to avoid ChatGPT and AI bots; if we flood the internet and our devices with unregulated, unethically developed technology, we may end up changing the digital landscape irreversibly. With Microsoft laying off its ethics and society team, we could see other companies follow suit - and then we'll be in a whole lot of trouble. 

To quote the wise words of a certain fictional mathematician: “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”

Muskaan Saxena
Computing Staff Writer

Muskaan is TechRadar’s UK-based Computing writer. She has always been a passionate writer and has had her creative work published in several literary journals and magazines. Her debut into the writing world was a poem published in The Times of Zambia, on the subject of sunflowers and the insignificance of human existence in comparison. Growing up in Zambia, Muskaan was fascinated with technology, especially computers, and she's joined TechRadar to write about the latest GPUs, laptops and recently anything AI related. If you've got questions, moral concerns or just an interest in anything ChatGPT or general AI, you're in the right place. Muskaan also somehow managed to install a game on her work MacBook's Touch Bar, without the IT department finding out (yet).
