ChatGPT has passed the Turing test and if you're freaked out, you're not alone

(Image: a robot speaking with a seated person. Credit: Shutterstock)

Despite having only just released GPT-4, OpenAI is reportedly already working on the fifth iteration of its immensely popular chatbot, GPT-5. According to a new report from BGR, those major upgrades could arrive as soon as the end of the year.

One milestone in particular could be within reach if the report turns out to be true: the ability to be indistinguishable from humans in conversation. And it doesn't help that we've essentially been training this AI chatbot with hundreds of thousands, if not millions, of conversations a day.

Computing and AI pioneer Alan Turing famously proposed a test for artificial intelligence: if you could converse with a computer without being able to tell that you weren't speaking to a human, the computer could be said to be intelligent. With OpenAI's ChatGPT, we've largely crossed that threshold (it can still be occasionally wonky, but so can humans), and for everyday use, ChatGPT passes this test.

Considering the meteoric rise and development of ChatGPT technology since its debut in November 2022, the rumors of even greater advances are likely to be true. And while seeing such tech improve so quickly can be exciting, hilarious, and sometimes insightful, there are also plenty of dangers and legal pitfalls that can easily cause harm.

For instance, the number of malware scams has steadily increased since the chatbot's introduction, and its rapid integration into applications raises privacy and data-collection concerns, not to mention rampant plagiarism. But it's not just me seeing the problem with ChatGPT being pushed so rapidly and aggressively; tech leaders and AI experts have also been sounding the alarm.

AI development needs to be curbed

The Future of Life Institute (FLI), an organization dedicated to minimizing the risks and misuse of new technologies, has published an open letter calling on AI labs and companies to immediately pause work on systems more powerful than GPT-4. Notable signatories like Apple co-founder Steve Wozniak and OpenAI co-founder Elon Musk agree that progress should be paused to ensure that people can enjoy existing systems and that those systems benefit everyone.

The letter states: "Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

As we're seeing, the rush by companies to integrate and use this new technology is causing a plethora of issues. These range from CNET using it to generate articles that sometimes contained inaccuracies, to credit card information potentially being exposed through ChatGPT. Several deepfake images created with the AI image generator Midjourney quickly went viral, and rampant user abuse led to a pause on free trials. Very little is being done to protect privacy or artists' intellectual property rights, or to prevent stored personal information from leaking.

Until we get some kind of handle on this developing technology, and on how companies can use it safely and responsibly, development should pause.

Allisa James
Computing Staff Writer

Named by the CTA as a CES 2023 Media Trailblazer, Allisa is a Computing Staff Writer who covers breaking news and rumors in the computing industry, as well as reviews, hands-on previews, featured articles, and the latest deals and trends. In her spare time you can find her chatting it up on her two podcasts, Megaten Marathon and Combo Chain, as well as playing any JRPGs she can get her hands on.