Beware, another ChatGPT trend threatens your privacy – here's how to stay safe
The latest viral AI trend is doing "reverse location search" from photos

After the Ghibli-style mania and AI dolls frenzy, a new viral ChatGPT trend threatens everyone's privacy. People are using the chatbot to do "reverse location search" from photos.
This latest trend kicked off among social media users after OpenAI released its o3 and o4-mini AI models. These versions are the first to integrate image reasoning capabilities that, paired with the chatbot's ability to search the web, have turned ChatGPT into a powerful and easy-to-use location-guessing tool.
Virtually anyone can simply upload a photo to the chatbot, ask where it was taken, and get a mostly reliable answer, even when the location data attached to the picture has been stripped. This is an obvious privacy and security concern. With stalking, hacking, and doxing all on the table, we asked some experts what's at stake and how to stay safe in the face of this new viral AI trend.
Another privacy and security nightmare?
As Miguel Fornés, a cybersecurity expert at Surfshark, pointed out, there's nothing really new about reverse image search.
"However, AI models pushed the limits even further, becoming your private, and sometimes unethical, OSINT analyst," said Fornés.
According to Fornés, the main risks are linked to the "misuse of the seemingly endless capabilities of AI models." People can upload a picture you share on your Instagram stories or feed to figure out where you live, where you work, or where you are in real time, for example.
And, where before you had to work hard to find that answer, ChatGPT and other AI models have made the task straightforward and, most crucially, something fun to do.
Commenting on this point, Norton's Director of AI & Innovation, Iskander Sanchez-Rola, told TechRadar: "It’s a mix of timing and tech maturity. AI has reached a point where it can combine visual recognition with contextual reasoning, and more people are becoming aware that these tools can analyze images in surprisingly detailed ways."
One of the biggest concerns is that these models do not need picture metadata to successfully extract the needed information.
As Ethan Mollick, a professor who researches AI, noted in a post on X, ChatGPT's latest model was able to guess where his picture was taken despite having removed the location info from the picture. "Between its smart guessing and its ability to zoom into images, to do web searches, and read text, the results can be very freaky," he wrote.
Beyond individual misuse, ChatGPT may also share this data with third parties, and sensitive location details could be exposed in data breaches.
According to Proton's Head of Security, Eamonn Maguire, this is yet another attempt by AI companies to harvest more photos and information, including locations, architectural styles, and environments from around the world, to improve their models.
He said, "Almost every week, we see a new trend. These trends are highly engaging, driving a growing public interest in understanding AI's capabilities and limitations. More cynically, there seems to be a concerted effort by AI companies to create these viral trends that each provide a different dataset for their models to work with."
How to stay safe
As mentioned earlier, ChatGPT isn't the only AI model able to do reverse image location. Google Gemini, for example, has had similar image analysis features for about a year, and models such as Claude and Molmo can perform similar tasks, while other chatbots such as Grok, Perplexity, and DeepSeek are now catching up.
The danger here is that, while with the Ghibli or AI doll viral trends you could simply stop sharing your face with the chatbot, here anyone can upload the photos you post on social media to figure out your exact location.
Worse still, this may be only the start of these creepy social manias. This means you need to actively take some steps to stay safe online in the face of AI-powered trends. Below are all the experts' tips to stay safe:
- Remove metadata: You should use tools that strip image metadata (including GPS coordinates) before posting (see the sketch after this list). But remember: don’t rely on this alone.
- Blur or crop identifying details: As a rule of thumb, you should hide personal details such as house numbers, license plates, ID cards, or anything else that could identify a location or individual. Pay special attention to identifiables that may be in the background.
- Avoid real-time photos: Don't post in real time if your location security is important, especially when you’re traveling or somewhere private.
- Be mindful of reflective surfaces: Windows, mirrors, and screens can reveal sensitive details you didn’t intend to capture. Double-check for any identifiables in those reflections.
- Beware of high-resolution or raw pictures/videos: The better the quality, the easier it is for small background details to be revealed when AI zooms into the picture.
- Use secure editing apps: Remember to opt for privacy-conscious tools for blurring or redacting content. You should especially avoid apps that require uploading your photos to unencrypted cloud servers.
- Review privacy permissions: You should consider limiting who can see the images you post on your social media accounts by heading to each platform's settings.
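For readers who want to check and scrub their own photos, here is a minimal sketch using Python's Pillow library (the filenames are hypothetical, and this is only one possible approach, not a complete privacy fix): it prints any EXIF tags the file carries, flagging GPS data, then re-saves the pixel data into a fresh copy without the original metadata. Remember that AI can still infer location from what's visible in the image itself.

```python
# Minimal sketch: inspect and strip EXIF metadata (including GPS) before sharing.
# Assumes the Pillow library (pip install Pillow); "holiday.jpg" is a placeholder path.
from PIL import Image
from PIL.ExifTags import TAGS

def show_exif_tags(path):
    """Print any EXIF tags found in the image, flagging location data."""
    img = Image.open(path)
    exif = img.getexif()
    if not exif:
        print("No EXIF metadata found.")
        return
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)
        marker = "  <-- location data" if name == "GPSInfo" else ""
        print(f"{name}: {value}{marker}")

def strip_metadata(src, dst):
    """Copy the pixel data into a brand-new image, dropping all metadata."""
    img = Image.open(src)
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst)

if __name__ == "__main__":
    show_exif_tags("holiday.jpg")                        # see what the file reveals
    strip_metadata("holiday.jpg", "holiday_clean.jpg")   # share this copy instead
```

On the command line, ExifTool can do the same job (for example, `exiftool -all= photo.jpg` removes all metadata), and many phones and social platforms offer built-in options to limit location tagging.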