Google’s AI Mode can explain what you’re seeing even if you can’t

[Image: Google AI Mode with Lens (Image credit: Google)]

  • Google’s AI Mode now lets users upload images and photos to go with text queries
  • The feature combines Google Gemini and Lens
  • AI Mode can understand entire scenes, not just objects

Google is adding a new dimension to its experimental AI Mode by connecting Google Lens's visual abilities with Gemini.

AI Mode is a part of Google Search that can break down complex topics, compare options, and suggest follow-ups. Now, that search includes uploaded images and photos taken on your smartphone.

The result is a way to search with images the way you would with text, but with far more detailed and nuanced answers than you'd get from dropping a picture into a reverse image search.

You can literally snap a photo of a weird-looking kitchen tool and ask, “What is this, and how do I use it?” and get a helpful answer, complete with shopping links and YouTube demos.

AI Eyes

If you take a picture of a bookshelf, a plate of food, or the chaotic interior of your junk drawer, the AI won’t just recognize individual objects; it will also explain their relationship to each other.

You might get suggestions for other dishes you can make with the same ingredients, an answer about whether your old phone charger is actually in that drawer, or advice on what order to read the books on the shelf. You can see how it works in the image above.

Essentially, the feature fires off multiple related questions in the background, covering both the entire scene and each individual object. So when you upload a picture of your living room and ask how to redecorate it, you're not just getting one generic answer. You're getting a group of responses from mini AI agents, each asking about something in the room.
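Google hasn't published the internals, but the fan-out idea is easy to picture: ask one question about the whole scene, ask one per detected object, then combine the answers. Here's a minimal sketch in Python, assuming hypothetical placeholder functions (detect_objects, ask_model) rather than any real Google API.

```python
# A minimal sketch of the "fan-out" idea described above, not Google's actual implementation.
# detect_objects, ask_model, and the sample labels are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Answer:
    question: str
    response: str

def detect_objects(image_path: str) -> list[str]:
    """Hypothetical vision step: return labels for objects found in the image."""
    return ["cast-iron skillet", "whisk", "mystery gadget"]

def ask_model(question: str) -> str:
    """Hypothetical language-model call that answers a single question."""
    return f"(model's answer to: {question})"

def fan_out(image_path: str, user_question: str) -> list[Answer]:
    """Ask one question about the whole scene, plus one per detected object,
    then return all the partial answers for a final summarization pass."""
    objects = detect_objects(image_path)
    questions = [f"{user_question} (considering the whole scene)"]
    questions += [f"What is the {obj} and how does it relate to the scene?" for obj in objects]
    return [Answer(q, ask_model(q)) for q in questions]

if __name__ == "__main__":
    for answer in fan_out("junk_drawer.jpg", "How should I organize this?"):
        print(answer.question, "->", answer.response)
```

The real system presumably runs these sub-queries against Search and Lens in parallel and hands the results to Gemini for a single synthesized reply, but the structure above captures why you get scene-level answers rather than a flat list of object labels.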

Google isn't unique in this pursuit; ChatGPT includes image recognition, for instance. However, Google's advantage is decades of search data and visual indexing, plus the infrastructure it has built to store and organize all of it.

If you're a Google One AI Premium subscriber, or are approved to test it through Search Labs, you can try the feature in the Google mobile app.

Eric Hal Schwartz
Contributor

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
