Google has just announced the ability to chain actions in Gemini, and it could change the way we use AI for good
Gemini just got even better
- Gemini can now chain actions together to complete complex tasks
- Gemini Live is gaining multimodal abilities on the newest phones
- Gemini will evolve into a fully-powered AI assistant with Project Astra
To coincide with the launch of the Samsung Galaxy S25 range at today's Galaxy Unpacked event, Google has announced some impressive updates to its Gemini AI platform. Many of the improvements are specific to the new Galaxy S25 devices, but some also work on the older Galaxy S24 and Pixel 9 phones.
The stand-out feature is Gemini's new ability to chain actions together. This means you can now do things like connect to Google Maps to search for nearby restaurants, then draft a text in Google Messages to send to people you’d like to invite to lunch, all through Gemini commands.
The chaining ability is being added to all devices that run Gemini, “depending on extensions”, meaning a developer needs to write an extension linking a particular app to Gemini before it can take part. Naturally, all the major Google apps already have Gemini extensions, and extensions are also available for the Samsung Reminder, Samsung Calendar, Samsung Notes, and Samsung Clock apps.
Gemini Live goes multimodal
Google’s Gemini Live, the part of Gemini that lets you have a natural, human-like conversation with the AI, is also getting some major multimodal upgrades. You can now add images, files, and YouTube videos to the conversation, so, for example, you could ask Gemini Live, “Hey, take a look at this picture of my school project and tell me how I could make this better”, upload the picture, and get a response.
The Gemini multimodal improvements are not available across the board, however, and will require a Galaxy S24, S25, or Pixel 9 to work.
Project Astra
Finally, Google has announced that Project Astra capabilities will be coming in the next few months, arriving first on Galaxy S25 and Pixel phones. Project Astra is Google’s prototype AI assistant that lets you interact with the world around you, using your phone’s camera to ask questions about what you’re looking at and where you are. So, you can simply point your phone at something and ask Gemini about it, or ask it when the next stop on your bus route will be.
Project Astra works on mobile phones, but takes your experience to the next level when combined with Google’s prototype hands-free AI glasses, so you can simply start asking Gemini questions about what you’re looking at, without having to interact with a screen at all.
While there’s still no news about a release date for this next generation of Google glasses, they will join Meta’s Ray-Ban glasses in the emerging market for AI wearables when they finally become available.