Google might have a new AI-powered password-generating trick up its sleeve - but can Gemini keep your secrets safe?


If you’ve been using Google Chrome for the past few years, you may have noticed that whenever you create a new password, or change an existing one, for a site or app, a little “Suggest strong password” dialog box pops up. It looks like that feature could soon offer AI-powered password suggestions.

A keen-eyed software development observer has spotted that Google might be gearing up to infuse this feature with the capabilities of Gemini, its latest large language model (LLM).

The discovery was made by @Leopeva64 on X. They found references to Gemini in patches submitted to Gerrit, a web-based code review system developed by Google and used in the development of Google products like Android.

These findings appear to be backed up by screenshots that show glimpses of how Gemini could be incorporated into Chrome to give you even better password suggestions when you’re looking to create a new password or change one you’ve previously set.

Gemini guesswork

One line of code that caught my attention states that “deleting all passwords will turn this feature off.” I wonder whether this does exactly what it says on the tin, shutting the feature off if a user deletes all of their passwords, or whether it only applies to the passwords generated by the “Suggest strong passwords” feature.

The final screenshot that @Leopeva64 provides is also intriguing as it seems to show the prompt that Google engineers have included to get Gemini to generate a suitable password. 

This is a really interesting move by Google, and it could play out well for Chrome users who rely on the strong password suggestion feature. Still, I’m a little wary of the risks that come with this method of password generation, as with many such methods. LLMs are susceptible to information leaks caused by prompt injection attacks, which are designed to trick AI models into giving out information that their creators, individuals, or organizations might want to keep private, like someone’s login details.


An important security consideration 

Now, that sounds scary, but as far as we know it hasn’t happened yet with any widely deployed LLM, including Gemini. For now it’s a theoretical fear, and there are standard password security practices that tech organizations like Google employ to prevent data breaches.

These include encryption, which encodes data at multiple stages of the password generation and storage process so that only authorized parties can access it, and hashing, a one-way conversion designed to make reversing stored data back into the original password impractical.
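To make the hashing idea concrete, here’s a minimal sketch in Python of salted, deliberately slow password hashing with PBKDF2 - a generic illustration of the technique, not Google’s actual implementation, which isn’t public.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a one-way hash from a password using PBKDF2 with a random salt."""
    salt = os.urandom(16)                      # unique salt per password
    digest = hashlib.pbkdf2_hmac(
        "sha256",                              # underlying hash function
        password.encode("utf-8"),
        salt,
        600_000,                               # high iteration count slows brute-force attempts
    )
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the supplied password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

Because the hash is one-way, even a service that stores the salt and digest never needs to keep the password itself.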

You could also ask another LLM, like ChatGPT, to generate a strong password manually, although I suspect Google knows more about how to do this securely, and I’d only advise experimenting with that if you’re a software or data professional.
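For context, generating a strong password doesn’t actually require an LLM at all. Here’s a minimal sketch of the conventional approach using Python’s built-in secrets module - my own illustration, not how Chrome’s feature works under the hood.

```python
import secrets
import string

# Character pool combining letters, digits, and a handful of symbols.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def suggest_strong_password(length: int = 20) -> str:
    """Build a random password using a cryptographically secure random source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(suggest_strong_password())
```

The appeal of an LLM-based approach would presumably be smarter suggestions, such as passwords that satisfy a specific site’s rules, rather than raw randomness.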

As a proposition it’s not a bad idea, and it’s a use of AI that could genuinely benefit users, but Google will have to put an equal (if not greater) amount of effort into making sure Gemini is locked down and as impenetrable to outside attacks as possible. If Google implements this and it somehow leads to a major data breach, that would likely damage people’s trust in LLMs and could harm the reputations of the tech companies, including Google, that are championing them.


Kristina is a UK-based Computing Writer interested in all things computing, software, tech, mathematics and science. Previously, she has written articles about popular culture, economics, and miscellaneous other topics.

She has a personal interest in the history of mathematics, science, and technology; in particular, she closely follows AI and philosophically-motivated discussions.