Adobe Firefly is doing generative AI differently – and it may even be good for you
Using your own Stock
In its short lifespan, generative AI imagery has been on a rollercoaster ride from excitement to pillory and back again. With each update to leading generators like Midjourney and Stable Diffusion, the excitement grows – but, especially in the creative community, so does the unease.
These concerns go back to the source – the literal source of all the image training data these systems used to build their models. The open internet provided a vast trove of original creative work that OpenAI, for instance, used to train DALL-E. Artists discovered that their work was being used without their consent or control. Anyone who’s used these systems knows they don’t duplicate art; instead, they use their training to create something new, though they can ape anyone’s style and, in some cases, regurgitate – at least in part – the content they were trained on.
Some artists have tried, with little success, to sue companies like Stability AI, maker of Stable Diffusion, for copyright infringement. However, the idea that success in generative AI imagery always requires significant concessions from the artist community isn’t necessarily accurate.
Just ask Adobe.
Recently, I sat down at CES 2024 with Adobe's VP of Generative AI, Alexandru Costin. A 20-year Adobe veteran, Costin’s been with the AI group for a year, and it’s been a whirlwind of activity that’s impacting not only Adobe Creative Cloud’s millions of users, but the future of the company.
“I do think a big chunk of our software stack will be replaced by models; we’re transforming from software company to AI company,” said Costin over breakfast.
He acknowledged that Adobe has been in the AI game for years – it added the Content-Aware Fill tool to Photoshop back in 2010 – but its approach has generally been rooted not in creating something out of nothing, but in manipulating pre-existing imagery, creating a new verb in the process (was this Photoshopped?). “Our customers, they don’t want to generate images. They want to edit the image,” said Costin.
Adobe’s plan as it entered the generative imaging space was to find ways to use these powerful tools and models to enhance existing imagery while also protecting creators and their work.
Some consider Adobe late to the game, with its breakthrough platform, Firefly, following DALL-E, Stable Diffusion, and others in the space. Adobe, though, had enough experience with models, training, and output to know what it didn’t want to do.
Costin recounted VoCo, an early Adobe AI project that let you take an audio recording and then, using text prompts, refashion it so the speaker said something different. While the reception at the 2016 Adobe MAX conference appeared positive, concerns were quickly raised about VoCo being used to create audio deepfakes. The technology was never released, but it did lead Adobe to create its own Content Authenticity Initiative.
“[We were] reacting in 2018, but since then we're trying to be ahead of the curve and be a thought leader in how AI is done,” said Costin.
The AI road less taken
When Adobe decided to explore generative AI imagery, it wanted to build a tool Photoshop creators could use to produce images that could go anywhere, including into commercial content. Doing so meant setting a high bar for training material: no copyrighted, trademarked, or branded material. Instead of scraping the internet for visual data, Adobe would look to a different source, one much closer to home.
In addition to its wide collection of Adobe Creative Cloud apps – Photoshop, Premiere Pro, After Effects, Illustrator, and InDesign among them – Adobe has its own vast stock image library, Adobe Stock, with hundreds of thousands of contributors and untold assets across photos, illustrations, and 3D imagery. It’s free of commercial, branded, and trademarked imagery, and is also moderated for hate and adult content. “We’re using hundreds of millions of assets, all trained and moderated to have no IP,” said Costin.
Adobe used that data to train Firefly, and then programmed the generative AI system so that it cannot render trademarked, recognizable characters (no putting Bart Simpson in a compromising situation). Costin noted that there’s a gray area, as the copyright on some characters, like the earliest version of Mickey Mouse, has expired. There could be a case down the road where renders of that version of the iconic mouse are allowed.
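Adobe hasn’t said exactly how that guardrail is built, but a minimal way to picture it is a denylist check that screens prompts before the model ever runs. Everything in this sketch – the list entries, the function name, the logic – is hypothetical illustration, not Adobe’s implementation.

```python
# A purely hypothetical sketch of a prompt guardrail: refuse generation
# requests that name protected characters. Adobe hasn't published how
# Firefly's filter works; this only illustrates the general idea of
# screening prompts before the model runs.

BLOCKED_CHARACTERS = {"bart simpson", "darth vader"}  # illustrative entries only

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt names a protected character."""
    text = prompt.lower()
    return not any(name in text for name in BLOCKED_CHARACTERS)

print(is_prompt_allowed("a cat surfing a wave"))         # True
print(is_prompt_allowed("Bart Simpson in a courtroom"))  # False
```

A real system would need far more than substring matching – misspellings, descriptions that imply a character without naming it, and image-level checks all complicate the picture – but the pre-generation screen is the core idea.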
There’s another wrinkle here that sets Adobe Firefly apart from Stable Diffusion and others: Adobe is paying its creators for the use of their work to train its AI. What they’re paid is based, in part, on how much of their creative output is used in the training.
Oddly, Adobe is also allowing creators to add generative imagery to Adobe Stock, which may mean the system is feeding itself. Regardless, Costin sees it all as a win-win. “Generative AI enables amplified creativity,” he told me.
Racing ahead and avoiding collisions
Costin, though, is no Pollyanna. He told me the new models are more powerful and “require more governance.” His team has trained new models to be safer for the education market – weeding out the ability to create NSFW content – while also providing a window for higher education, where adult artists might need more creative options.
Adobe’s tools also can’t afford to take a one-size-fits-all approach to generative AI. Costin explained to me how Firefly, for instance, handles location. The models consider where the requester lives and look at “the skin color distribution of people living in your country” to ensure that’s reflected in the output. They do similar work with gender and age distribution. While it’s hard to know if these efforts entirely weed out bias, it’s clear that Adobe is putting in real effort to ensure its AI reflects its creators’ communities.
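Costin didn’t detail the mechanism, but one way to read his description is as weighted sampling: when a prompt leaves a person’s attributes unspecified, the system draws them from the requester’s regional distribution. The sketch below is a guess at that shape; the regions, categories, weights, and function names are all invented for illustration.

```python
import random

# A hypothetical illustration of locale-aware attribute sampling, one way to
# read Costin's description. The regions, categories, and weights below are
# invented placeholders, not Adobe's data or algorithm.

REGIONAL_SKIN_TONE_DIST = {
    "region_a": {"light": 0.55, "medium": 0.30, "dark": 0.15},
    "region_b": {"light": 0.25, "medium": 0.40, "dark": 0.35},
}

def sample_attribute(dist: dict) -> str:
    """Draw one attribute value in proportion to its regional frequency."""
    values, weights = zip(*dist.items())
    return random.choices(values, weights=weights, k=1)[0]

def fill_unspecified_attributes(prompt: str, region: str) -> str:
    """If the prompt doesn't already specify skin tone, append a sampled one."""
    dist = REGIONAL_SKIN_TONE_DIST.get(region)
    if dist is None or "skin" in prompt.lower():
        return prompt  # user was explicit, or region unknown: leave it alone
    return f"{prompt}, {sample_attribute(dist)} skin tone"

print(fill_unspecified_attributes("portrait of a doctor", "region_b"))
```

The design choice worth noting is that sampling only kicks in when the user hasn’t specified an attribute: explicit prompts are left alone, while ambiguous ones get outputs that, over many generations, track the stated distribution rather than whatever the training data happened to over-represent.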
In Costin’s decades of software development experience, this epoch stands out. “We’ve seen incredible progress in short periods of time,” he said. “This is the fastest innovation cycle we’ve all experienced. It requires new ways of building software.”
Adobe appears to be adjusting well on that front, but it’s hard to reconcile that speed with the required levels of governance – and with public perception.
“It is impossible to trust what you see,” warned Costin. “This will change how people perceive the internet.” His comments echoed those of his colleague, Adobe Chief Strategy Officer Scott Belsky, who recently noted, "We're entering an era where we can no longer believe our eyes. Instead of 'trust but verify,' it's going to become 'verify then trust.'"
Giving them what they need
Perhaps, though, the road will be a bit easier for Adobe, which, under Costin’s guidance, is focusing less on bespoke image creation and more on its fundamentals of image alteration and enhancement. I’ve rarely used Adobe Firefly to create entirely new images, but I’ll often apply Generative Fill in Photoshop to match the aspect ratio I need, extending an empty canvas without touching the photo’s subject (I did once use it to expand and alter meme images).
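The canvas arithmetic behind that workflow is simple. Below is a minimal sketch of it – a hypothetical helper, not anything from Photoshop’s scripting API – that computes how much empty canvas to add so Generative Fill has margins to paint.

```python
# A hypothetical helper mirroring that manual workflow: given an image size
# and a target aspect ratio, compute an expanded canvas that fully contains
# the original pixels. Back-of-the-envelope math, not an Adobe API.

def expanded_canvas(width: int, height: int, target_ratio: float) -> tuple:
    """Return (new_width, new_height) matching target_ratio (width/height)
    while keeping the original image untouched inside the new canvas."""
    if width / height < target_ratio:
        # Image is too narrow for the target frame: widen the canvas.
        return round(height * target_ratio), height
    # Image is too wide: add height instead.
    return width, round(width / target_ratio)

# A 1200x800 photo headed for a 16:9 frame keeps its height and gains
# blank side margins for Generative Fill to paint in.
print(expanded_canvas(1200, 800, 16 / 9))  # (1422, 800)
```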
Adobe’s AI future looks very much like its recent past: adding more generative AI to key features and apps that speak directly to users’ primary tasks and needs. I asked Costin to outline where Adobe is going in AI. He didn’t announce any new products, but he did acknowledge a few things.
On the Adobe Premiere Pro front: “Of course, we’re working on video.” Costin and his team are talking to the Premiere Pro and After Effects teams, asking them what they need. Whatever Adobe does there will follow the same AI playbook as Photoshop. I also asked about batch processing in Photoshop, and Costin told me, “We’re thinking about it… nothing to announce, but it’s a key workflow.”
Despite the breakneck pace of development, and the challenges involved, Costin remains positive about the trajectory of generative AI writ large.
“I’m an optimist,” he smiled.
A 38-year industry veteran and award-winning journalist, Lance has covered technology since PCs were the size of suitcases and “on line” meant “waiting.” He’s a former Lifewire Editor-in-Chief, Mashable Editor-in-Chief, and, before that, Editor-in-Chief of PCMag.com and Senior Vice President of Content for Ziff Davis, Inc. He also wrote a popular, weekly tech column for Medium called The Upgrade.
Lance Ulanoff makes frequent appearances on national, international, and local news programs including Live with Kelly and Mark, the Today Show, Good Morning America, CNBC, CNN, and the BBC.