
Adobe is upgrading Photoshop’s generative AI model — and releasing more for Illustrator and Express

Adobe’s new Firefly generative AI models can create high-quality images, editable vector graphics, and customizable design templates.



Adobe announces new Firefly Image 2, Firefly Vector, and Firefly Design models at MAX 2023.
Image: Adobe

Adobe is going all in on AI, announcing three new generative AI models today that add powerful features to Illustrator and Adobe Express and vastly improve Photoshop’s text-to-image capabilities. During the company’s Adobe Max event on Tuesday, Adobe unveiled its Firefly Image 2 model — the latest version of the original Firefly AI image generator that powers popular features like Photoshop’s Generative Fill — alongside two new Firefly models for generating vector images and design templates.

Adobe says its new Firefly Image 2 model generates significantly higher-quality images than its predecessor, particularly for high-frequency details like foliage, skin texture, hair, hands, and facial features when rendering photorealistic humans. Images generated using the Firefly Image 2 model are higher resolution and feature more vivid colors and stronger color contrast.

Here’s a comparison between the two Firefly image models, both depicting a woman exploding into a cloud of marshmallows; you can even see the sugary “wrinkles” on the marshmallows in the right image.
Image: Adobe

The Image 2 model also introduces new AI-powered editing capabilities to help users customize their results. Photo settings can be adjusted manually or automatically to control the depth of field, motion blur, and field of view of a generated image, much like manual camera controls. A new “Prompt Guidance” feature helps users improve the wording of their text descriptions, and prompts can now be automatically completed to speed up the process.

Adobe additionally introduced a new “Generative Match” feature that steers the style of generated content to match specific reference images. Users can choose from a preset list of images or upload their own references, using a slider to control how closely the output resembles the reference. Content Credentials, a digital “nutrition label” that carries attribution metadata and identifies the image as AI-generated, will be automatically attached to the final outputs.

Adobe is pitching Generative Match to companies that want to easily replicate their own brand style.
Image: Adobe

In a dedicated blog post, Adobe’s design leader, Scott Belsky, said the company has developed “new policies and safeguards” to protect Generative Match from being abused. The feature prompts users to agree to Adobe’s terms of use and confirm they have the rights to use the uploaded image, and Adobe stores a thumbnail of the uploaded content (which isn’t used to train AI models) on its servers to provide a level of accountability. Generative Match will also remain in beta while the company seeks feedback, and users won’t be permitted to use it for commercial purposes during this time.

Still, it seems there’s very little actually preventing users from mimicking protected content right now, which may drive a bigger wedge between Adobe and creatives who oppose having their style replicated by AI. For now, the system seems to be more about limiting Adobe’s liability than preventing copycat behavior in the first place.

If the image you upload into Generative Match already has content credentials applied to identify it as protected content, you won’t be able to replicate its style.
Image: Adobe

Firefly Image 2 is available to try today via the web-based Firefly beta and is “coming soon” to Creative Cloud apps. That means it won’t be available in Photoshop (standard, beta, or for the web) yet, but you can at least compare it against the original Firefly image model while we wait for it to roll out.

Adobe also unveiled a new Firefly Vector model for Adobe Illustrator, which the company claims is the “world’s first generative AI model for vector graphics.” Available now in the Firefly beta, Adobe’s Firefly Vector model lets users create editable vector images from text prompts, automatically splitting each element of the graphic into “logical” groups and layers. Unlike traditional JPEG and PNG files, vector graphics (commonly saved as SVG files) are ideal for creatives like logo designers because they can be scaled to any size without any loss in image quality.

The Firefly Vector model generates three variations of the described image to allow users to select the best option.
Image: Adobe

Adobe says that, just like the original Firefly text-to-image model, its Firefly Vector model is designed to be safe for commercial use (once it leaves beta) because it was trained on licensed content like Adobe Stock and public domain content whose copyright has expired. The Firefly Vector model is available to try today via the Adobe Illustrator beta, alongside additional beta features like Mockup (which realistically stages your designs on a 3D model) and Retype (for identifying and editing vector fonts).

Lastly, Adobe introduced a Firefly Design model that generates customizable templates for print, social posts, online advertising, video, and more. Powering the new text-to-template beta feature in Adobe Express, Adobe’s Firefly Design model uses text prompts to generate fully editable templates for “all popular aspect ratios.” It shares some similarities with Magic Design from Canva (another all-in-one design platform that rivals Adobe Express) in that users can describe something like a “beach holiday flyer” to generate unique templates instead of dropping individual text and image assets onto a blank canvas.

AI-generated templates provide a more personalized starting point for your projects, rather than hunting for preexisting designs or starting from scratch.
Image: Adobe

These new models may only be available in beta for now, but we may not have long to wait before they roll out to general availability if the release timeline for Photoshop’s Generative Fill feature is anything to go by. The original Firefly model has been used to generate over 3 billion images to date, according to Adobe, and it’s unusual to see such a popular product updated before its first birthday. Other companies like Canva and Microsoft have released various AI-powered creative tools over the last year, so perhaps opening the floodgates on its own AI innovations is the best way for Adobe to remain competitive.