
      Adobe Photoshop’s new “Generative Fill” AI tool lets you manipulate photos with text / ArsTechnica · Tuesday, 23 May, 2023 - 19:07 · 1 minute

    An example of a 1983 file photo of the Apple Lisa computer that has been significantly enhanced by the new "Generative Fill" AI tool in the Adobe Photoshop beta. (credit: Apple / Benj Edwards / Adobe)

    On Tuesday, Adobe added a new tool to its Photoshop beta called "Generative Fill," which uses cloud-based image synthesis to fill selected areas of an image with new AI-generated content based on a text description. Powered by Adobe Firefly, Generative Fill works similarly to a technique called "inpainting" used in DALL-E and Stable Diffusion releases since last year.

    At the core of Generative Fill is Adobe Firefly, Adobe's custom image-synthesis model. As a deep learning AI model, Firefly has been trained on millions of images in Adobe's stock library to associate imagery with text descriptions of it. Now that Firefly is part of Photoshop, users can type in what they want to see (e.g., "a clown on a computer monitor"), and Firefly will synthesize several options for the user to choose from. Generative Fill uses a well-known AI technique called "inpainting" to create a context-aware generation that can seamlessly blend synthesized imagery into an existing image.
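The blending step at the heart of inpainting can be illustrated with a minimal sketch (this is not Adobe's implementation, which is proprietary and far more involved): generated pixels replace the original only inside a selection mask, leaving the rest of the image untouched.

```python
import numpy as np

def composite_inpaint(original, generated, mask):
    """Blend generated pixels into the original wherever mask == 1.0.

    original, generated: float arrays of shape (H, W, C) with values in [0, 1]
    mask: float array of shape (H, W, 1); 1.0 inside the selected region
    """
    return mask * generated + (1.0 - mask) * original

# Toy 2x2 single-channel example: fill only the left column with
# generated content, keeping the right column from the original.
original = np.array([[[0.2], [0.8]],
                     [[0.2], [0.8]]])
generated = np.array([[[0.5], [0.5]],
                      [[0.5], [0.5]]])
mask = np.array([[[1.0], [0.0]],
                 [[1.0], [0.0]]])

result = composite_inpaint(original, generated, mask)
print(result[:, :, 0])  # left column from `generated`, right from `original`
```

Real diffusion-based inpainting performs this masking repeatedly inside the denoising loop so the generated region stays consistent with the surrounding context, rather than compositing once at the end.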

    To use Generative Fill, users select an area of an existing image they want to modify. After selecting it, a "Contextual Task Bar" pops up that allows users to type in a description of what they want to see generated in the selected area. Photoshop sends this data to Adobe's servers for processing, then returns results in the app. After generating, the user can pick from several generated variations or request more to browse through.
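The round trip described above (selection + prompt out, several candidate fills back) can be sketched in outline. Everything here is hypothetical: Adobe's actual protocol, endpoints, and field names are not public, so the structures below are invented purely to illustrate the flow.

```python
import json

def build_fill_request(image_id, selection_bounds, prompt):
    """Package the user's selection and prompt for a cloud generation call.

    Field names are invented for illustration; the real Photoshop
    request format is not documented publicly.
    """
    return {
        "image": image_id,
        "selection": selection_bounds,  # e.g., a pixel bounding box of the area
        "prompt": prompt,
        "num_variations": 3,            # several options for the user to browse
    }

def choose_variation(results, index):
    """The user picks one of the returned generations (or requests more)."""
    return results["variations"][index]

request = build_fill_request("lisa_1983.psd", (120, 40, 480, 360),
                             "a clown on a computer monitor")
print(json.dumps(request, indent=2))

# A mocked server response carrying three candidate fills:
response = {"variations": ["gen_0.png", "gen_1.png", "gen_2.png"]}
print(choose_variation(response, 1))  # → "gen_1.png"
```

Returning multiple variations per request matches the behavior the article describes: image synthesis is stochastic, so offering several samples and letting the user choose (or regenerate) is a common design for generative tools.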



      Ethical AI art generation? Adobe Firefly may be the answer / ArsTechnica · Wednesday, 22 March, 2023 - 17:27 · 1 minute

    An Adobe Firefly AI image generator example. (credit: Adobe)

    On Tuesday, Adobe unveiled Firefly, its new AI image synthesis generator. Unlike other AI art models such as Stable Diffusion and DALL-E, Adobe says its Firefly engine, which can generate new images from text descriptions, has been trained solely on legal and ethical sources, making its output safe for use by commercial artists. It will be integrated directly into Creative Cloud, but for now, it is only available as a beta.

    Since the mainstream debut of image synthesis models last year, the field has been fraught with issues around ethics and copyright. For example, the AI art generator called Stable Diffusion gained its ability to generate images from text descriptions after researchers trained an AI model to analyze hundreds of millions of images scraped from the Internet. Many (probably most) of those images were copyrighted and obtained without the consent of their rights holders, which led to lawsuits and protests from artists.

    To avoid those legal and ethical issues, Adobe created an AI art generator trained solely on Adobe Stock images, openly licensed content, and public domain content, ensuring the generated content is safe for commercial use. Adobe goes into more detail in its news release:
