Adobe Announces New Generative AI App That Doesn’t Steal Artists’ Work

Lon Harris
Lon Harris is a contributor to dot.LA. His work has also appeared on ScreenJunkies, RottenTomatoes and Inside Streaming.

Adobe jumped back into the white-hot generative AI field on Tuesday, announcing a new tool called Firefly that allows users to create and modify images entirely via text. Though the company already has some basic generative AI features built into its Photoshop, Express and Lightroom software, Firefly represents a significant step forward, using multiple AI models to generate a wide variety of content formats and types.

Essentially, this is graphic design for people who never figured out how gradients or masking layers work. Just by typing simple phrases and written instructions, a user can tell Firefly to flip an image of a bright summer day into winter, automatically add or remove elements from a photo, design their own graphics and logos, or even create a custom “paintbrush” based on a texture or object already featured in an image. The first Firefly app to debut “transfers” different styles and artistic techniques onto a pre-existing image, and will also apply styles or texturing to letters and fonts based on written instructions; it launches soon in a private beta.

What's Firefly?

Adobe already makes widely used visual and graphic design software, so its integration of generative AI tools makes a lot of sense. While AI art tools like Stable Diffusion and Midjourney can be fun to experiment with (and have gone viral on social media purely for their ability to bring imaginative concepts and scenes to life), Adobe products present a more immediate and practical application.

One other element setting Firefly apart is how its models are trained. According to Adobe, Firefly has been trained exclusively on the company’s own royalty-free media library, Adobe Stock, which contains content Adobe owns along with openly licensed and public domain images. In the future, Firefly users will also be able to train the software on their own content and designs, and all material produced by the apps will carry metadata indicating whether it was entirely AI-generated and, if not, the original source of all the visual content.

Art vs. Theft

This is a major sea change from the technique employed by Stable Diffusion, Midjourney, OpenAI’s DALL-E, and other similar AI art tools. These apps were trained on databases of existing images culled from public image hosting websites, including copyrighted or privately owned material. This has naturally led to a multitude of murky ethical and legal debates about who really owns artwork and the sometimes subtle differences between influence, homage, and outright theft.

Some experts have argued that training software on artwork so that it can one day create original artwork of its own falls under the definition of “fair use,” much like an artist spending a day studying paintings at a museum, then returning to their own studio to apply what they have learned. But while human artists can learn about plagiarism and how to avoid it, and can use the work of others to develop their own voices and styles, computers have no such understanding or “voice” of their own; they will unquestioningly re-apply techniques and styles borrowed (or stolen) from across the web on new projects.

These issues are at the heart of the ongoing lawsuit against Stability AI and Midjourney, along with DeviantArt and its AI art generator, DreamUp. A trio of artists allege that these organizations are infringing on their rights, along with the rights of millions of other artists, by training AI tools on their work without consent. Getty Images has also filed a lawsuit against Stability AI, alleging “brazen infringement” of their intellectual property “on a staggering scale.” They claim Stability copied more than 12 million images from their database with no permission or compensation.

Pivoting to Video

If all of that sounds heady, multi-faceted and complex, just get ready, because the next major step in generative AI – text-to-video generation – appears to be right around the corner. AI startup Runway, which to date has focused on specialized applications like background removal and pose detection, announced its first AI video editing model, known as Gen-1, back in February, and uploaded a demo reel of the next iteration – Gen-2 – earlier this week. (Runway helped to develop the open-source text-to-image model Stable Diffusion.)

While Gen-1 requires a source image and video to produce content, it is not transforming existing footage; it generates an entirely new video based on multiple inputs. Gen-2 uses the same underlying model, improved with the addition of text prompts. So far, the results are not exactly “Top Gun: Maverick” in 4K; they tend to look indistinct, blurry and pixelated. Many also have the hallucinatory, psychedelic quality that frequently results from constantly swirling, uncanny AI animation. Still, the resulting videos are identifiable based on the prompts, and the footage exists without ever being recorded with a camera.

For now, users wanting to check out Gen-2 can sign up for a waitlist on Runway’s Discord, but it’s only a matter of time before these tools become more widely available (despite the astounding amount of processing power required to generate original video). So whatever thorny ethical considerations remain around generative AI, the time to figure them out is now; there’s apparently no stopping this proverbial train now that it’s left the station.

