Adobe Announces New Generative AI App That Doesn’t Steal Artists’ Work

Lon Harris
Lon Harris is a contributor to dot.LA. His work has also appeared on ScreenJunkies, RottenTomatoes and Inside Streaming.

Adobe jumped back into the white-hot generative AI field on Tuesday, announcing a new tool called Firefly that allows users to create and modify images exclusively via text. Though the company already has some basic generative AI tools built into its Photoshop, Express and Lightroom software, Firefly represents a significant step forward, using multiple AI models to generate a wide variety of content types and formats.

Essentially, this is graphic design for people who never figured out how gradients or masking layers work. Just by typing simple phrases and written instructions, a user can tell Firefly to flip an image of a bright summer day into winter, automatically add or remove elements from a photo, design their own graphics and logos, or even create a custom “paintbrush” based on a texture or object already featured in an image. The first Firefly app to debut “transfers” different styles and artistic techniques onto a pre-existing image, and will also apply styles or texturing to letters and fonts based on written instructions; it launches soon in a private beta.

What's Firefly?

Adobe already makes widely used visual and graphic design software, so its integration of generative AI tools makes a lot of sense. While AI art tools like Stable Diffusion and Midjourney can be fun for anyone to experiment with – and have gone viral on social media purely for their ability to bring imaginative concepts and scenes to life – Adobe products present a more immediate and practical application.

One other element setting the company’s new Firefly tools apart is how they’re trained. According to the company, Firefly’s models have been trained exclusively on its own royalty-free media library, Adobe Stock, which contains content owned by the company as well as openly licensed or public domain images. In the future, Firefly users will also be able to train the software on their own content and designs, and all material produced by the apps will contain metadata indicating whether it was entirely AI-generated and, if not, the original source of the visual content.

Art vs. Theft

This is a sea change from the technique employed by Stable Diffusion, Midjourney, OpenAI’s DALL-E, and other similar AI art tools. These apps were trained on databases of existing images culled from public image hosting websites, including copyrighted or privately owned material. This has naturally led to a multitude of murky ethical and legal debates about who really owns artwork and the sometimes subtle differences between influence, homage, and outright theft.

Some experts have argued that training software on artwork so that it can one day create original artwork of its own falls under the definition of “fair use,” just like an artist going to a museum, studying paintings all day, and then going home to their own studio to apply what they have learned. But while human artists can learn about plagiarism and how to avoid it, and can use the work of others to develop their own voices and styles, computers have no such understanding or “voice” of their own, and will unquestioningly re-apply techniques and styles they’ve borrowed/stolen from across the web for use on new projects.

These issues are at the heart of the ongoing lawsuit against Stability AI and Midjourney, along with DeviantArt and its AI art generator, DreamUp. A trio of artists allege that these organizations are infringing on their rights, along with the rights of millions of other artists, by training AI tools on their work without consent. Getty Images has also filed a lawsuit against Stability AI, alleging “brazen infringement” of their intellectual property “on a staggering scale.” They claim Stability copied more than 12 million images from their database with no permission or compensation.

Pivoting to Video

If all of that sounds heady, multi-faceted and complex, just get ready, because the next major step in generative AI – text-to-video generation – appears to be right around the corner. AI startup Runway, which to date has focused on specialized applications like background removal and pose detection, announced its first AI video editing model, known as Gen-1, back in February, and uploaded a demo reel of the next iteration – Gen-2 – earlier this week. (Runway helped to develop the open-source text-to-image model Stable Diffusion.)

While Gen-1 requires a source image and a source video to produce content, it isn’t simply transforming the existing video; it generates an entirely new one based on those inputs. Gen-2 builds on the same model, improved with the addition of text prompts. So far, the results are not exactly “Top Gun: Maverick” in 4K; they tend to look indistinct, blurry and pixelated, and many have that hallucinatory, psychedelic quality that frequently results from constantly swirling, uncanny AI animation. Still, the resulting videos are identifiable based on the prompts, and the footage exists without ever being recorded with a camera.

For now, users wanting to check out Gen-2 can sign up for a waitlist on Runway’s Discord, but it’s just a matter of time before these tools go more widely public (despite the astounding amount of processing power required to generate original video). So whatever thorny ethical considerations remain around generative AI, the time to figure them out is now, because there’s apparently no stopping this proverbial train now that it’s left the station.


LA Tech Week Day Two: Social Highlights
Evan Xie

L.A. Tech Week has brought venture capitalists, founders and entrepreneurs from around the world to the California coast. With so many tech nerds in one place, it's easy to laugh, joke and reminisce about the future of tech in SoCal.

Here's what people are saying about day two of L.A. Tech Week on social:


LA Tech Week: Technology and Storytelling for Social Good

Decerry Donato

Decerry Donato is a reporter at dot.LA. Prior to that, she was an editorial fellow at the company. Decerry received her bachelor's degree in literary journalism from the University of California, Irvine. She continues to write stories to inform the community about issues or events that take place in the L.A. area. On the weekends, she can be found hiking in the Angeles National Forest or sifting through racks at your local thrift store.

Photo taken by Decerry Donato

On Monday, Los Angeles-based philanthropic organization Goldhirsh Foundation hosted the Technology and Storytelling for Social Good panel at Creative Visions studio to kick off LA Tech Week.

Tara Roth, president of the foundation, moderated the panel and gathered nonprofit and tech leaders including Paul Lanctot, web developer of The Debt Collective; Alexis Cabrera, executive director of 9 Dots; Sabra Williams, co-founder of Creative Acts; and Laura Gonzalez, senior program manager of Los Angeles Cleantech Incubator (LACI).

Each of the panelists is a grantee of Goldhirsh Foundation’s LA2050, an initiative launched in 2011 that continuously drives and tracks progress toward a shared vision for the future of Los Angeles. Goldhirsh’s vision is to make Los Angeles better for all, and to achieve that goal the foundation invests in organizations, creates partnerships and leverages social capital through community events.

The panelists shared how the work they are doing in their respective sectors uses technology to solve some of society's most pressing challenges, and highlighted the importance of tech literacy across every community.


LA Tech Week Is Back! Here Are the Events We're Watching

Kristin Snyder

Kristin Snyder is dot.LA's 2022/23 Editorial Fellow. She previously interned with Tiger Oak Media and led the arts section for UCLA's Daily Bruin.

Evan Xie

This is the web version of dot.LA’s daily newsletter. Sign up to get the latest news on Southern California’s tech, startup and venture capital scene.


MONDAY

LA Hardtech: Local Talent Meets CEOs: Want to see robots in action? This hardtech event will showcase product demos and feature conversations about all things aircraft, satellites, electric vehicles, robots and medical devices. June 5 from 5 p.m. to 8 p.m. in El Segundo.
