Lon Harris

Evan Xie

If you can believe it, it’s been more than a decade since rapper Macklemore extolled the virtues of thrift shopping in a viral music video. But while scouring the racks of vintage clothing stores in search of the ultimate come-up may have waned in popularity since 2012, the online version of the pastime is apparently thriving.

According to a new trend story from CNBC, interest in “reselling” platforms like Etsy-owned Depop and Poshmark has exploded in the years since the start of the COVID-19 pandemic and lockdown. In an article that spends a frankly surprising amount of time focused on sellers receiving death threats before concluding that they’re “not the norm,” the network cites the usual belt-tightening ecommerce suspects – housebound individuals doing more of their shopping online coupled with inflation woes and recession fears – as the causes behind the uptick.

As for data, there’s a survey from Depop itself finding that 53% of respondents in the UK are more inclined to shop secondhand as living costs continue to rise. Additional research from Advance Market Analytics confirms the trend, citing at its core not just increased demand for cheap clothes but the pressing need for a sustainable alternative to recycling clothing materials.

The massive popularity of “thrift haul” videos across social media platforms like YouTube and TikTok has also boosted the visibility of vintage clothes shopping and hunting for buried treasure. Teenage TikToker Jacklyn Wells scores millions of views on her thrift haul videos, only to be routinely mass-accused of greed for ratcheting up the Depop resale prices on her coolest finds. Nonetheless, viral clips like Wells’ have helped embed secondhand shopping apps more firmly within online fashion culture. Fashion and beauty magazine Hunger now features a regular list of the hottest items on the resale market, with a focus on how to use them to recreate hot runway looks.

As with a lot of consumer and technology trends, the sudden surge of interest in secondhand clothing retailers was only partly organic. According to The Drum, ecommerce apps Vinted, eBay, and Depop have collectively spent around $120 million on advertising over the last few years, promoting the recent vintage shopping boom and helping to normalize secondhand shopping. This includes conventional advertising, of course, but also deals with online influencers to post content like “thrift haul” videos, along with shoutouts on where to track down the best finds.

Reselling platforms have naturally responded to the increase in visibility with new features (as well as a predictable hike in transaction fees). Poshmark recently introduced livestreamed “Posh Shows,” during which sellers can host auctions or provide deeper insight into their inventory. Depop, meanwhile, has introduced a “Make Offer” option to fully integrate the haggling and negotiation process into the app, rather than forcing buyers and sellers to text or direct message one another elsewhere. (The platform formerly had a comments section on product pages, but shut it down after finding that it led to arguments and wasn’t particularly helpful in driving purchase decisions.)

Now that it’s clear there’s money to be made in online thrift stores, larger and more established brands and retailers are also pushing their way into the space. H&M and Target have both partnered with online thrift store ThredUp on featured collections of previously-worn clothing. A new “curated” resale collection from Tommy Hilfiger – featuring lightly damaged items that were returned to its retail stores – was developed and promoted through a partnership with Depop, which has also teamed up with Kellogg’s on a line of Pop-Tarts-inspired wear. J.Crew is even bringing back its classic ’80s Rollneck Sweater in a nod to the renewed interest in all things vintage.

Still, with any surge of popularity and visibility, there must also come an accompanying backlash. In a sharp editorial this week for the University of Arizona’s Daily Wildcat, thrift shopping enthusiast Luke Lawson makes the case that sites like Depop are “gentrifying fashion,” stripping communities of local thrift stores that provide a valuable public service, particularly for members of low-income communities. Meanwhile, UK tabloids are routinely filled with secondhand shopping horror stories these days – further evidence of these platforms’ increased visibility among British consumers specifically, not to mention the general dangers of buying personal items from strangers you meet over the internet.


With rumors swirling this week about the potential (now delayed) arrest of former president Donald Trump, social media responded as it tends to do with any major news story: by memeing the heck out of it. In this case, imaginative online pranksters took to generative AI art apps like Midjourney and Stable Diffusion to create fake images of Trump being placed under arrest and taken to prison. One narrative thread of AI imagery – depicting Trump’s journey from arrest to prison to escape, and ultimately to seeking sanctuary in a McDonald’s – was apparently enough to get British journalist Eliot Higgins temporarily banned from the Midjourney app entirely.

Naturally, this led to another round of deep concern from the press about the potential future implications of AI art and other kinds of “deepfake” technology. Soon, these editorials warn, we may be completely incapable of distinguishing fact from fiction, or of trusting even evidence we can see and hear. With new AI apps and concepts flooding the internet every day, we’re now repeating this news cycle every few weeks. It was only late February when everyone was concerned about voice deepfakes, following the spread of clips in which Joe Biden was trapped in the “Skinamarink” house or recalled the events of the 2011 film “We Bought a Zoo.”

Certainly, no one could deny the power a single potent image can have on public perception. How many times have social media users shared that memorable photograph of the Clintons and Trumps chatting it up at a party together, or Elon Musk posing next to convicted sex trafficker Ghislaine Maxwell, or those Charlottesville protesters with the tiki torches? The whole concept of photojournalism is built around the idea that a carefully-captured image can tell a story just as effectively as a 500-word article.

But is AI Art actually believable?

It’s nonetheless worth pointing out, in light of the viral success of Higgins’ and others’ “Trump arrest” AI art threads, that we’re not yet in a world in which apps like Midjourney could potentially sway elections. Consumer-facing AI products can certainly produce compelling images based only on simple prompts, but once you get out of the realm of relatively simple portraits and straightforward concepts, the results become exponentially less photorealistic. Even in Higgins’ own thread, static shots of Trump alone in a prison cell, reading a book or slouching against a fence, look far more compelling than action shots of him shooting hoops with other inmates or fleeing authorities to a fast food joint under cover of night. (And though the Golden Arches come through perfectly, the McDonald’s name itself doesn’t survive the translation; Midjourney renders the logo as reading “Minonad.”)

AI art apps famously struggle to reproduce the more nuanced and complicated bits of human anatomy like faces and hands (though there have been recent signs of improvement here). Some shapes and textures, like liquids, also remain problematic for the apps, though again there are some signs of hope on the horizon.

All the “sky is falling” editorials about how, one day soon, you won’t be able to tell whether a photo is real or AI-generated begin with the core assumption that these remaining problems will be solved, and that generative AI art apps will essentially become perfect very soon. And look, there is no direct evidence that this assumption is wrong, and the fact that these apps exist in the first place is impressive.

But is it a guarantee that Midjourney will definitely get a lot better at photorealism in the near future, such that we have to be actively concerned when we see a photo of President Trump about whether or not we can believe our eyes? Is this the kind of thing we can “teach” software just by showing it thousands of individual labeled photographs and telling it “this is what reality looks like”? Does anyone even know?

The Pixar Problem

I’m reminded of a San Diego Comic-Con panel I attended in 2008. (Bear with me! I swear this is gonna link up.) Pixar did a presentation in Hall H that year previewing its latest film, “Up,” and the conversation included some insights into the more complicated animation challenges the studio had encountered to date. “Up” director Pete Docter was a veteran of one of the studio’s first and most beloved films, “Monsters, Inc.,” and he said that one of the chief obstacles to animating that film was the character of Sulley, who’s covered in thick blue fur. When Pixar began work on “Monsters, Inc.,” its computer animation software didn’t yet know how to reproduce realistic tufts of hair.

This makes sense when you think about the way hair behaves in the real world. There’s a uniform overall direction: all of Sulley’s fur follows him around wherever he goes, and is impacted by his momentum, the wind, other characters and objects moving around him, and so forth. But “fur” is not a single uniform object; it’s actually made up of thousands upon thousands of individual strands of hair, which don’t all behave in exactly the same way at the same time.

Computers aren’t naturally that good at reproducing this kind of randomized group movement; it took Pixar animators years of diligent work and a whole lot of computer processing power to sort it out. Other complex substances and surfaces, like water, have also bedeviled animators for years. Disney animators working on “Moana” had to develop new techniques and technologies specifically to address the challenges posed by a movie in which the ocean was both a setting and a supporting character. It’s the same situation with large crowds: they move as a unit, yes, but they’re actually made up of individual people, each of whom also moves around on their own. That’s tough for a computer to animate without very specific instructions.
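To make the problem concrete, here’s a deliberately simplified Python sketch – nothing like Pixar’s actual simulation code, which also has to handle clumping, collisions and lighting – of why fur (or a crowd) is expensive to compute: every strand inherits the shared bulk motion, but each one also needs its own independent variation, recomputed on every frame.

```python
import random

# Toy illustration only: thousands of strands share one bulk motion,
# but each strand also gets its own small, independent jitter.
NUM_STRANDS = 100_000  # a real character can carry millions

def simulate_frame(strands, bulk_motion, wind):
    """Advance every strand by the shared motion plus per-strand randomness."""
    next_positions = []
    for x, y in strands:
        jitter_x = random.gauss(0, 0.02)  # no two strands move exactly alike
        jitter_y = random.gauss(0, 0.02)
        next_positions.append((x + bulk_motion[0] + wind[0] + jitter_x,
                               y + bulk_motion[1] + wind[1] + jitter_y))
    return next_positions

strands = [(random.random(), random.random()) for _ in range(NUM_STRANDS)]
strands = simulate_frame(strands, bulk_motion=(0.5, 0.0), wind=(0.1, 0.0))
```

Even this toy version has to touch every strand individually on every frame; the real systems layer physics, collisions and rendering on top of that, which is part of why it took years to get right.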

Which (finally!) brings me back to Midjourney and AI art apps. The assumption that the computer will “figure out” all of these challenges on its own, just by being trained and retrained on more and more images, strikes me as a pretty significant one. We tend to view the advancement of technology as purely linear, a straight line from where we are now to “the future.” But in fact, a lot of innovations develop in fits and starts. An intractable problem presents itself, and it can take a relatively long time to sort out, if in fact it ever gets resolved. (It’s been more than a decade since we were first promised that self-driving cars and truly immersive virtual reality were just a few years out, after all.)

Perhaps Midjourney will have an easier time with fur and juice and Times Square on New Year’s Eve than Pixar and Disney’s software did, and won’t require as much patient, careful direction and processing power to sort it all out. But I’ve yet to see any evidence that it’s a sure thing, either.

Adobe jumped back into the white-hot generative AI field on Tuesday, announcing a new tool called Firefly that allows users to create and modify images exclusively via text. Though the company already has some basic generative AI features built into its Photoshop, Express and Lightroom software, Firefly represents a significant step forward, using multiple AI models to generate a wide variety of content formats and types.

Essentially, this is graphic design for people who never figured out how gradients or masking layers work. Just by typing simple phrases and written instructions, a user can tell Firefly to flip an image of a bright summer day into winter, automatically add or remove elements from a photo, design their own graphics and logos, or even create a custom “paintbrush” based on a texture or object already featured in an image. The first Firefly app to debut “transfers” different styles and artistic techniques onto a pre-existing image, and will also apply styles or texturing to letters and fonts based on written instructions; it launches soon in a private beta.

What's Firefly?

Adobe already makes widely-used visual and graphic design software, so its integration of generative AI tools makes a lot of sense. While AI art tools like Stable Diffusion and Midjourney can be fun for anyone to experiment with – and have gone viral on social media purely for their ability to bring imaginative concepts and scenes to life – Adobe’s products present a more immediate and practical application.

One other element setting the company’s new Firefly tools apart is how they’re trained. According to the company, Firefly’s models have been trained exclusively on Adobe’s own royalty-free media library, Adobe Stock, which contains content owned by the company as well as openly licensed and public domain images. Firefly users will eventually be able to train the software on their own content and designs, and all material produced by the apps will contain metadata indicating whether or not it was entirely AI-generated and, if not, the original source of the visual content.
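Adobe hasn’t published the exact format of that metadata, but conceptually, the provenance record attached to each exported asset might look something like the following purely hypothetical sketch (every field name and value here is invented for illustration, not Adobe’s actual schema):

```python
# Hypothetical example only -- Adobe has not published Firefly's metadata schema.
provenance = {
    "generator": "Adobe Firefly (beta)",
    "fully_ai_generated": False,  # True when no source imagery was used at all
    "source_assets": [
        {"id": "AdobeStock-000000000", "license": "royalty-free"},  # invented placeholder ID
    ],
    "user_prompt": "turn this summer scene into winter",
}
```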

Art vs. Theft

This is a major sea change from the technique employed by Stable Diffusion, Midjourney, OpenAI’s DALL-E, and other similar AI art tools. These apps were trained on databases of existing images culled from public image hosting websites, including copyrighted or privately owned material. This has naturally led to a multitude of murky ethical and legal debates about who really owns artwork and the sometimes subtle differences between influence, homage, and outright theft.

Some experts have argued that training software on artwork so that it can one day create original artwork of its own falls under the definition of “fair use,” just like an artist going to a museum, studying paintings all day, and then going home to their own studio to apply what they’ve learned. But while human artists can learn about plagiarism and how to avoid it, and can use the work of others to evolve and develop their own voices and styles, computers have no such understanding or “voice” of their own, and will unquestioningly re-apply techniques and styles they’ve borrowed (or stolen) from across the web to new projects.

These issues are at the heart of the ongoing lawsuit against Stability AI and Midjourney, along with DeviantArt and its AI art generator, DreamUp. A trio of artists allege that these organizations are infringing on their rights, along with the rights of millions of other artists, by training AI tools on their work without consent. Getty Images has also filed a lawsuit against Stability AI, alleging “brazen infringement” of their intellectual property “on a staggering scale.” They claim Stability copied more than 12 million images from their database with no permission or compensation.

Pivoting to Video

If all of that sounds heady, multi-faceted and complex, just get ready, because the next major step in generative AI – text-to-video generation – appears to be right around the corner. AI startup Runway, which to date has focused on specialized applications like background removal and pose detection, announced its first AI video editing model, known as Gen-1, back in February, and uploaded a demo reel of the next iteration – Gen-2 – earlier this week. (Runway helped to develop the open-source text-to-image model Stable Diffusion.)

While Gen-1 requires a source image and video to produce content, it isn’t simply transforming the existing video; it generates an entirely new one based on those multiple inputs. Gen-2 uses the same underlying model, improved with the addition of text prompts. So far, the results are not exactly “Top Gun: Maverick” in 4K; they tend to look indistinct, blurry and pixelated, and many have that hallucinatory, psychedelic quality that frequently results from constantly swirling, uncanny AI animation. Nonetheless, the resulting videos are recognizable from the prompts, and the footage exists without ever having been recorded by a camera.

For now, users who want to check out Gen-2 can sign up for a waitlist on Runway’s Discord, but it’s only a matter of time before these tools become more widely available (despite the astounding amount of processing power required to generate original video). So whatever thorny ethical considerations remain around generative AI, the time to figure them out is now, because there’s apparently no stopping this proverbial train now that it’s left the station.
