Eight Men Indicted for Using Discord and Twitter To Raise Stock Value in ‘Pump and Dump’ Scheme

Samson Amore

Samson Amore is a reporter for dot.LA. He holds a degree in journalism from Emerson College and previously covered technology and entertainment for TheWrap and reported on the SoCal startup scene for the Los Angeles Business Journal. Send tips or pitches to samsonamore@dot.la and find him on Twitter @Samsonamore.

Image: Discord (Andria Moore)

This week, the Justice Department indicted eight men, two of whom reside in Beverly Hills, for allegedly using Discord and Twitter to artificially inflate stock values in an illegal “pump and dump” scheme.

According to the indictment released Wednesday, the accused made an estimated $114 million off the sale of stocks they pumped up in a Discord server called Atlas Trading. The defendants were also active on financial Twitter (known as “fintwit”), where influencers discuss stocks — and where, in this case, they allegedly spread misinformation about them.


Atlas Trading, the Discord server the eight defendants ran and claimed was the “largest free stock trading community,” has since been taken offline. Whether the Justice Department or the alleged fraudsters took it down remains unclear; Discord didn’t immediately return dot.LA’s request for comment on the indictment.

In the wake of this news, let’s take a brief look at what the platform is, and how prevalent these types of scams are on Discord.

Why Discord?

A text, voice and video chat app, Discord is well known for its huge community of young and impressionable users (according to Statista, users aged 16 to 24 make up roughly 22% of its audience). It’s been dubbed the “teens’ favorite chat app,” partly because it integrates seamlessly with their favorite games, letting users chat, play and build community.

All of which explains why Discord is ripe for these types of schemes. For starters, anonymity gives users the feeling that they’re shielded from legal consequences. More importantly, a private Discord channel can cultivate the sense that you’re part of a close-knit community of insiders armed with information not yet known to the greater public.

In the case of Atlas Trading, the group promised its followers surefire gains on securities it was seeking to pump. The indictment notes that once people began to lose money, the server’s members started questioning those guarantees. Throughout it all, the defendants denied that they were selling the securities they touted or that they were running a pump and dump scheme.

How Do Influencers Use Discord for Pump and Dump Schemes?

Finance influencers typically lure in marks with Instagram pages full of pictures of exotic cars, stacks of credit cards and designer clothes, then offer access to an “exclusive” Discord server where they promise private, valuable insight about trading. It’s also not uncommon for these influencers to charge for access to their servers.

In the case of Atlas Trading, the influencers banded together to buy the same securities, then posted false information on Discord and other social platforms to hype the stocks up. Once prices rose, they covertly sold their shares to lock in profits, all while still encouraging their followers to buy.

In other cases, the scammers even send specific messages to the Discord server about when to buy or sell a stock, promising guaranteed returns in a bid to further control the price.

How Common Are These Types of Scams on Discord?

The Better Business Bureau’s site lists no shortage of recent anonymous complaints about Discord, many referencing cryptocurrency scams conducted on the app. One report from July 1 came from a victim who lost $5,700 to a stock trading scam after getting a message from a Facebook page called OBR Investing asking them to join a Discord server. After joining, the person began making deposits but couldn’t withdraw their cash, and eventually received a fraudulent check.

Many others reported fake messages on Discord claiming they won Bitcoin or Ethereum and had to deposit funds to retrieve the payout. One person avoided losing any money but another reported losing $8,501 to a broker going by PigyTrade. “Upon requesting for withdrawal the company refused to give me information regarding my transaction and stopped replying all together,” the report read.

Another person reported in September that Yan Stavisski, founder of the Los Angeles-based “financial education” company King Credit, allegedly scammed them out of $1,500 under the guise of providing investing classes and access to a private Discord server. “I have never heard back from him, ever since. I paid $1500 to this LLC,” the person wrote.

What Can I Do to Avoid Being Scammed?

Be wary of any trade promising fast cash, and assume every offer is a scam until proven otherwise. Seek verification of claims about any development that could bump up a stock’s value; the SEC’s examples include “product developments, lucrative contracts, or the company’s financial health.” If such claims are real, it shouldn’t be hard to find an accredited news source reporting on them. You can also check the SEC’s EDGAR database for recent filings; publicly traded companies are required to keep regulators updated, too.
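
If you’d rather script that check than click through EDGAR by hand, the SEC publishes free JSON endpoints for exactly this. Below is a minimal Python sketch, assuming EDGAR’s public ticker-to-CIK mapping file and submissions endpoint; the `recent_filings` helper name, ticker, and contact email in the User-Agent header (which the SEC asks for) are placeholders of our own:

```python
import requests

# The SEC asks for a descriptive User-Agent with contact info (placeholder here).
HEADERS = {"User-Agent": "research-script you@example.com"}

def recent_filings(ticker: str, count: int = 10):
    """Return (filing date, form type) pairs for a ticker's latest SEC filings."""
    # Map the ticker to its zero-padded, 10-digit CIK number.
    tickers = requests.get(
        "https://www.sec.gov/files/company_tickers.json", headers=HEADERS
    ).json()
    cik = next(
        str(row["cik_str"]).zfill(10)
        for row in tickers.values()
        if row["ticker"] == ticker.upper()
    )
    # Pull the company's filing history; entries are ordered newest-first.
    subs = requests.get(
        f"https://data.sec.gov/submissions/CIK{cik}.json", headers=HEADERS
    ).json()
    recent = subs["filings"]["recent"]
    return list(zip(recent["filingDate"], recent["form"]))[:count]

if __name__ == "__main__":
    for date, form in recent_filings("AAPL"):
        print(date, form)
```

Forms like the 8-K (material events) and 10-Q (quarterly report) are the ones most likely to confirm or debunk a rumored “development.”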

That said, pump and dump schemes usually target smaller companies, whose thinly traded stocks don’t require much volume to move and which attract little press coverage. In those cases, verification is doubly important.

Also, be wary of any “high-pressure pitches,” the SEC says. If anything is marketed as a “once in a lifetime opportunity,” it isn’t; it’s just a bid to pressure you into buying against an invisible deadline.


'Open Letter' Proposing 6-Month AI Moratorium Continues to Muddy the Waters Around the Technology

Lon Harris
Lon Harris is a contributor to dot.LA. His work has also appeared on ScreenJunkies, RottenTomatoes and Inside Streaming.
Image: Evan Xie

AI continues to dominate the news, not just within the world of technology but in mainstream outlets at this point, and the stories have entered a by-now familiar cycle. A wave of exciting new developments, releases and viral apps is followed by a flood of alarm bells and concerned op-eds wondering aloud whether things are moving too fast for humanity’s own good.

With OpenAI and Microsoft’s GPT-4 arriving a few weeks ago to massive enthusiasm, we were overdue for our next hit of jaded cynicism, warning about the potentially dire impact of intuitive chatbots and text-to-image generators.

Sure enough, this week, more than 1,000 signatories released an open letter calling for all AI labs to pause training any new systems more powerful than GPT-4 for six months.

What does the letter say?

The letter calls out a number of familiar concerns for anyone who has been reading up on AI development this past year. On the most immediate and practical level, it cautions that chatbots and automated text generators could potentially eliminate vast swathes of jobs previously filled by humans, while “flood[ing] our information channels with propaganda and untruth.” The letter then continues into full apocalypse mode, warning that “nonhuman minds” could eventually render us obsolete and dominate us, risking “loss of control of our civilization.”

The six-month pause, the signatories argue, could be used to jointly develop shared safety protocols around AI design to ensure that such systems remain “safe beyond a reasonable doubt.” They also suggest that AI developers work in collaboration with policymakers and politicians to develop new laws and regulations around AI and AI research.

The letter was signed by several AI developers and experts, along with tech industry royalty like Elon Musk and Steve Wozniak. TechCrunch does point out that no one from inside OpenAI seems to have signed it, nor has anyone from Anthropic, a company founded by former OpenAI developers to design their own “safer” chatbots. OpenAI CEO Sam Altman did speak to the Wall Street Journal this week in reference to the letter, noting that the company has not yet started work on GPT-5 and that time for safety testing has always been built into its development process. He referred to the letter’s overall message as “preaching to the choir.”

Critics of the letter

The call for an AI pause was not without critics, though. Journalist and investor Ben Parr noted that the letter’s vague language makes it functionally meaningless, with no metrics to gauge how “powerful” an AI system has become and no suggestions for how to enforce a global moratorium. He also notes that some signatories, including Musk, compete with OpenAI and ChatGPT, potentially giving them a personal stake in this fight beyond just concern for the future of civilization. Others, like NBC News reporter Ben Collins, suggested that the dire AI warnings could be a form of dystopian marketing.

On Twitter, entrepreneur Chris Pirillo noted that “the genie is already out of the bottle” in terms of AI development, while physicist and author David Deutsch called out the letter for confusing today’s AI apps with the Artificial General Intelligence (AGI) systems still only seen in sci-fi films and TV shows.

Legitimate red flags

Obviously, the letter speaks to relatively universal concerns. It’s easy to imagine why writers would be concerned by, say, BuzzFeed now using AI to write entire articles and not just quizzes. (The website isn’t even using professional writers to collaborate with the software and copy-edit its output anymore. The humans now helping “Buzzy the Robot” compose its articles are non-editorial employees from the client partnership, account management and product management teams. Hey, it’s just an “experiment,” freelancers!)

But it does once more raise some red flags about the potentially misleading ways that some in the industry and the media are discussing AI, which continues to make these kinds of high-level discussions around the technology more cumbersome and challenging.

A recent viral Twitter thread credited GPT-4 with saving a dog’s life, leading to a lot of breathlessly excited coverage about how computers are already smarter than your neighborhood veterinarian. The owner entered the dog’s symptoms into the chatbot, along with copies of its blood work, and ChatGPT responded with the most likely potential ailments. A live human veterinarian then tested the animal for one of the bot’s suggestions, which turned out to be the correct diagnosis. So the computer is, in a very real sense, a hero.

Still, considering what might be wrong with dogs based on their symptoms isn’t what ChatGPT does best. It’s not a medical or veterinary diagnostic tool, and it doesn’t have a database of dog ailments and treatments at the ready. It’s designed for conversations, and it’s just guessing as to what might be wrong with the animal based on the texts on which it was trained, sentences and phrases that it has seen connected in human writing in the past. In this case, the app guessed correctly, and that’s certainly good news for one special pupper. But there’s no guarantee it would get the right answer every time, or even most of the time. We’ve seen a lot of evidence that ChatGPT is perfectly willing to lie, and can’t actually tell the difference between truth and a lie.

There’s also already a perfectly solid technology that this person could have used to enter a dog’s symptoms and research potential diagnoses and treatments: Google search. A search results page isn’t guaranteed to come up with the correct answer either, but it’s as reliable in this particular use case as GPT-4, if not more so, at least for now. A quality post on a reputable veterinary website would hopefully contain information similar to what ChatGPT pulled together, except it would have been vetted and verified by an actual human expert.

Have we seen too many sci-fi movies?

A response published in Time by AI researcher Eliezer Yudkowsky, long considered a thought leader on artificial general intelligence, argues that the open letter doesn’t go far enough. Yudkowsky suggests that we’re currently on a path toward “building a superhumanly smart AI,” which will very likely result in the death of every human being on the planet.

No, really, that’s what he says! The editorial takes some very dramatic turns that feel pulled directly from the realms of science-fiction and fantasy. At one point, he warns: “A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.” This is the actual plot of the 1995 B-movie “Virtuosity,” in which an AI serial killer app (played by Russell Crowe!) designed to help train police officers grows his own biomechanical body and wreaks havoc on the physical world. Thank goodness Denzel Washington is around to stop him.

And, hey, just because AI-fueled nightmares have made their way into classic films, that doesn’t mean they can’t also happen in the real world. But it nonetheless feels like a bit of a leap to go from text-to-image generators and chatbots – no matter how impressive – to computer programs that can grow their own bodies in a lab, then use those bodies to take control of our military and government apparatus. Perhaps there’s a direct line between the experiments being done today and truly conscious, self-aware, thinking machines down the road. But, as Deutsch cautioned in his tweet, it’s important to remember that AI and AGI are not necessarily the exact same thing.

Will EVGo’s Stock Surges Be Enough To Keep the Company Stable?

David Shultz

David Shultz reports on clean technology and electric vehicles, among other industries, for dot.LA. His writing has appeared in The Atlantic, Outside, Nautilus and many other publications.

Image from EVgo

Shares of EVgo are up over 20% today after the Los Angeles-based electric vehicle charging company released fourth-quarter earnings that outpaced Wall Street’s predictions. Analysts had expected a loss in the neighborhood of $0.16 to $0.18 per share, but the company reported a much narrower loss of just $0.06 per share.


How AgTech Startup Leaf Wants To Modernize the Farming Industry

Samson Amore


Image: green leaf drawing and rolling farmland (Evan Xie)

At least 50,000 acres in California are estimated to be underwater after a record-breaking year of rainfall. So far this year the state has received nearly 29 inches of rain, with the bulk dumped on its central and southern coasts. Farmers are already warning that prices for dairy, tomatoes and other produce will rise as the weather prevents them from reseeding their fields.

While no current technology can prevent weather disasters, Leaf Agriculture, a Los Angeles-based startup that launched in 2018, wants to help farmers better manage their properties by leveraging data.
