Watch: The Future of Content Moderation Online

Annie Burford

Annie Burford is dot.LA's director of events. She's an event marketing pro with over ten years of experience producing innovative corporate events, activations and summits for companies ranging from tech startups to Fortune 500s. Annie has produced over 200 programs in Los Angeles, San Francisco and New York City, working most recently for a China-based investment bank heading the CEC Capital Tech & Media Summit, formerly the Siemer Summit.

As Big Tech cracks down on moderation after the Capitol attack and Wall Street braces for more fallout from social media's newfound influence on stock trading, legislators are eyeing changes to Section 230 of the Communications Decency Act of 1996. On Wednesday, February 10, dot.LA brought together legal perspectives and the views of a founder and venture capitalist on the ramifications of changing the way that social media and other internet companies deal with the content posted on their platforms.

A critic of Big Tech moderation, Craft Ventures General Partner and former PayPal COO David Sacks called for an amendment to the law during dot.LA's Strategy Session Wednesday. Tyler Newby and Andrew Klungness, both partners at the law firm Fenwick, laid out the potential legal implications of changing it.


Section 230 limits the liability of internet intermediaries, including social media companies, for the content users publish on their platforms.

"Mend it, don't end it," Sacks said.

Sacks said he's concerned about censorship in the wake of companies tightening moderation policies. Pointing to Robinhood's recent decision to temporarily freeze users from trading certain stocks, including GameStop, he said we're now seeing discussions about Big Tech's role in censorship unfold in nonpartisan settings.

"Who has the power to make these decisions?" he said. "What concerns me today is that Big Tech has all the power."

Social media sites including Twitter pulled down former President Trump's account after last month's attack on the U.S. Capitol. But critics have said that these sites didn't go far enough in stopping conversations that provoked the violence.

To provide some external standard, Sacks called for the "reestablishment of some First Amendment rights in this new digital public square" — which is to say, on privately owned platforms.

Newby pointed to a series of recent bills aimed at reining in the power of tech companies. Changes to moderation laws could have sweeping impacts on far more companies than giants like Facebook or Twitter.

"It's going to have a huge stifling effect on innovation," said Klungness, referring to a possible drop in venture capital to new startups. "Some business models may be just simply too risky or may be impractical because they require real-time moderation of content."

And if companies are liable for how their users behave, Klungness said, some companies may never take the risk in launching these companies at all. "Some business models may be just simply too risky or may be impractical because they require real-time moderation of content," he said.

Watch the full discussion below.

Strategy Session: The Future of Content Moderation Online

David Sacks, Co-Founder and General Partner of Craft Ventures

David Sacks is co-founder and general partner at Craft. A successful tech entrepreneur and investor for two decades, he has built and invested in some of the most iconic companies of that era. David has invested in over 20 unicorns, including Affirm, Airbnb, Bird, Eventbrite, Facebook, Houzz, Lyft, Opendoor, Palantir, Postmates, Reddit, Slack, SpaceX, Twitter and Uber.

In December 2014, Sacks made a major investment in Zenefits and became the company's COO. A year later, in the midst of a regulatory crisis, the board asked David to step in as interim CEO of Zenefits. During his one-year tenure, David negotiated resolutions with insurance regulators across the country and revamped Zenefits' product line. By the time he left, regulators had praised David for "righting the ship," and PC Magazine hailed the new product as the best small-business HR system.

David is well known in Silicon Valley for his product acumen. AngelList's Naval Ravikant has called David "the world's best product strategist." David likes to begin any meeting with a new startup by seeing a product demo.

Kelly O'Grady, Chief Correspondent & Host and Head of Video at dot.LA 

Kelly O'Grady is dot.LA's chief host & correspondent. Kelly serves as dot.LA's on-air talent and is responsible for designing and executing all video efforts. A former management consultant for McKinsey and TV reporter for NESN, she also served on Disney's Corporate Strategy team, focusing on M&A and the company's direct-to-consumer streaming efforts. Kelly holds a bachelor's degree from Harvard College and an MBA from Harvard Business School. A Boston native, Kelly spent a year as Miss Massachusetts USA, and can be found supporting her beloved Patriots every Sunday come football season.

Tyler Newby, Partner at Fenwick

Tyler focuses his practice on privacy and data security litigation, counseling and investigations, as well as intellectual property and commercial disputes affecting high-technology and consumer-facing companies. Tyler has an active practice defending companies in consumer class actions, state attorney general investigations and federal regulatory agency investigations arising out of privacy and data security incidents. In addition to his litigation practice, Tyler regularly advises companies large and small on reducing their litigation risk on privacy, data security and secondary liability issues. Tyler frequently counsels companies on compliance issues relating to key federal regulations such as the Children's Online Privacy Protection Act (COPPA), the Fair Credit Reporting Act (FCRA), the Computer Fraud and Abuse Act (CFAA), the Gramm-Leach-Bliley Act (GLBA), the Electronic Communications Privacy Act (ECPA) and the Telephone Consumer Protection Act (TCPA).

In 2014, Tyler was named among the top privacy attorneys in the United States under the age of 40 by Law360. He currently serves as a Chair of the American Bar Association Litigation Section's Privacy & Data Security Committee, and was recently appointed to the ABA's Cybersecurity Legal Task Force. Tyler is a member of the International Association of Privacy Professionals, and has received the CIPP/US certification.

Andrew Klungness, Partner at Fenwick

Leveraging nearly two decades of business and legal experience, Andrew navigates clients—at all stages of their lifecycles—through the opportunities and risks presented by novel and complex transactions and business models.

Andrew is a co-chair of Fenwick's consumer technologies and retail and digital media and entertainment industry teams, as well as a principal member of its fintech group. He works with clients in a number of verticals, including ecommerce, consumer tech, fintech, enterprise software, blockchain, marketplaces, CPG, mobile, AI, social media, games and edtech, among others.

Andrew leads significant and complex strategic alliances, joint ventures and other collaboration and partnering arrangements, which are often driven by a combination of technological innovation, industry disruption and rights to content, brands or celebrity personas. He also structures and negotiates a wide range of agreements and transactions, including licensing, technology sourcing, manufacturing and supply, channel partnerships and marketing agreements. Additionally, Andrew counsels clients in various intellectual property, technology and contract issues in financing, M&A and other corporate transactions.

Sam Adams, Co-Founder and CEO of dot.LA

Sam Adams serves as chief executive of dot.LA. A former financial journalist for Bloomberg and Reuters, Adams moved to the business side of media as a strategy consultant at Activate, helping legacy companies develop new digital strategies. Adams holds a bachelor's degree from Harvard College and an MBA from the University of Southern California. A Santa Monica native, he can most often be found at Bay Cities deli with a Godmother sub or at McCabe's with a 12-string guitar. His favorite colors are Dodger blue and Lakers gold.


Creandum’s Carl Fritjofsson on the Differences Between the Startup Ecosystem in Europe and the U.S.

Decerry Donato

Decerry Donato is a reporter at dot.LA. Prior to that, she was an editorial fellow at the company. Decerry received her bachelor's degree in literary journalism from the University of California, Irvine. She continues to write stories to inform the community about issues and events that take place in the L.A. area. On the weekends, she can be found hiking in the Angeles National Forest or sifting through racks at your local thrift store.

Carl Fritjofsson

On this episode of the LA Venture podcast, Creandum General Partner Carl Fritjofsson talks about his venture journey, why generative AI represents an opportunity to rethink products from the ground up, and why Q4 2023 and Q1 2024 could be "pretty bloody" for startups.


AI Is Undergoing Some Growing Pains at a Pivotal Moment in Its Development

Lon Harris
Lon Harris is a contributor to dot.LA. His work has also appeared on ScreenJunkies, RottenTomatoes and Inside Streaming.
Evan Xie

One way to measure just how white-hot AI development has become: the world is running out of the advanced graphics chips necessary to power AI programs. Intel's central processing units were once the industry's most sought-after processors, but advanced graphics chips like Nvidia's are designed to run multiple computations simultaneously, a baseline necessity for many AI models.

An early version of ChatGPT required around 10,000 graphics chips to run. By some estimates, newer updates require 3-5 times that amount of processing power. As a result of this skyrocketing demand, shares of Nvidia have jumped 165% so far this year.

Building on this momentum, this week Nvidia revealed a lineup of new AI-related projects, including an Israeli supercomputer project and a platform utilizing AI to help video game developers. For smaller companies and startups, however, getting access to the vital underlying technology that powers AI development is already becoming less about meritocracy and more about "who you know." According to the Wall Street Journal, Elon Musk scooped up a valuable share of server space from Oracle this year for his new OpenAI rival, X.AI, before anyone else got a crack at it.

The massive demand for Nvidia-style chips has also created a lucrative secondary market, where smaller companies and startups are often outbid by larger and more established rivals. One startup founder compares the fevered crush of the current chip marketplace to the scramble for toilet paper in the early days of the pandemic. For those companies that don't get access to the most powerful chips or enough server space in the cloud, often the only remaining option is to simplify their AI models so they can run more efficiently.

Beyond just the design of new AI products, we're also at a key moment for users and consumers, who are still figuring out which sorts of applications are ideal for AI and which ones are less effective, or potentially even unethical or dangerous. There's now mounting evidence that the hype around some of these AI tools is reaching a lot further than the warnings about their drawbacks.

JPMorgan Chase is training a new AI chatbot, known as IndexGPT, to help customers choose financial securities and stocks. For now, the bank insists that it's purely supplemental, designed to advise and not replace money managers, but it may just be a matter of time before job losses begin to hit financial planners along with everyone else.

A lawyer in New York was busted by a judge just this week for using ChatGPT as part of his background research. When questioned by the judge, the lawyer, Peter LoDuca, revealed that he'd farmed out some research to a colleague, Steven A. Schwartz, who had consulted ChatGPT on the case. Schwartz was apparently unaware that the AI chatbot was able to lie – transcripts even show him questioning ChatGPT's responses and the bot assuring him that these were, in fact, real cases and citations.

New research by Maurice Jakesch, a doctoral student at Cornell University, suggests that even users who are more aware than Schwartz of how AI works and its limitations may still be influenced in subtle and subconscious ways by its output.

Not to mention, according to data from Intelligent.com, high school and college students already – on the whole – prefer using ChatGPT for help with schoolwork over a human tutor. The survey also notes that advanced students tend to report getting more out of ChatGPT-type programs than beginners, likely because they have more baseline knowledge and can construct better and more informative prompts.

But therein lies the big drawback to using ChatGPT and other AI tools for education. At least so far, they’re reliant on the end user writing good prompts and having some sense about how to organize a lesson plan for themselves. Human tutors, on the other hand, have a lot of personal experience in these kinds of areas. Someone who instructs others in foreign languages professionally probably has a good inherent sense of when you need to focus on expanding your vocabulary vs. drilling certain kinds of verb and tense conjugations. They’ve helped many other students prepare for tests, quizzes, and real-world challenges, while computer software can only guess at what kinds of scenarios its proteges will face.

A recent Forbes editorial by academic Thomas Davenport suggests that, while AI is getting all the hype right now, other forms of computing or machine learning are still going to be more effective for a lot of basic tasks. From a marketing perspective in 2023, it’s helpful for a tech company to throw the “AI” brand around, but it’s not magically going to be the answer for every problem.

Davenport points to a similar (if smaller) whirlwind of excitement around IBM's "Watson" in the early 2010s, when it famously took out human "Jeopardy!" champions. It turns out, Watson was a general knowledge engine, really best suited for jobs like playing "Jeopardy!" But after the software gained celebrity status, people tried to use it for all sorts of advanced applications, like designing cancer drugs or providing investment advice. Today, few people turn to Watson for these kinds of solutions. It's just the wrong tool for the job. In that same way, Davenport suggests that generative AI is in danger of being misapplied.

While the industry and end users both race to solve the AI puzzle in real time, governments are also feeling pressure to step in and potentially regulate the AI industry. This is much easier said than done, though, as politicians face the same kinds of questions and uncertainty as everyone else.

OpenAI CEO Sam Altman has been calling for governments to begin regulating AI, but just this week, he suggested that the company might pull out of the European Union entirely if the regulations were too onerous. In particular, Altman worries that attempts to narrow what kinds of data can be used to train AI systems – specifically blocking copyrighted material – might well prove impossible. "If we can comply, we will, and if we can't, we'll cease operating," Altman told Time. "We will try, but there are technical limits to what's possible." (Altman has already started walking this threat back, suggesting he has no immediate plans to exit the EU.)

In the US, the White House has been working on a "Blueprint for an AI Bill of Rights," but it's non-binding, just a collection of largely vague suggestions. It's one thing to agree that "consumers shouldn't face discrimination from an algorithm" and "everyone should be protected from abusive data practices and have agency over how their data is used." But enforcement is an entirely different animal. A lot of these issues already exist in tech, are much larger than AI, and the US government already does little about them.

Additionally, it’s possible AI regulations won’t work well at all if they aren’t global. Even if you set some policies and get an entire nation’s government to agree, how to set similar worldwide protocols? What if US and Europe agree but India doesn’t? Everyone around the world accesses roughly the same internet, so without any kind of international standard, it’s going to be much harder for individual nations to enforce specific rules. As with so many other AI developments, there’s inherent danger in patchwork regulations; it could allow some companies, or regions, or players to move forward while others are unfairly or ineffectively stymied or held back.

The same kinds of socio-economic concerns around AI that we have nationally – some sectors of the workforce left behind, the wealthiest and most established players coming into the new market with massive advantages, the rapid spread of misinformation – are all, in actuality, global concerns. Just as the hegemony of Microsoft and Google threatens the ability of new players to enter the AI space, the West's early dominance of AI tech threatens to push out companies and innovations from emerging markets like Southeast Asia, sub-Saharan Africa, and Central America. Left unfettered, AI could potentially deepen social, economic, and digital divisions both within and between all of these societies.

Undaunted, some governments aren't waiting around for these tools to develop any further before attempting to regulate them. New York City has already set up rules about how AI can be used during the hiring process, which will take effect in July. The law requires any company using AI software in hiring to notify candidates that it's being used, and to have independent auditors check the system annually for bias.

This sort of piecemeal, figure-it-out-as-we-go approach is probably what's going to be necessary, at least short-term, as AI development shows zero signs of slowing down or stopping any time soon. Though there's some disagreement among experts, most analysts agree with Wharton professor and economist Jeremy Siegel, who told CNBC this week that AI is not yet a bubble. He pointed to Nvidia's earnings as a sign that the market remains healthy and not overly frothy. So, at least for now, the feverish excitement around AI is not going to burst like a late-'90s startup stock. The world needs to prepare as if this technology is going to be with us for a while.

What the Future of Rivian Looks Like According to CEO RJ Scaringe

David Shultz

David Shultz reports on clean technology and electric vehicles, among other industries, for dot.LA. His writing has appeared in The Atlantic, Outside, Nautilus and many other publications.

Rivian

Rivian CEO RJ Scaringe took to Instagram last weekend to answer questions from the public about his company and its future. Topics covered included new colors, sustainability, production ramp, and new products and features. Viewers also got a first look at the company's much-anticipated R2 platform, albeit made of clay and covered by a sheet, but hey, that's…something. If you don't want to watch the whole 33-minute video, which is now also on YouTube, we've got the highlights for you.
