Section 230: What’s at stake?

Image via Gary Waters / IKON Images / Newscom

Over the past year, American society has endured a series of transformative and polarizing events. We witnessed only the third presidential impeachment trial in U.S. history, a racial justice movement of historic proportions, and a global pandemic that has taken the lives of 300,000 Americans and counting. These inherently political issues have riveted public attention and deepened the existing ideological divide, in part because of the flood of false and misleading information circulating on social media platforms.

At long last, the misinformation problem has become a topic of public debate. Members of Congress on both sides of the aisle are now asking: Should social media companies be left to regulate misinformation as they see fit? Should the government control how social media companies moderate speech? And if so, how could this be accomplished without running up against First Amendment protections?

Central to these difficult questions is Section 230 of the Communications Decency Act, whose core provision is just 26 words long and is widely referred to as “the law that created the internet.” Section 230 was drafted in the aftermath of a consequential 1995 New York Supreme Court decision, Stratton Oakmont, Inc. v. Prodigy, which held that an internet platform enforcing content policies could be held legally liable for the user-generated content it hosts. From a legal standpoint, this divided the internet cleanly down the middle: companies could act as hands-off digital message boards and be treated like neutral distributors, or they could engage in content moderation and be treated as publishers, exposing them to liability for defamation, negligence, and other civil wrongs.

Worried that such a strict dichotomy could kneecap the fledgling tech industry, then-Representatives Ron Wyden and Chris Cox co-authored the provision that would come to be known as Section 230. The statute effectively superseded Stratton Oakmont and carved out a new legal status for internet platforms. In essence, Section 230 permitted platforms to set and enforce their own content guidelines while also protecting them from lawsuits if their moderation processes occasionally overlooked problematic content. For this reason, Section 230 has been widely credited with enabling the early internet to grow into an innovative, vibrant, and inclusive digital forum, ultimately laying the groundwork for the rise of the American tech industry.

Image via The New York Times

Despite establishing the legal bedrock of the modern internet, Section 230 now hangs in the balance. Many lawmakers believe its protections have expanded beyond their original scope and now require revision. Others blame the statute for the rise of Big Tech, which they view as a force with dangerous, monopolistic power over user data and online speech.

While these concerns are legitimate, a total repeal of Section 230 could have seismic implications for online speech and the future of the tech industry. Without a liability shield, major social media platforms might be forced to drastically restructure how content is shared in order to avoid costly and time-consuming legal battles. Any tweet or status update perceived as defamatory could be deleted on an angry follower’s whim. Yelp reviews, beauty product ratings, and even political debates in the comment sections beneath online articles could become a thing of the past. Or worse, platforms could do away with instantaneous posting altogether, pre-screening every submission so that nothing legally risky ever sees the light of day. These scenarios may sound dystopian, but the reality is that we’re toying with a law the modern internet has never existed without.

As a member of Gen-Z, I’ve grown up on social media. In middle school, Instagram was an extension of my art class — an outlet for creative expression that allowed me to refine my aesthetic eye. In high school, Twitter allowed my friend group to extend our cafeteria chatter into after-school hours. My generation was born into a world of post-9/11 uncertainty, raised during the worst economic downturn since the Great Depression, came of age during an era of bitter political divisiveness, and has been released into adulthood at the onset of a global pandemic. The modern internet has been one of the few throughlines of our conscious lives, so naturally, the prospect of radically altering that is terrifying. Partially as a means of assuaging my personal anxieties over the implications of a Section 230 repeal, I’ve taken it upon myself to investigate the factors that led us here and what the internet’s next chapter might look like.

For my senior thesis at the University of Pennsylvania’s Annenberg School for Communication, I’ll be researching how Facebook, Twitter, and YouTube approached moderating political misinformation leading up to the 2020 election. Over the next several months, I plan to qualitatively analyze major internet speech events and use data-driven insights to construct a framework for understanding the motivations, intentions, and effectiveness of the political misinformation policy changes made during this time. Finally, I’ll be interviewing legal scholars and industry professionals in an attempt to understand how revisions to Section 230 could change social media as we know it.

Personally, I’m anxious, but also determined, and most of all excited. I hope you stick around to read what I find.

I mostly write about tech policy, freedom of expression, and equity in the digital world. COMM + CS @ Penn