
Scott’s Blog

Algorithms Are Deadly: Why The Supreme Court Should Overturn Gonzalez v. Google LLC

Have you ever wondered why YouTube recommends certain videos just for you? Or why you see specific posts and ads on Facebook, Instagram, and Twitter? The answer is a ten-dollar word with billions of dollars in significance. It is also central to the upcoming Supreme Court case, Gonzalez v. Google LLC. “An algorithm is a set of step-by-step procedures, or a set of rules to follow, for completing a specific task or solving a particular problem.”[1] Big tech companies (online platforms) use them, in practice, to recommend internet content. That could be as innocuous as helping you decide what groceries to buy or as significant as shaping what you fundamentally believe about the world and other people. In short, these companies can now complete specific tasks and solve problems better than we can ourselves.
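To make that definition concrete, here is a toy illustration of my own (not drawn from the case or from any platform's code): a few lines of step-by-step rules for the grocery example, choosing items that fit a budget, cheapest first.

```python
# A toy "algorithm" in the dictionary sense above: step-by-step rules
# for one specific task. The items and prices are made up.
def pick_groceries(prices, budget):
    chosen = []
    # Step 1: consider items from cheapest to most expensive.
    for item, price in sorted(prices.items(), key=lambda kv: kv[1]):
        # Step 2: buy the item only if it still fits the budget.
        if price <= budget:
            chosen.append(item)
            budget -= price
    # Step 3: report what was bought.
    return chosen

print(pick_groceries({"milk": 3.00, "steak": 12.00, "bread": 2.00}, 6.00))
# -> ['bread', 'milk']
```

Trivial as it is, this is the same species of thing as a recommendation engine: rules, applied in order, to make a decision for you.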

Additionally, building algorithms into their business model is how online platforms such as Facebook, Instagram, Google, and Twitter earn hundreds of billions of dollars each year from users like you. What is this business model? Engagement → Growth → Revenue. Therefore, the primary task of these platforms is:

How do we get users to engage with our platform’s content as long and as often as possible?

This task is paramount because it determines how much money can be made selling advertising. It also makes the incentive to create better, more complex, and more efficient algorithms astoundingly high.

Unfortunately, when a pure engagement strategy is applied globally, it produces negative consequences, and threats to the safety of individuals and groups are chief among them.

The ruling in the upcoming Supreme Court case, Gonzalez v. Google LLC, will help settle the scope of liability for online platforms under Section 230(c)(1) of the Communications Decency Act:

“c. Protection for ‘Good Samaritan’ blocking and screening of offensive material.

  1. Treatment of publisher or speaker: No provider or user of an interactive computer service (online platform) shall be treated as the publisher or speaker of any information provided by another information content provider.”

The lower court ruling in Gonzalez v. Google LLC went against the plaintiffs: “On appeal from the Northern District of California, the United States Court of Appeals for the Ninth Circuit affirmed the district court’s ruling, holding that Section 230 protects the algorithmic recommendations.”[2]

The context of the case is the death of Reynaldo Gonzalez’s daughter, Nohemi Gonzalez. She was killed by the Islamic State in a terrorist attack at a café in Paris. Reynaldo Gonzalez and his attorneys contend that the nature and scope of YouTube’s algorithms, used to make targeted recommendations of propaganda videos from the Islamic State, contributed to the attack.

So, overturning the lower court ruling would change the law to hold Google (and other online platforms) legally liable, in certain ways, for how they use their algorithms to make targeted recommendations. In short, it would treat platforms as the publisher or speaker of the content their algorithms recommend and open the companies that use them to litigation.

In addition to the individual safety concerns, I believe the Supreme Court should overturn the lower court’s ruling because online platforms are using and improving recommendation algorithms to promote a great deal of dangerous, extreme, and divisive content. This has proven harmful to the public, and it is done to earn hundreds of billions in profits.

Moreover, if the Supreme Court overturns the lower court ruling, the greatest weapon in YouTube’s business model, using algorithms to amplify dangerous, extreme, and divisive content, will be limited. Other online platforms depend on similar weapons.

In testimony before a Senate subcommittee, Frances Haugen, a former Facebook employee turned whistleblower, said, “Facebook, and other online platforms, use engagement-based ranking to determine which content they believe is most relevant to users’ interests. After considering a post’s likes, shares, and comments, as well as a user’s past interactions with similar content, the algorithms powering someone’s Facebook news feed, or other platform content, will place dangerous, extreme, and divisive content in front of that person in order to get them to engage longer and more often. This is in contrast to a chronological ranking that is based on when content was posted or sent.”[3]
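Haugen’s contrast between the two ranking schemes can be sketched in a few lines of Python. This is my own simplification with made-up posts and weights, not Facebook’s actual code:

```python
# Hypothetical feed: one quiet recent post, one older high-engagement post.
posts = [
    {"id": "calm-update",  "age_hours": 1, "likes": 4,   "shares": 0,   "comments": 1},
    {"id": "outrage-bait", "age_hours": 9, "likes": 900, "shares": 400, "comments": 700},
]

def engagement_rank(feed):
    # Engagement-based ranking: highest predicted engagement first.
    # The weights on shares and comments are illustrative guesses.
    score = lambda p: p["likes"] + 5 * p["shares"] + 3 * p["comments"]
    return sorted(feed, key=score, reverse=True)

def chronological_rank(feed):
    # Chronological ranking: newest first, regardless of engagement.
    return sorted(feed, key=lambda p: p["age_hours"])

print([p["id"] for p in engagement_rank(posts)])     # outrage-bait ranks first
print([p["id"] for p in chronological_rank(posts)])  # calm-update ranks first
```

The point of the sketch is the one Haugen makes: under engagement-based ranking, the post that provokes the most reactions wins the top slot, even when a simple time-ordered feed would have shown the user something else.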

It is clear that recommendation algorithms promote harmful content. Haugen said in her testimony, “…online platforms continue to use engagement-based ranking even though they repeatedly found the Instagram app is harmful to a number of teenagers … 13.5% of U.K. teen girls said suicidal thoughts became more frequent after using Instagram and 17% of teen girls say their eating disorders got worse after using Instagram.”[4] Haugen also makes it clear that the “engagement-based ranking formula helps sensational content, such as posts that feature rage, hate or misinformation.”[5]

More specifically, Facebook uses an algorithm called M.S.I. (meaningful social interaction). “In 2018, Facebook overhauled its news feed algorithm to prioritize interactions such as comments and likes, between friends and family … M.S.I. made Facebook an angrier social platform and created an environment that encouraged polarization, misinformation, and shocking content.”[6]

Astoundingly, only “10-20% of misinformation and 3-5% of hate speech is being taken down.”[7] That leaves a treasure trove of harmful content for bad actors and ignorant users to propagate.

So, why do these online platforms use engagement-based ranking and M.S.I. if they know these systems are harmful? Money. YouTube earns vast profits from its 2.3 billion users. The same is true of Twitter with its 300 million active users and TikTok with its 467 million active users.

To illustrate the massive wealth on the table, consider this: “Facebook values each user at $51.58 … and 2.9 billion users access Facebook’s platform. They also demonstrate a 7.18% increase year-over-year.”[8] Multiply those figures and Facebook is capturing roughly $150 billion in user value through this simple equation: the more engagement, the more money.
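The multiplication behind that figure is a quick back-of-the-envelope check using only the two numbers quoted from [8]:

```python
# Back-of-the-envelope check of the figures quoted from [8].
value_per_user = 51.58   # dollars of value Facebook attributes to each user
users = 2.9e9            # users accessing Facebook's platform
total_billions = value_per_user * users / 1e9
print(round(total_billions, 1))  # 149.6 (billion dollars)
```

Growing the user base 7.18% year-over-year compounds that total, which is exactly why engagement is guarded so fiercely.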

Therefore, the Supreme Court should overturn the Ninth Circuit’s ruling in Gonzalez v. Google LLC and make clear, through this and similar rulings, that recommendation algorithms are a form of speech and that the companies using them should be held legally liable for what they promote. The Court should do this precisely because, unfortunately, recommendation algorithms continue to endanger the public’s health by promoting dangerous, extreme, and divisive content.