Menagerie and Section 230 – Google warns a Supreme Court case could ‘upend the internet.’

Twitter and YouTube are monsters. They rampage through your life, eat your time, infect your brain, and hypnotize you into clicking and liking and subscribing.

These monsters have a name … the Algorithm. Algo. 

Like Godzilla setting off a tsunami, Algo and its recommendation engine send viral content your way: the next tweet in your timeline or video in your queue. Algo is insatiable, brainless, and unfeeling: it doesn’t care if it’s feeding you quality content or crap. Algo’s only goal is to entrance you into staring into a glowing screen, like a witch drawing you into a magic mirror, so it can eat your brains and earn cash from advertisers. Watch The Social Dilemma on Netflix for a full breakdown of the ugliness of Algo.

Unlike any communication tool in human history—unlike newspapers, radio stations, or coffee shop bulletin boards—YouTube and Twitter generally don’t have real people who edit, curate, or kill any content, no matter how stupid or awful. No one has a leash on Algo. 

Sometimes the content Algo sends you is harmless; people binge on baking shows or soccer highlights. Sometimes it’s mixed; people obsess over a moon landing conspiracy. Sometimes the algorithm—without the human touch of an editor—is truly horrific.

Algo’s body count

In 2015, Nohemi Gonzalez was a 23-year-old American exchange student in Paris studying design. She drew beautifully, designed cool toys, and laughed easily. On the evening of Nov. 13, a gunman opened fire on the Paris bistro where she was dining with friends. Loud pops. Screaming. Then deadly silence. One of 130 people killed that night in a series of coordinated attacks on sports and nightlife venues in Paris, orchestrated by Islamic State (ISIS) terrorists, Gonzalez was the sole American victim. And her family’s quest for justice has taken on an unexpected prominence: It’s the center of a current U.S. Supreme Court case, Gonzalez v. Google LLC, that could change how the internet operates. Or, as The Verge put it rather concisely: “The Supreme Court will determine whether you can sue platforms for hosting terrorists.”

ISIS used social media platforms like YouTube to recruit new followers. As has been well documented, terrorist-created videos can pop up in Algo’s recommendations. Some of the terrorists in Paris watched YouTube’s ISIS recruitment videos. In the Gonzalez family’s opinion, this made Google—which owns the YouTube Algo—liable for Nohemi’s death.

At stake in this Supreme Court decision is something known as “Section 230,” a brief clause in the Telecommunications Act of 1996 that has been broadly cited–including in Google’s original, successful defense of Gonzalez–by tech platforms to claim legal immunity from the content posted by users on their services. Under Section 230, if a user posts defamation, harassment, or other forms of harmful speech (like, say, spreading conspiracy theories about an 82-year-old victim of assault), the individual user can be sued, but the platform (with a few exceptions) cannot be. But, Gonzalez argues, those platforms’ algorithmic recommendation engines are themselves a form of curation. Gonzalez asks whether Section 230 immunity disappears if a platform recommends or amplifies problematic content to users. Google and Twitter control Algo. They write its code. They make it go. The recommendations are theirs. Same as if a newspaper put a terrorist recruitment article on its front page.

A related case being heard by the Supreme Court, Twitter, Inc. v. Taamneh, asks whether a company can be held liable for “aiding and abetting” terrorism if any pro-terrorism content appears on its platform (even if the company aggressively removes most pro-terrorism speech).

The suits say Google and Twitter are responsible if Algo knocks over a society, recruits a terrorist, or kills a kid.

If the Supreme Court rules against Google and Twitter and their Algo, to what extent could this decision “upend the internet,” to use Google’s own choice of words? “The most interesting thing about these cases is the possibility, however slight, that the Supreme Court might find that the safe harbor provision of Section 230 of Title V in the Telecommunications Act of 1996 no longer applies,” explains author, speaker, and entrepreneur Robert Tercek. “If that happens…boom. There goes any kind of curation, moderation, algorithmic news feed, algorithmic recommendation, etc.” 

Tercek isn’t exaggerating about the repercussions. If the Supreme Court shrinks Section 230, Twitter owner Elon Musk can forget about his commitment to lighter moderation. Nearly everything Twitter does is built around content recommendations produced by complex algorithms, which in turn respond to the unpredictable behavior of human users. The same is true of all the other major social media companies that emerged in the mid-2000s as part of the tech innovation boom known as Web 2.0—Meta, TikTok, all the biggies. If a company can be dragged into court any time its algorithm amplifies problematic content, the company will have little choice but to remove far more content proactively. 

They’ll have to use humans to rein in the monster Algo. 

An opportunity for decentralization

Meta, the parent company of Facebook and Instagram, argues that an end to the algorithmic curation currently protected by Section 230 would transform the internet into a “disorganized collection of haphazardly assembled information.” 

That’s not true. The internet is already chaos, controlled by Algo. Amid chaos there is often opportunity, and for companies working toward a decentralized model for the internet, an internet without Section 230 could mean something much better.

If YouTube is held to be a publisher under Section 230, all the social media platforms will need to let their communities govern themselves or significantly change their business models away from algorithmic interference.

Hallelujah. 

“Most social media companies are effectively psychopaths,” says Tony Greenberg, who advises startups in the Web3 and decentralization spaces. “We have to re-introduce the human element into these things.”

Social media companies should look to a content regulation procedure that allows users to self-govern. This governance must disincentivize users who post harmful content, while rewarding users who contribute meaningful value to their platforms and communities. Through a sustainable governance and self-regulatory model, users could create an environment where meaningful dialogue is encouraged and wasteful content is treated appropriately: the users themselves decide what they deem valuable to the network, while being incentivized to police the content that takes value out of it.
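What might that look like in practice? Here is a minimal sketch, in Python, of one possible self-governance mechanism: reviewers stake reputation on whether a post is harmful, the side with the larger total stake wins, winners split the losers’ pot, and a harmful verdict dings the author. Every name and number here is hypothetical; this illustrates the incentive structure, not any platform’s actual system.

```python
from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    reputation: float = 100.0  # the stake a user earns and risks by participating

@dataclass
class ModerationVote:
    """One community vote on whether a post is harmful (hypothetical mechanism)."""
    post_author: Member
    stakes: dict = field(default_factory=dict)  # reviewer name -> (Member, verdict, stake)

    def cast(self, reviewer: Member, harmful: bool, stake: float) -> None:
        stake = min(stake, reviewer.reputation)
        reviewer.reputation -= stake  # stake is locked until the vote resolves
        self.stakes[reviewer.name] = (reviewer, harmful, stake)

    def resolve(self) -> bool:
        harmful_pot = sum(s for _, v, s in self.stakes.values() if v)
        fine_pot = sum(s for _, v, s in self.stakes.values() if not v)
        verdict = harmful_pot > fine_pot
        winning_pot = max(harmful_pot, fine_pot)
        losing_pot = min(harmful_pot, fine_pot)
        if winning_pot > 0:
            for member, vote, stake in self.stakes.values():
                if vote == verdict:
                    # Winners recover their stake plus a pro-rata cut of the losers' pot.
                    member.reputation += stake + losing_pot * (stake / winning_pot)
        if verdict:
            self.post_author.reputation -= 25.0  # disincentive for posting harmful content
        return verdict

# Hypothetical usage: three reviewers judge dave's post.
alice, bob, carol, dave = (Member(n) for n in ("alice", "bob", "carol", "dave"))
vote = ModerationVote(post_author=dave)
vote.cast(alice, harmful=True, stake=30)
vote.cast(bob, harmful=True, stake=20)
vote.cast(carol, harmful=False, stake=10)
print(vote.resolve(), alice.reputation)  # True 106.0
```

Voting with the eventual consensus grows a reviewer’s reputation; voting against it costs them. Over time, the network’s most reliable curators accumulate the most influence, which is the point.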

In other words, humans take over from Algo. Humans recommend videos, tweets, TikToks. And humans are responsible for their choices. 

Reddit is a decent example. Reddit has an Algo, but human Redditors also upvote and downvote content. If others like your content and upvote it, it shows up higher in people’s feeds, and you gain “karma” points. If your posts are downvoted, you lose karma. Reddit tends to be a much more human-centered, trustworthy, and level-headed source of information than Twitter, Facebook, or YouTube. There are more humans at work there, and Reddit values its humans.

Redditors make no money for their work. They should. Imagine if Redditors made a fraction of a cent for each upvote they get, a slight ding for a downvote. Being a Redditor could be a real job—as it should be. 
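As a back-of-the-envelope illustration, here is a tiny Python sketch of that payout idea. The rates are invented for the thought experiment; Reddit pays its users nothing, and no real platform uses these numbers.

```python
# Toy payout model for paid curation. Rates are invented for illustration.
UPVOTE_REWARD_USD = 0.005     # half a cent per upvote received
DOWNVOTE_PENALTY_USD = 0.002  # a slight ding per downvote

def curator_earnings(upvotes: int, downvotes: int) -> float:
    """Earnings in USD for a user whose posts drew these vote totals."""
    earnings = upvotes * UPVOTE_REWARD_USD - downvotes * DOWNVOTE_PENALTY_USD
    return max(earnings, 0.0)  # don't drive casual posters into debt

# A prolific Redditor: 40,000 upvotes and 3,000 downvotes in a year.
print(f"${curator_earnings(40_000, 3_000):,.2f}")  # prints $194.00
```

Even at fractions of a cent, heavy contributors would start to see real money, which is exactly the incentive the model is after.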

A cash-based editor system wouldn’t be perfect. Money corrupts. People would game the system for cash. 


The key is proper governance. Communities need the power to encourage meaningful dialogue, police wasteful content, and incentivize constant participation, ensuring the longevity of the platform and the communities within it. If the Supreme Court shrinks the immunity of Section 230, content-sharing platforms will undoubtedly need viable ways to govern their platforms beyond the boardroom.

No matter the court decision, the problem remains

What if Gonzalez is decided in Google’s favor, and Algo keeps stomping on Tokyo and the world? Unfortunately, the issue of content moderation isn’t going away–and neither is the parallel problem of heavily centralized power at social media companies. In recent years, debates about content moderation have hit a fever pitch, with some arguing that social media companies have too much power to shape public discourse and others arguing that they are not doing enough to curb the spread of harmful content. Both sides see Section 230, at least in its current state and interpretation, as part of the problem. 

In addition, questions of Section 230’s platform immunity have come up before, and they’ll come up again. Five appellate judges have concluded that Section 230 indeed grants platforms such immunity; three have rejected it. One appellate judge has concluded only that circuit precedent precludes liability for such recommendations.

The distribution of power is broken.

The power imbalance became clear when the Web 2.0 boom’s winners became essential mainstream channels for news media, communication, community, and creative works–all on the backs of their billions of content creators and users. There is massive abuse of power by those in charge of censorship decisions, and when content is censored without transparency or accountability, legitimate speech risks being silenced or marginalized for political or other illegitimate reasons. A post-Section 230 internet could indeed be the “disorganized collection of haphazardly assembled information” that Meta predicts. It could also turn into a free-speech nightmare.

That is—unless we bring back human content moderation. Human editors. Humans making decisions. Like humanity has done for thousands of years. 

How will social platforms adapt to an evolving digital environment where top-down censorship largely fails to serve the best interests of users? If the Supreme Court shrinks the protections of Section 230, how will internet companies govern their platforms despite the reputational and market risks of top-down moderation? Will they revert to even heavier centralized regulation? Will they crank the screws on Algo to broaden its identification of “potentially harmful” content?

All of these potential scenarios have major problems, ranging from free-speech concerns to reputational and market risk. But users who provide meaningful value to these platforms are the platforms’ best asset, and the solution can be found in centering and empowering them. There are Web3 tools that let platforms identify those users and leverage their expertise, through economic incentives and game-theoretic voting mechanisms that focus on the longevity of all stakeholders. And empowerment of the masses will be the antidote to abuse by the powerful few.

Wulf Kaal is a law professor at the University of St. Thomas (Minnesota) and co-founder of Menagerie.is, an online tool that allows for the creation of decentralized organizations like DAOs, clubs and companies.
