Users, Platforms and Governments: A Case for Collaboration in Shaping the Digital Landscape

Facebook Headquarters at 1 Hacker Way, Menlo Park, California. Photo by Anthony Quintano.

“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”

– John Perry Barlow, A Declaration of the Independence of Cyberspace

American essayist and political activist John Perry Barlow wrote his manifesto “A Declaration of the Independence of Cyberspace” in 1996 on the heels of the Telecommunications Act of 1996. Provisions within the act, specifically the Communications Decency Act (Title V), stipulated that the Federal Communications Commission (FCC) could censor indecent or obscene materials at its discretion. Barlow’s manifesto denounced the bill and advocated for the maintenance of the internet’s self-governance and its independence from traditional hierarchical structures. As spirited as the manifesto was, a look at today’s digital landscape proves it to be overly idealistic. There is an urgent need for content regulation that balances user agency with state interests and the protection of users from online harm.

The Case for Regulation

The internet is an unprecedented development in human history, allowing communication to reach all corners of the earth. Digital platforms have become public spheres where users congregate to exchange, disseminate, and receive information. This very fluidity means that the impacts and consequences of online action transcend national borders and implicate matters that are of interest to states, such as election security.

A prominent example is the “Jenna Abrams” Twitter account (@Jenn_Abrams), a highly successful disinformation campaign run by Russia’s Internet Research Agency (IRA) during the 2016 US Presidential Election. The IRA had aimed to sow discord among the American electorate by creating thousands of social media profiles impersonating everyday Americans from diverse backgrounds and political leanings, a tactic through which they would influence the trajectory of public discourse. The Jenna Abrams persona was one of the agency’s most successful attempts––by the time of its unmasking in November 2017, the Twitter account had gained more than 70,000 followers and was the second-most followed English-language IRA Twitter account. The account took on the persona of a young, white, American woman whose likeability, perceived authenticity, and performance of cultural assimilation allowed her infiltration into American conservative circles on Twitter. The account’s modus operandi was to build Abrams’ following through viral, pop culture-infused tweets and then incite divisions by promulgating discordant views on issues like immigration, segregation, and Donald Trump, especially nearing the 2016 election. An investigation by The Daily Beast revealed that Abrams had been featured in articles by over 30 media outlets, including mainstream news media like the New York Times and CNN, evidence of the account’s successful impersonation of an American political voice. In the account’s later stages, it attracted an average of 700 likes and 500 retweets for every tweet, and it was even directly retweeted by influential far-right personalities like Michael Flynn Jr. and Paul Joseph Watson. The account operated for more than three years, from 2014 to 2017, before Twitter noticed something was amiss and shut it down. This incident demonstrates how easily users acting in bad faith can threaten a country’s vital functions, such as elections.

 

Governmental bodies are hence left scrambling to balance the protection of national interests like election security with the imperative to protect the fundamental rights of internet users. The latter not only involves upholding users’ freedoms of expression and speech, but also entails protecting the other diverse rights established by human rights law. While freedom of expression is indeed a fundamental right, its exercise should not infringe upon other fundamental goals underlying human rights law in the Universal Declaration of Human Rights (UDHR), which include respect for others’ rights and protection from personal interferences or attacks. This caveat deters abuse of the right to free speech, which can take the form of racial slurs or smear campaigns that compromise the victims’ rights. While the UDHR is not a treaty and therefore is not legally binding, it has served as a touchstone for the formulation of the nine major human rights treaties. All United Nations members have ratified at least one of these nine treaties, indicating a global consensus to respect and uphold their articles and those of the UDHR. It seems, then, that the internet can never be entirely free of regulation.

Sole Governance by Users, Platforms or the Government Isn’t Feasible

“We believe that from ethics, enlightened self-interest, and the commonweal, our governance will emerge… The only law that all our constituent cultures would generally recognize is the Golden Rule. We hope we will be able to build our particular solutions on that basis.”

– Barlow, Declaration

Barlow envisions the internet, devoid of governmental intervention, as a more humane and fair civilization moderated by “the Golden Rule,” which he purports to be the only law that diverse cultures would recognize in an idealized digital landscape. This ethic of reciprocity is perhaps too generous an assessment of the dynamics of internet communities. The reality of the internet is that while it affords us increased communication capacities, it also introduces volatility and increases the potential for conflict. The explanation for this is twofold. Firstly, the technical design of the internet itself is a double-edged sword. Undeniably, the internet accords users numerous benefits, such as the power to communicate with others around the globe. However, this same capacity paradoxically breeds and exacerbates viral, mass reactions, as it allows misinformation, mal-information, and disinformation to spread instantly and widely. Additionally, the internet’s two-way communication channels go beyond the capabilities of one-way platforms like the telegraph or radio, meaning users have more leeway to exercise their freedom of speech. Regrettably, however, the right to express oneself generally seems to give more power to abusive users than to those being abused, as hateful rhetoric is given legitimacy and airtime online in the name of “free speech.” In fact, a Pew Research Center study found that 41 percent of Americans have personally experienced online harassment, with the number rising to 64 percent among young adults (under 30 years old). Taken together, it is evident that online speech has a high tendency to derail from civility, and an internet ruled by the spirit of reciprocity, or the Golden Rule, is woefully under-equipped to deal with such a sensitive landscape.

As such, it appears regulation in some form is necessary. Ironically, Barlow avers that the internet’s governance will emerge from “enlightened self-interest and the commonweal,” seemingly incognizant of the fact that self-interest and the commonweal’s interests are often fundamentally incongruent. Advocates of an individualist cyberspace like Barlow seem convinced that privately-run, self-governed digital platforms would be more neutral and equitable compared to state-run media, which is often propelled by governmental interests for the sake of power entrenchment. However, they fail to recognize that online platforms produce content just as biased as that of state-run media. Big Tech companies like Facebook or Twitter are privately-owned, profit-oriented enterprises that have little incentive to serve interests beyond those of their shareholders. As such, we cannot rely on digital platforms to self-govern.

A case in point would be the 2014 anti-Muslim riots in Myanmar, which were sparked by an exaggerated online allegation that spread like wildfire on Facebook, a platform that was then (and remains) almost synonymous with the internet in Myanmar. Despite Burmese officials’ and foreign correspondents’ attempts to urge Facebook to step up campaigns against hate speech and disinformation, Facebook’s interest in Myanmar’s market potential seemed to override concerns about the hate speech its platform was breeding. In other words, Facebook and other Big Tech firms are more inclined to serve their own narrow self-interest in profit-making than the commonweal. Evidently, private actors cannot be counted on to serve the public good. When weighing privately owned digital media against state-run media, it appears neither is objective in presenting information for public consumption. As such, the ideal of a fairer, self-governed internet does not hold water––digital platforms cannot serve as a regulatory body on their own.

A Three-Pronged Regulatory Approach

“We must declare our virtual selves immune to your [governments’] sovereignty, even as we continue to consent to your rule over our bodies. We will spread ourselves across the Planet so that no one can arrest our thoughts.”

– Barlow, Declaration

Indeed, governments, like privately-owned media platforms, have their own interests, which pose a challenge to the creation of a humane and fair internet as Barlow had envisioned. However, it remains essential that governments play a cooperative and complementary role to the internet’s own regulatory efforts, such as user verification and content moderation, to safeguard civil, enjoyable online experiences. Additionally, users, as relevant stakeholders, should be involved as regulatory members themselves, to actively shape a digital landscape they want to participate in. These three stakeholders––government, private platforms, and users––are inextricably linked to the trajectory of digital platforms and must work in tandem to keep each other in check and shape a more equitable, respectful media landscape.

Barlow’s assertion that the rule of law in nations cannot apply to a bodiless, matter-less internet is a very dangerous one. This ideology emboldens abusive users who feel they can get away scot-free after spreading divisive or abusive material online. It also fails to acknowledge that, however intangible or bodiless cyberspace may be, its wide-reaching impacts can touch all areas of life, both online and offline. For instance, attacks or disinformation campaigns that seek to undermine or damage their targets can severely compromise their credibility and reputation in real life. Vaccine disinformation throughout the pandemic is just one example of a harmful online campaign with potentially deadly public health consequences, both for people who buy into it and for those affected as a result of others’ non-compliance with vaccination. Governments are burdened with the duty to protect their constituents from infringements upon their rights and welfare, such as in the case of vaccine disinformation, even online. For example, government intervention could take the form of criminal penalties either against digital platforms for hosting abusive or harassing content, as currently enforced in Australia, or against individual content posters.

For digital platforms that wish to self-regulate speech and activity, the reality is that they are ill-equipped to take users’ voices into account during content moderation. Big Tech companies’ staff are predominantly white, college-educated males, an unrepresentative cross-section of the internet’s large user base. The implication is that rules of propriety on such platforms are often crafted by small teams of people who share a particular worldview, leading them to overlook minority perspectives and cultural nuances to which they have little to no exposure. As such, content moderators are often incapable of judiciously determining appropriate regulations or recourse when different contexts and cultural norms are at play. This means that minority perspectives are unfairly penalized and rendered less valid than those of dominant segments of society, since they are not part of the dominant discourses normalized and perpetuated by this select demographic of content moderators. Essentially, the narrow demographics and worldviews of platform moderators inadvertently reinforce racism and other forms of inequity by rendering selective content visible or invisible at their own discretion.

Mark Zuckerberg calls for stronger internet regulation. Photo by The Guardian. 

This is where user participation and governmental guidelines both prove relevant. At the very least, digital platforms should intentionally ensure equal representation of diverse perspectives by employing platform moderators of diverse demographics. Diversifying recruitment could account for varied perspectives and provide moderators with clearer insights into each demographic’s tendencies and cultural norms. Facebook’s decision to create the Oversight Board, an allegedly independent, third-party review body, seems to be a step in the right direction. In May 2020, Facebook announced that the scope and candidacy of the forty board positions were first delineated via a global consultation process with over 650 people in 88 different countries. In a bid to ensure members reflect a broad range of knowledge, backgrounds, and experiences, members who have lived in over 27 countries and who speak at least 29 different languages were chosen. Of course, questions about the extent of the Board’s independence and objectivity still stand––the initial four co-chairs, tasked with selecting the remaining 36 members, were themselves appointed by Facebook (assisted by consultancies), and the Board is funded by $130 million from Facebook.

Of course, users themselves should constitute a core part of content moderation, working with governments and digital platforms to co-regulate and shape the trajectory of online discourse. Crowdsourced user ratings are one possible, democratic means of content moderation. Studies have shown that displaying trust ratings of laypeople on digital platforms can prove effective against misinformation or unsavory content. The mechanism works as follows: every post is accompanied by an “upvote” or “downvote” function that the general public can use to attest to the content’s civility or factual accuracy. The crowdsourced rating algorithm can then decrease the downvoted posts’ visibility substantially, while ensuring that lone, troll votes to the contrary have little impact on rankings once posts garner sufficient upvotes or downvotes. Reddit, for example, already employs an “upvote” and “downvote” function, although it is aimed more at determining whether the content “contributes to the conversation” than its factual accuracy. A mechanism more targeted at determining the objectivity or civility of content would thus help create a more pleasant internet experience for the masses. Not only can such an approach allow users of all demographics to work alongside platform moderators to build a safer, healthier online space, but it also can cover much more ground compared to bodies like the Oversight Board, whose members deliberate over only a small pool of content.
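The crowdsourced rating mechanism described above can be sketched in code. The following is a minimal, hypothetical illustration (not any platform’s actual algorithm): each post’s visibility score is a smoothed fraction of positive votes, where pseudo-counts pull sparsely voted posts toward a neutral score so that a lone troll vote barely moves the ranking, while heavily downvoted posts sink. The function names and the `prior_votes` parameter are assumptions for illustration.

```python
def visibility_score(upvotes: int, downvotes: int, prior_votes: int = 10) -> float:
    """Return a smoothed positive-vote fraction in [0, 1].

    `prior_votes` acts as neutral pseudo-counts: until a post accumulates
    enough real votes, its score stays near 0.5, so a single contrarian
    (troll) vote has little effect on visibility.
    """
    total = upvotes + downvotes + prior_votes
    return (upvotes + prior_votes / 2) / total


def rank_posts(posts: list[dict]) -> list[dict]:
    """Sort posts (dicts with 'up' and 'down' counts) by descending score."""
    return sorted(posts, key=lambda p: visibility_score(p["up"], p["down"]),
                  reverse=True)


posts = [
    {"id": "well_rated", "up": 90, "down": 10},
    {"id": "new_post", "up": 0, "down": 1},   # a single troll downvote
    {"id": "downvoted", "up": 5, "down": 95},
]
ranked = rank_posts(posts)
# The new post with one stray downvote stays near neutral visibility,
# while the heavily downvoted post falls to the bottom.
```

In practice, platforms layer far more signals onto such rankings (voter reputation, rate limits, anomaly detection), but the core idea of damping low-volume votes is what lets crowdsourced ratings resist lone bad-faith actors.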

One thing is clear: Barlow’s utopian vision of a wholly individualist, bodiless internet remains unfeasible and outdated. Facebook’s Founder and Chief Executive Mark Zuckerberg himself has affirmed the necessity of a “more active role for governments and regulators” to “protect society from broader harms.” The Oversight Board, for example, seems to be a preliminary step toward incorporating ‘outsider,’ impartial, diverse viewpoints in content moderation, but it is vital that users themselves are accorded active, forefront roles in shaping the digital landscape of which they are a part. At the center of the debate surrounding the internet’s trajectory lies a cognizance that the Big Tech companies behind the internet’s many applications share an interdependent relationship with users and governments across the globe. A collaborative partnership is needed to shape a conducive, civil cyberspace.

Eleanor Yeo (BC’23) is a senior editor at CPR, studying History and Sociology. She hails from sunny Singapore where it is summer all-year-round. In her free time, she enjoys pastries (and food in general really) and karaoke.