Decentralizing Big Tech: A Path to Restoring our Freedoms Online

Computer code pictured. Photo by Markus Spiske.

Logging onto social media is often reminiscent of entering a war zone. Troll “armies” and users espousing unsubstantiated opinions assume the role of enemy insurgents. Rather than guns and artillery, their weapons of choice are insults and prejudiced remarks, doxxing threats, and the conscious or subconscious spread of misinformation. As we scroll through our feeds, we must maneuver around landmines of viral yet unvetted information, subjective statements disguised as fact, and hate speech.

Now, more than ever before, it is difficult to discern fact from fiction. As a result, local and federal governments around the globe, the public, and the five U.S. companies known as “Big Tech”—Microsoft, Facebook, Apple, Google, and Amazon—are entrenched in a debate over how to curb the spread of misinformation on social media while safeguarding users’ freedom of speech. This debate is further complicated by privacy concerns over Big Tech’s collection, use, and monetization of consumer data and by contentions surrounding its monopolization of the tech industry. As evidenced by the removal of the alt-tech social network Parler following the U.S. Capitol riot, Big Tech companies have responded to public outcry by employing more stringent content moderation and creating institutions like Facebook’s Oversight Board, commonly dubbed Facebook’s “Supreme Court.” Alongside these efforts to streamline content moderation, a wave of antitrust actions has been brought to restore competition within the tech industry, curb Big Tech’s consolidation of consumer data, and restore consumer freedom of choice in technology.

The interplay among these actors, their practices, and the implications for our speech and privacy begets questions about the responsibility Big Tech and governments bear in protecting our freedoms and privacy online. While it is evident that some level of content moderation and antitrust doctrine is necessary, do current regulations uphold fundamental citizen rights and democratic values, or must we look beyond our existing technological milieu and policies in our efforts to bolster consumer freedoms?

Content Moderation: A Practice Riddled with Subjectivity

At first glance, the process of moderating content may seem intuitive and standardized. Yet the content moderation practices of Big Tech companies are subjective and inconsistent due to flaws in the tools they use. Between April and September of 2019, Facebook purportedly removed 3.2 billion fake accounts along with millions of posts containing explicit child abuse, pornography, terrorism, and discriminatory language or misinformation. Approximately 90% of this content was removed by artificial intelligence (AI) technology—generated and licensed by proprietary and third-party providers—during the pre-moderation stage. At this stage, AI systems trained on adaptive “training data” apply techniques such as hash-matching, keyword filtering, and recurrent neural networks to filter and remove content deemed inappropriate or harmful under each company’s rules and terms of service. The remaining 10% is handled by human content moderators and largely consists of posts removed during the pre-moderation stage whose removal users have appealed.
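To make the pre-moderation stage concrete, here is a minimal sketch in Python of how a hash-matching and keyword-filtering pass might work in principle. This is a deliberately simplified toy, not any platform’s actual pipeline: real systems rely on perceptual hashes that survive re-encoding (rather than exact cryptographic hashes) and on trained classifiers (rather than a static keyword list), and every name, list entry, and decision rule below is invented for illustration.

```python
import hashlib
from typing import Optional

# Hypothetical databases for illustration only; real platforms maintain
# shared industry hash lists and continually retrained classifiers.
KNOWN_BAD_HASHES = {"9b74c9897bac770ffc029102a200c5de"}  # fingerprints of banned media
BANNED_PHRASES = {"dox them", "kill yourself"}           # toy keyword list

def hash_match(media_bytes: bytes) -> bool:
    """Exact-match lookup against known prohibited media."""
    fingerprint = hashlib.md5(media_bytes).hexdigest()
    return fingerprint in KNOWN_BAD_HASHES

def keyword_filter(text: str) -> bool:
    """Flag posts containing any banned phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BANNED_PHRASES)

def pre_moderate(text: str, media: Optional[bytes] = None) -> str:
    """Return 'remove', 'review', or 'allow' for a new post."""
    if media is not None and hash_match(media):
        return "remove"   # automatic removal, no human in the loop
    if keyword_filter(text):
        return "review"   # routed to a human moderation queue
    return "allow"

print(pre_moderate("a benign vacation photo caption"))  # allow
print(pre_moderate("go dox them"))                      # review
```

Even in this toy, the subjectivity described above is visible: someone must decide what belongs in the hash database and the keyword list, and those choices are invisible to the users they affect.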

Despite the usefulness of these moderation tools, the AI software used by Big Tech corporations has been shown to reify and perpetuate societal biases. AI is laden with subjectivities that, given the scale of moderation, dangerously proliferate. The field lacks effective algorithms, accessible ones—meaning algorithms that can be shared and developed among tech companies—and the professionals to implement them. Their inefficacy stems from the lack of high-quality datasets needed to train AI-enabled content moderation systems. Thus, AI is ill-equipped to detect nuances like national and cultural sensitivities. Although human content moderation is intended to resolve these disparities, it presents its own problems: the human moderators working for Big Tech corporations are often a mix of proprietary and third-party, short-term contract workers, both American and international, whose review processes are intrinsically devoid of standardization.

Not only are content moderation tools vulnerable to bias, but the confidential internal policies that govern how flagged content is distributed into reviewable queues also depend on the regional sociopolitical context and the bottom-line motivations of the company. These subjectivities, which prioritize the motivations of Big Tech corporations, can act symbiotically to impose a distinctly American, libertarian interpretation of concepts such as free speech and information access. Commercial content moderators often act as conduits for the complex sets of values and cultural norms favored by their platforms and embed their own belief systems into their moderation practices. Thus, to engage in discourse online or prevent the removal of their content, social media users must conform to predominantly Western values, the ideologies of artificial systems and human content moderators, and a company’s brand reputation. A social media user may post at-home treatments that eased their own COVID-19 symptoms, for example, only for the platform to flag that content as threatening because posts that misalign with CDC guidelines could harm its reputation. Alternatively, AI software or human content moderators may remove a post simply because its content does not align with the data inputs in the AI system or with moderators’ personal ideals. At the whim of these unknown agents and motives, users are left to stomach the dilution of their free speech online.

The Big Tech Monopoly Constricts the Consumer

The constraints imposed by Big Tech’s biased moderation practices prompt many users to seek alternative platforms; however, Big Tech’s monopolization of the technology industry has diminished the availability of competitive alternatives. Big Tech’s success is largely dependent on its consolidation and sale of consumer data—from usernames and passwords to IP addresses to the dates, times, and volumes of inbound and outbound phone calls and text messages—via proprietary AI and third-party developers. These data insights are used to refine user profiles for targeted algorithmic recommendations that increase website, advertising, and even political engagement, ultimately generating economic, political, or social value.
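As a rough illustration of how consolidated data feeds targeted recommendations, the Python sketch below builds a user interest profile from engagement history and ranks ads against it. This is a hypothetical, minimal example; real recommendation systems use learned embeddings over vastly more signals, and every name and number here is invented.

```python
# Toy targeting sketch: profile a user from engagement history,
# then rank ads by similarity to that profile. Purely illustrative.
from collections import Counter

engagement_history = ["sneakers", "sneakers", "basketball", "politics"]
ads = {
    "shoe_sale":      {"sneakers": 1.0, "basketball": 0.3},
    "voter_outreach": {"politics": 1.0},
    "cookware":       {"cooking": 1.0},
}

# Interest profile: relative frequency of each topic the user engaged with.
counts = Counter(engagement_history)
total = sum(counts.values())
profile = {topic: n / total for topic, n in counts.items()}

# Score each ad as a weighted overlap between its topics and the profile.
def score(ad_topics: dict) -> float:
    return sum(weight * profile.get(topic, 0.0) for topic, weight in ad_topics.items())

ranked = sorted(ads, key=lambda name: score(ads[name]), reverse=True)
print(ranked)  # ['shoe_sale', 'voter_outreach', 'cookware']
```

The more engagement data a platform consolidates, the finer-grained this profile becomes, which is precisely what makes consolidated consumer data commercially and politically valuable.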

Cutouts placed on the Capitol lawn as Mark Zuckerberg prepared to testify before Congress. Photo by Avaaz.

Big Tech companies, however, weaponize this data and their scale to curb competition. Facebook, for instance, often permits third-party developers, who perform their own data analysis and extraction, on its platform only when doing so enhances its own data insights. If third-party developers become competition, Big Tech companies have two options. They can purchase them, as Facebook did when it acquired WhatsApp in 2014, or bury them by drastically shifting platform policies and regulations to stunt further growth, as Facebook did when it made its application programming interfaces (APIs) available to third-party developers only on the condition that they refrain from developing competing functionalities. This “bury or buy” practice enables Big Tech companies to remain largely unrivaled in their services and leaves consumers with little choice about which platforms to use. Consumers are beholden to Big Tech corporations and must therefore endure their fragmented content moderation practices, however much those practices infringe on consumer freedoms.

Why Policy Reform and Antitrust Doctrine Have Been Insufficient

Aside from internal policy reform, Big Tech, particularly Facebook, has created a centralized and allegedly unbiased Supreme Court-like body to make final decisions about flagged content on Facebook: the Oversight Board. The Oversight Board is an independent body composed of forty global thought leaders working in constitutional law, media policy, and humanitarianism. Facebook first selected four co-chairs through a global consultation process involving over 650 people in 88 countries. These co-chairs then work with Facebook to source, vet, and interview candidates for the remaining thirty-six board positions. The board selects a small number of “highly emblematic cases” and determines whether the underlying decisions were made in accordance with Facebook’s stated values and policies for content moderation. It operates through an appellate process: any individual can appeal a content decision to the board, and if the case is selected, it is reviewed transparently. The board can also issue recommendations to Facebook.

At the end of January, the Oversight Board announced its rulings on its first batch of cases, overruling Facebook’s decisions to remove several posts pertaining to a COVID-19 “cure,” Nazi propaganda, nudity, and salacious commentary on the Uighur Muslims while upholding one decision regarding the Armenian and Azerbaijani peoples. Following these decisions, the Oversight Board recommended that Facebook tell users the specific rule their post violated and better define its policies on dangerous misinformation. These rulings will serve as precedent for how Facebook handles similar issues and have established credibility for the Oversight Board in the public and governmental eye.

However, despite the board’s early stage and its composition of lauded public figures, it merits scrutiny for centralizing and streamlining the ideological landscape while dangerously punting culpability away from Facebook. The Oversight Board’s review process is more intensive than proprietary and third-party content moderation, but it ultimately follows the same model leveraged by content moderators: it creates and uses a centralized body that “approximates” objectivity. Facebook has shifted responsibility for how content should be moderated from itself to the Oversight Board—from one centralized body to another. Furthermore, despite its appearance as an independent body, the board is funded by Facebook, which calls into question the true objectivity and transparency of its rulings. Its rulings also consider only an incredibly small fraction of the millions of pieces of content that Facebook removes, meaning that the systemic free speech flaws in Facebook and other Big Tech companies go unchecked. Thus, while I am inclined to continue observing the Oversight Board in its nascent stage, its current iteration symbolizes a quasi-legal figurehead monitoring Big Tech’s interference with free speech, and it remains just that—symbolic.

Additionally, Big Tech’s stifling of competition and consumer autonomy through the “bury or buy” strategy has recently been met with antitrust litigation, but this may not be enough to deter its monopolization of the tech industry. In December 2020, the Federal Trade Commission charged Facebook with violating Section 2 of the Sherman Act, which makes it unlawful to monopolize trade among the several states. Facebook is also charged with violating Section 7 of the Clayton Act, which prohibits acquisitions that significantly reduce competition or tend to create a monopoly. The outcome of this lawsuit could be a watershed moment for Big Tech if it revitalizes the competitive landscape. However, Big Tech has amassed so much data, engagement, and integration in our daily lives that mounting a genuine challenge to it will be difficult in the near future.

A Path Forward

Ultimately, the flaws of content moderation and of the Big Tech monopoly drive the need for decentralization. Neither Facebook, the Oversight Board, nor any other technological platform alone should make decisions about what is removed online. National, cultural, and individual subjectivities about what constitutes free speech call for the public to be at least partially involved, if not entirely at the helm. Revitalizing competition through further antitrust action against Big Tech can ensure that the public has the power to leave a platform for a competitive alternative should Big Tech continue to centralize and monopolize. Platforms must also commit to responsible decision-making in their moderation practices by ensuring that their rules coincide with human rights standards rather than instituting new rules and creating new institutions as they go. In doing so, perhaps social media and Big Tech will be less reminiscent of war zones, and our online freedoms will be on the path to restoration.

Tyrese Thomas is a third-year student in Columbia College studying Sociology and East Asian Studies. He has particular interests in entrepreneurship, venture capital, fashion and supply chain, and law.
