Effective Altruism and the Cult of Rationality: Shaping the Political Future from FTX to AI
Who in society is left behind when those in power prioritize the future over issues affecting individuals today? The Effective Altruism (EA) movement has an answer. Utopian in their future-minded aims, effective altruists idealize an imagined future where accumulated wealth creates the greatest good for future generations. While not overtly political, the movement has latent, insidious effects on the functioning of American government and society. Most recently, that influence became explicitly visible in the FTX bankruptcy scandal and the subsequent congressional hearings. In reflecting on the movement's political and social repercussions, it is imperative that we assess what an EA-dominated world would look like. If effective altruism controlled political advocacy, society would tumble toward a different form of tyranny of the majority.
As a method of future-minded, strictly data-driven philanthropy—popular among young elites in the technology and finance industries—EA emphasizes the quantification of “the most good.” It reduces the effectiveness of solutions to three main measures: the number of potential lives saved, the odds of success, and the objective value derived from a philanthropic donation. A morbidly rational view of altruistic life, the movement translates its empirical analysis of value into its priorities for both the present and the future. The EA line of reasoning suggests that some present-day social issues should matter less if addressing them would detract from averting threats to the possibility of life for future generations, such as artificial intelligence turning against humans.
The EA movement started in November of 2009, founded by two Oxford University students, Toby Ord and William MacAskill. MacAskill and his group of like-minded peers took utilitarian moral philosophy to heart and created an organization called “Giving What We Can,” whose members were encouraged to donate 10 percent of their income to charities. It then evolved into the wider EA movement. MacAskill writes, “Humanity, today, is like an imprudent teenager.” In other words, short-term enjoyment has become a prime motivator for decisions, despite the magnitude of their long-term repercussions. For example, while it may be seen as less ethical to work on Wall Street, if one does so with the intention of giving away their earnings, it would be considered more “effectively altruistic” than working at a low-paying yet socially responsible job. Such paradigms sit at the core of EA’s philosophy.
The EA movement has become more prevalent in the current socio-political landscape due to its connection to Sam Bankman-Fried and the FTX cryptocurrency scandal. While the actual function of FTX was largely unrelated to promoting EA ideology, Bankman-Fried himself was one of the most vocal advocates for the EA mission and provided substantial donations to the cause. His net worth, prior to congressional investigations, was estimated at $32 billion.
On November 11, 2022, Bankman-Fried’s company filed for bankruptcy. $8 billion in customer deposits were reported missing, and Bankman-Fried was shortly thereafter arrested in the Bahamas on December 13, 2022. The next day, the United States Senate Committee on Banking, Housing, and Urban Affairs conducted a full committee hearing about the crypto crash.
This incident created several problems for the EA movement. Not only was the movement’s public image damaged, but the millions of dollars Bankman-Fried planned to give to EA organizations and affiliated charities disappeared overnight. The movement lost one of its wealthiest supporters. More importantly, however, the scandal prompted a reckoning over how a movement that prioritizes rationality, scrutiny, and risk assessment could overlook such blatantly unethical practices. Of course, Effective Altruists insist the FTX scandal has not impacted their mission.
To a certain degree, the financial solvency of the EA movement is irrelevant. The FTX scandal has shed light on a far more profound issue—the reliability of EA's leadership. The scandal gives us cause to ask why those in the Effective Altruism movement should be the ones to determine effectiveness and rule on what deserves philanthropy. The individuals involved in the EA movement often come from positions of relative privilege, particularly those in the tech field. They are predominantly young, male, white, educated, and socioeconomically advantaged. According to the Center for Effective Altruism, the median age of those involved in the movement is 24; the majority are employed or have finished their collegiate education, and fewer than 15 percent are undergraduates. A privileged cohort determining what benefits the majority, while dismissing problems impacting minority groups, does not translate into equitable policymaking.
Policymaking should not be based on deserving; it should be based on what is equitable. The utopia of EA, however, is a utopia based on deserving. There are criteria to get in and reap the benefits of accumulated bounty, and many do not make the cut because they do not fit within a quantifiable mode of statistics or “effectiveness.” For Effective Altruists, this utopia exists somewhere in the future and is denied to people in the present, unless they are among the few who are rational enough to be part of the movement or deemed worthy of its reward.
Most Effective Altruists believe that what stands in the way of their utopia is artificial intelligence—that is, AI exceeding the intellectual performance of humans and taking over human society. If the exponential growth in AI capabilities is controlled, it could benefit society. If AI goes unchecked, however, there could be dire consequences, with the Center for Effective Altruism warning that “it could result in an extreme concentration of power in the hands of a tiny elite.” A different tiny elite from the one funding EA efforts? These expanding AI capabilities will affect future generations. The proposed way to combat this potentially catastrophic future is AI value alignment, which seeks to ensure that AI behaves according to the same values as humans. Effective Altruism pumps a great deal of money into AI alignment research, such as UC Berkeley’s Center for Human-Compatible AI. Alignment is often framed in terms similar to the relationship between constituents and their governments: how do people ensure that their government is working for the will of all the people? They develop a constitution, a set of rules that dictates proper action and values.
Effective Altruists seek a utopia that is sterile and sealed off from the small struggles of the present. While upholding rationality as the means to an altruistic life, the movement fails to think rationally about its own existence and, most importantly, about the consistency between its projected values and its actual actions. If political advocacy were to adopt this framework, the majority would overshadow the needs of the few in the present, and the good of real humans would be eclipsed by the focus on artificial intelligence. Thus, we must be wary of the power behind a mindset fixed solely on the hypothetical future, and allow space and empathy for the short-term needs of society. A tyranny of the quantifiably rational majority would lead to more quantifying of human suffering than to policy change.
Claire Schnatterbeck is a Junior Editor for Policy 360. She is a rising senior (CC ‘24) studying political science. When she's not searching for an open seat in Butler Library she can be found listening to a podcast, strolling through Riverside park looking for dogs, or discussing her favorite Simon and Garfunkel song. She has roots in Illinois and Wyoming.
This article was submitted to CPR as a pitch. To write a response, or to submit a pitch of your own, we invite you to use the pitch form on our website.