Political Tetris: Demystifying The Role of Technology in Gerrymandering

A visual representation of the impact of gerrymandering. Graphic courtesy of Wikimedia.

The 2020 election was consequential for a variety of reasons, not least of which is the fact that, with a decade having just ended, state legislatures will soon begin the constitutionally sanctioned process known as redistricting. Occurring every ten years, after the census, redistricting involves states revising their congressional and state legislative district maps, theoretically in response to demographic changes and with the goal of better reflecting their current populations. Though the partisan clashes that occur during this affair garner less attention from the public than those on Capitol Hill, they are just as, if not more, momentous. The balance of power established by overtly partisan redistricting, known as gerrymandering, often has reverberating consequences that shape the next several election cycles. In fact, as reported by the Center for American Progress, unfairly drawn congressional districts in 2010 shifted, on average, 59 seats in the House of Representatives during the 2012, 2014, and 2016 elections.

While such malignant and long-lasting effects of gerrymandering have been well documented and the motivations of the lawmakers who pursue it well understood, there is one crucial aspect of the problem that remains rather inconspicuous: the role of technology. In order to ascertain how gerrymandering might be curtailed, we ought to closely explore how developments in machine learning have perniciously affected the redistricting process, and, conversely, how they might yet save it.

For starters, it is evident that the process of redistricting today looks nothing like it once did. John Ryder, the former general counsel of the Republican National Committee and a principal architect of the Republican Party’s 2010 REDMAP initiative, notes that when he and his colleagues started working in the field back in the 1970s, their core set of tools included “handheld calculators, paper maps, pencils, and really big erasers.” This began to change by the 1990 redistricting cycle, when companies like the Caliper Corporation started to develop cheaper and more precise mapping software. Though such applications were originally intended for other, transportation-related purposes, redistricting operatives in both parties (ever watchful for opportunities that might give them an edge over their opponents) began looking into the prospect of repurposing the software to more quickly generate and analyze potential districting solutions. In the process, it became abundantly clear that these applications, even in their infancy, were far more efficient and dynamic than paper and pencil would ever be. As demand for such tools grew and as the fields of data science and machine learning matured, redistricting software became more specialized and more powerful.

By the 2010 cycle, partisan lawmakers had access to state-of-the-art algorithms with the ability to design thousands of hypothetical district maps nearly instantaneously. The software could then use data about the political leanings and demographic characteristics of various neighborhoods to predict how each potential layout might tip the scales in favor of one party or another. These tools allowed legislators to gerrymander with more precision than ever before by offering suggestions regarding how to fine-tune district lines even at the household level so as to ensure maximum advantages. This “political laser surgery,” as it was dubbed by David Thornburgh, president of an anti-corruption organization known as Committee of Seventy, is the reason the 2010 redistricting cycle is widely considered the worst in our country’s history. 
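The brute-force search described above can be sketched in miniature. Everything in the snippet below is invented for illustration: the precinct names, the vote shares, and the crude three-district partition. Real redistricting software works with far richer demographic data and geographic constraints, but the core loop — generate many candidate maps, score each one's partisan outcome, keep the most favorable — looks something like this:

```python
import random

random.seed(0)

# Hypothetical precinct-level vote shares for "party A" (invented data).
precincts = {f"p{i}": random.random() for i in range(12)}

def seats_won(districts):
    """districts: list of precinct-name lists, one per district.
    A district counts as won when party A's mean vote share exceeds 50%."""
    return sum(
        sum(precincts[p] for p in d) / len(d) > 0.5 for d in districts
    )

def random_map():
    """Partition the 12 precincts into three districts of four at random."""
    names = list(precincts)
    random.shuffle(names)
    return [names[0:4], names[4:8], names[8:12]]

# Draw many candidate maps and keep the one most favorable to party A —
# a toy stand-in for what commercial redistricting software does at scale.
candidates = [random_map() for _ in range(1000)]
best = max(candidates, key=seats_won)
print(seats_won(best))
```

Even this naive search reliably finds a map that outperforms a random one for the favored party; commercial tools add household-level targeting and predictive models on top of the same basic idea.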

And, unfortunately, prospects for this upcoming cycle and future ones are similarly dismal. Supreme Court Justice Elena Kagan summarized it best in her dissent for Rucho v. Common Cause, a landmark case in which the Supreme Court ruled that partisan gerrymandering claims are nonjusticiable because they represent political questions that lie outside of federal courts’ jurisdiction. "Gerrymanders will only get worse (or depending on your perspective, better) as time goes on — as data becomes ever more fine-grained and data analysis techniques continue to improve," she wrote. "What was possible with paper and pen — or even with Windows 95 — doesn't hold a candle (or an LED bulb?) to what will become possible with developments like machine learning. And someplace along this road, 'we the people' become sovereign no longer.” 

Of course, it’s unreasonable to assume that technology could now be somehow severed from the process of redistricting. After all, how would such a constraint even be enforced? Not only that, but it’s worth recalling that even though the aforementioned tools have facilitated and expedited gerrymandering, they are not to blame for the malicious intentions of lawmakers who pursue it. For this reason, some have considered turning the process of redistricting over to algorithms entirely, removing human intervention altogether. This idea, though theoretically promising, proves flawed upon a closer examination of the inner workings of these tools and algorithms.

For one, these algorithms are, in and of themselves, apolitical. Their only goal is to find the optimal solution to some programmatically defined problem given data and parameters with which they can measure success. Because these parameters must be externally determined, it is wholly illogical to imagine a process of redistricting completely devoid of human influence. Some might still ask, though: why not choose an objective criterion once, in some sort of bipartisan manner, and then turn the rest of the process over to the computers? Unfortunately, it turns out that it is rather difficult to imagine a single criterion that could ensure “fair” districts, for the definition of fair itself is so widely debated. Brian Olson of Carnegie Mellon University posited that, given gerrymandering’s history of engendering asymmetrical and distorted districts, it might be appropriate to have computers prioritize compact and equally populated districts. Though this seemed logical, researchers found that the resulting tool, called Bdistricting, wasn’t successful in creating districts that would have competitive elections. This is in part due to America’s political geography. Many states are characterized by dense, urban Democratic centers surrounded by sparsely populated, rural Republican areas; as such, tools that naively prioritize “nicely shaped” districts often still end up disenfranchising voters. Similar tools with varying interpretations of fairness failed as well. Not only that, but attempts at improving the algorithms by having them take such geographic considerations into account proved unsuccessful; with so many added variables and complexities, the problem simply became computationally intractable.
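To make the compactness criterion concrete, one widely used measure is the Polsby-Popper score, which compares a district's area to that of a circle with the same perimeter. (This particular formula is offered as an illustration of the kind of objective such tools optimize; Bdistricting's exact criterion may differ.)

```python
import math

def polsby_popper(area: float, perimeter: float) -> float:
    """Polsby-Popper compactness: 4*pi*A / P^2.
    Equals 1.0 for a circle; lower values mean less compact shapes."""
    return 4 * math.pi * area / perimeter ** 2

# A square district: area 1, perimeter 4.
square = polsby_popper(1.0, 4.0)

# A long, thin strip (10 x 0.1) of the same area — the sort of shape
# a classic gerrymander produces.
strip = polsby_popper(1.0, 20.2)

print(round(square, 3), round(strip, 3))  # → 0.785 0.031
```

A map-drawing algorithm can maximize the average of such scores across districts, but as the paragraph above notes, a compact map is not necessarily a competitive or representative one.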

Technology cannot, then, be trusted to handle the process of redistricting alone. But it can play an important role in its reform. Recently, researchers and engineers alike have begun building and publishing easy-to-use tools that expose the partisan motivations underlying given district maps. These pieces of software offer a window of transparency into the redistricting process that never existed before, and thus have the capacity to empower citizens to hold their representatives accountable. By simply and accurately summarizing the long-term effects of proposed district maps, this software weakens the ability of our state lawmakers to obfuscate their malign intent. Not only that, but some existing tools actually allow citizens to try their own hand at redistricting, making the process so easy that fifth graders were able to quickly become proficient. This could prove extremely useful in states like Arizona, Washington, and California, where the committee that oversees redistricting is required to consider any proposals and submissions made by the public.

In this way, the very developments in technology that empowered gerrymandering can now serve to hobble it. Of course, to truly eliminate the possibility of lawmakers manipulating and manufacturing maps of their own accord, more institutional change and oversight are needed. In the meantime, though, we citizens ought to use tools such as those provided by the Princeton Gerrymandering Project and the Committee of Seventy to monitor our elected officials as they begin redistricting later this year. After all, our lawmakers might have graduated from paper and pen, but so have we.

Shruti Verma is a staff writer at CPR and a sophomore in Columbia SEAS studying Computer Science with a minor in Political Science and Economics. 
