How Data Distorts: Artificial Intelligence and Cash Bail Reform

Binary numbers pictured. Photo by Pikrepo.


If decades' worth of science fiction movies are to be believed, a robot-led conquest of the world is imminent, given the rapid development of artificial intelligence over the past few years. As a recently declared Computer Science major on the Intelligent Systems track, I would find it borderline sacrilegious to let this view go unrefuted; I'm a firm believer in the potential of AI to greatly benefit our world. Machine learning has already revolutionized our understanding of genetics, allowing us to quickly and accurately identify individuals at risk for certain diseases.

Additionally, progress in computer vision is enhancing our ability to navigate the aftermath of natural disasters, and natural language processing has proven incredibly useful in educational settings. My passion and excitement for the future of the field are not unqualified, however. Though I doubt the validity of the dystopian futures portrayed in contemporary novels and films, I am acutely aware that AI has shortcomings and that placing blind trust in it can have serious consequences. One particularly critical example of this danger is the use of artificially intelligent algorithms in the United States' bail system.

Designed to ensure that defendants comply with court proceedings and return for their trials, the US cash bail system allows a judge to determine the sum of money that a defendant must pay in order to avoid pretrial detention. This money serves only as collateral; it is returned to the defendant after they make their necessary court appearances—that is, if they are able to pay in the first place. The practice of cash bail is rightly criticized by a majority of Americans for criminalizing poverty by punishing those who simply don’t have the means to pay their court-determined bail amount.

The statistics bear this out: the Prison Policy Initiative has determined that most pretrial detainees have income levels that place them in the poorest third of the country. For them, the median bail amount set in the United States is roughly equivalent to eight months of income, making it incredibly difficult, if not impossible, to avoid detention.

Another problem with the cash bail system is the broad discretion afforded to judges in deciding the bail amount to set for a particular defendant. This ambiguity leaves cash bail susceptible to the same racial bias that pervades the rest of our criminal justice system. The Brennan Center for Justice finds that African American men on average receive bail assessments 35 percent higher than those of white men tried for similar crimes. More generally, studies conclude that being Black increases an individual's chances of being detained pretrial by 25 percent. These disproportionate impacts on already marginalized populations have motivated some states to search for alternatives that eliminate monetary payments while also limiting the role of judicial discretion. In particular, some states have looked to technology, proposing risk assessment tools that use intelligent algorithms to advise judges on whether or not an individual should be granted bail. Unfortunately, a closer look at how these tools would work and at the results they produce suggests that the use of AI in cash bail reform is misguided and dangerous.

To understand why, we should first consider the nature of the software being proposed. In general, intelligent algorithms function by analyzing large sets of real-world data in hopes of discovering patterns that can be used to make predictions about the future. Pretrial risk assessment tools, then, would be trained on decades' worth of past bail assessments and trial outcomes. This in turn means that the data the tool understands to be "correct," and from which it tries to synthesize patterns, will reflect the racial and ethnic disparities in policing, prosecution, and judicial decision-making that have historically characterized our criminal justice system. If an algorithm is only ever given biased, racist data to learn from, how can we expect the predictions it makes and the suggestions it gives to judges to be any less flawed?
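To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python, using synthetic data and a generic scikit-learn classifier rather than any actual risk assessment tool. In it, two groups behave identically, but the historical detention decisions used as training labels are skewed against one group, and the trained model's recommendations simply reproduce that skew.

```python
# Hypothetical illustration only: if the historical decisions used as training
# labels are skewed against one group, a model trained to imitate them will
# reproduce that skew in its recommendations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical record": group A and group B behave identically,
# but past decisions detained group B far more often.
group_b = rng.integers(0, 2, size=n)           # 0 = group A, 1 = group B
prior_arrests = rng.poisson(1.5, size=n)        # a facially "neutral" feature
# Past detention decisions (the training labels) are biased against group B.
detained = (0.2 * prior_arrests + 0.8 * group_b + rng.normal(0, 0.5, n)) > 1.0

X = np.column_stack([prior_arrests, group_b])
model = LogisticRegression().fit(X, detained)

# The model's "risk" recommendations mirror the historical disparity.
recommend_detain = model.predict(X)
print("recommended detention rate, group A:", recommend_detain[group_b == 0].mean())
print("recommended detention rate, group B:", recommend_detain[group_b == 1].mean())
```

Notably, removing the group label from the inputs would not fix this: proxy features that correlate with group membership, such as arrest history or neighborhood, carry the same disparities into the model.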

Recent attempts at incorporating these tools into city and state judicial systems bear out this conclusion. In Broward County, Florida, the risk assessment tool in use was found to falsely label Black defendants as future criminals at twice the rate of similar white defendants, meaning that judges were advised to release white defendants more often than Black defendants. Similarly, in Kentucky, algorithms were much more likely to offer no-bail release to white defendants than to Black defendants. Importantly, these prejudiced results may do more than maintain the status quo level of racial disparity. In Virginia, the circuits that relied most heavily on artificially intelligent tools saw their jail populations become even more racially disproportionate; the existing trend of Black men being detained more often than white men was magnified by the algorithms. As expected, prejudiced data produced nothing more than prejudiced results. After all, if treating the past as a standard to be emulated is itself a flawed premise, how could we expect algorithms that take it as their guiding principle to be anything but flawed? Add to this the fact that the algorithms and data many of these tools employ are proprietary, and thus inaccessible to any accountability-seeking individual or organization, and it becomes clear that such risk assessment tools are not a viable alternative to the current cash bail system.

This assessment is not to say that bail reform is impossible or unnecessary. Non-monetary supervised release, for instance, has recently shown promise as an alternative. The aforementioned effects of cash bail on marginalized populations are undeniable and especially distressing when one considers what detained individuals risk losing while they await trial in jail: their jobs, their homes, their educational opportunities, and potentially even custody of their children. But it is dangerous to pursue an alternative that will maintain, and potentially exacerbate, the injustices created by the bail system, all under the guise of progress. Artificial intelligence may have a role to play in bettering our criminal justice system in the future, but in the bail system it does nothing more than venerate the racist patterns of our recent past and recklessly extend them. It seems, then, that instead of worrying about some hypothetical robot takeover, we should guard against the misapplication of artificial intelligence in the present day, ensuring that this powerful technology, which is surely capable of doing much good, is not used to perpetuate society's gravest errors.

Shruti Verma is a staff writer at CPR and a sophomore in Columbia SEAS studying Computer Science with a minor in Political Science.
