A Digital Welfare Dystopia
In 2013, the state of Michigan integrated a $47 million automated fraud detection system, known as MiDAS, into its unemployment insurance program. In its first year of use, the system accused five times as many individuals of unemployment insurance fraud as had been accused in any prior year in the state’s history, when human employees had made such judgments. Excited by the prospect of uncovering previously undetected fraud, the state chose not to review this discrepancy manually, trusting instead that a technological tool was surely more reliable than error-prone individuals had ever been. Accordingly, state officials sent notices to every individual MiDAS had flagged, demanding that they immediately repay the unemployment insurance they had received, along with interest and other civil monetary penalties. The financial stress induced by these notices resulted in evictions, divorces, and homelessness for many of the accused. Forty-three-year-old Brian Russell, for example, no longer able to support his two children after making his repayment, was forced to file for bankruptcy.
It wasn’t until years later, after a handful of individuals had challenged the allegations and brought the case to court, that the state finally agreed to review the algorithm’s decisions. In the process, it found that a staggering 93 percent of the tool’s fraud determinations were demonstrably incorrect. In other words, more than 20,000 individuals had suffered as a result of wrongful accusations, all because an algorithm’s judgment had been treated as beyond question.
Unfortunately, the problem could not be traced to a single, accidental, and easily remediable error within MiDAS. As it turned out, the tool, programmed to associate fraud with departures from strict rules and patterns, interpreted every slight deviation from the ordinary as criminal behavior. This was, of course, problematic: individual lives are nuanced and complicated, shaped by happenstance and thus rarely conforming to a perfect, predictable framework. Whereas human caseworkers had been able to appreciate such complexities when reviewing files manually, this proved inherently difficult for software. Indeed, when governments in Australia and the Netherlands built and adopted their own welfare fraud detection systems, they too found the programs unable to comprehend the intricacies of an individual’s life; the programs often conflated innocent mistakes or idiosyncratic circumstances, such as those of contract workers, with fraud. Crucially, these same kinds of tools have been integrated into other facets of the welfare state, often touted as proof of progress and innovation. Here, too, they have had catastrophic consequences.
In 2006, for example, the governor of Indiana signed a $1.4 billion contract with a coalition of large technology companies, including IBM and ACS, to automate eligibility determinations for welfare programs such as the Supplemental Nutrition Assistance Program (SNAP) and Medicaid. Under the new system, applicants to these public assistance programs would have their eligibility verified by IBM’s software rather than by individual caseworkers. The intentions behind this decision were noble: the state wanted to increase efficiency, lighten the load on caseworkers, and simplify the process for applicants. And yet the outcome was disastrous. In the three years the system was in use, one million applications were denied, a 54 percent increase over the average number denied in the years beforehand.
Many of these rejections were, upon later review, found to have been made unfairly. In one case, a woman named Omega Young had her Medicaid benefits revoked because she missed a telephone recertification appointment while battling ovarian cancer in the hospital. As a result, she could no longer afford her medication, nor could she access free transportation to her medical appointments. Her plight was not unique. Another woman was deemed ineligible for Medicaid by the same IBM system when she failed to answer a call from ACS while in the hospital being treated for sudden heart failure. Their stories remind us that when technology fails in the context of a welfare system, it is the already vulnerable in our society who are most impacted.
Some suggest that the solution to this pervasive problem lies in mending the technological tools themselves, so that they can recognize special circumstances without immediately penalizing them. That prospect, though, remains elusive. For one, it is unreasonable to expect that anyone could explicitly enumerate, for the algorithm, every specific situation in which it should show grace. To this point, some might propose incorporating machine learning into such tools. After all, machine learning algorithms work by analyzing sets of real-world data in hopes of discovering patterns that can be used to make decisions about future, unseen cases. Incorporating such capacities into tools like MiDAS would, hypothetically, allow them to “learn” which cases are forgivable instead of being explicitly told so. Still, because machine learning models are trained on past data, the resulting tool may well err when given a case that differs significantly from those it was trained on. And errors in this context, as the Michigan and Indiana cases show, are devastating and must be avoided at all costs.
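To make the concern concrete, consider a minimal sketch in Python, using scikit-learn’s IsolationForest as a stand-in for a real fraud model. The data, features, and threshold here are invented for illustration and have nothing to do with MiDAS or any actual system; the point is only to show how a model trained exclusively on “typical” claim histories will flag an unusual but perfectly innocent pattern, such as a contract worker’s irregular income, simply because it has never seen anything like it.

```python
# Toy illustration (not a real fraud system): an anomaly detector trained on
# "typical" benefit claims flags an unusual-but-innocent case as suspicious.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical training data: weekly reported income (dollars) and hours worked
# for claimants with steady part-time jobs. All values are invented.
steady_income = rng.normal(loc=200, scale=20, size=(500, 1))
steady_hours = rng.normal(loc=15, scale=2, size=(500, 1))
X_train = np.hstack([steady_income, steady_hours])

# The model learns what "ordinary" looks like from past data only.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(X_train)

# A contract worker: no income for weeks, then one large payment and a long
# week of work. Innocent, but unlike anything in the training data.
contract_worker = np.array([[900.0, 40.0]])
steady_claimant = np.array([[205.0, 14.0]])

for label, case in [("steady claimant", steady_claimant),
                    ("contract worker", contract_worker)]:
    verdict = model.predict(case)[0]  # 1 = looks ordinary, -1 = flagged
    print(label, "->", "flagged as suspicious" if verdict == -1 else "not flagged")
```

The model here is not badly written; it is doing exactly what it was trained to do. The deviation it detects is real, but the meaning of that deviation, fraud or simply an unusual life, is something the pattern alone cannot supply.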
Not only that, but as Virginia Eubanks, Professor of Political Science at the University at Albany and author of Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, points out, even if we could somehow fix these problems in software, we ought still to be cautious about granting algorithms complete decision-making power in the welfare system. In particular, she is wary of using algorithms to determine who receives aid and who does not. This was done, for example, by the Los Angeles city government in 2013 to decide who among the 60,000 unhoused people in the region would receive the limited housing assistance available.
“Automated decision-making systems,” Eubanks argues, “act as empathy overrides, outsourcing inhuman choices about who survives and thrives, and who doesn’t” because we know we can’t make those choices ourselves. In particular, Eubanks contends that “we empower machines to make these decisions because they are too difficult for us, because we know better. We know that there is no ethical way to prioritize one life over the next.”
In other words, even if the technology progressed to the point where these tools worked perfectly, granting them all decision-making power would serve only to shield us from the intrinsic flaws in the welfare system and from the need for institutional change. Delegating questions of ethics and welfare to machines would mean we no longer have to come face to face with the individuals who have been left behind by the system.
Technology has begun to pervade every aspect of our world, including our systems of governance, so it is perhaps unreasonable to assume or wish that the welfare state could remain somehow immune. But to the extent that we can, we ought to restrict its applications to administrative or maintenance tasks that aid human caseworkers rather than supplanting them entirely. If we fail to take these precautions, then, as the UN’s Special Rapporteur on extreme poverty and human rights, Philip Alston, warned in a harrowing 2019 report, we may very well find ourselves “stumbling zombie-like into a digital welfare dystopia.”
Shruti Verma is a staff writer at CPR and a sophomore in Columbia SEAS studying Computer Science with a minor in Political Science and Economics.