The Impact of Computerized Social Services on the Underprivileged

By Ernest Davis

Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. By Virginia Eubanks. St. Martin’s Press, New York, NY, January 2018. 272 pages, $26.99.

How does the computerization of governmental social services affect the poor? In Automating Inequality, Virginia Eubanks delivers a harsh verdict: throughout American history, government policy towards the poor has often amounted to criminalizing poverty; computer technology makes these policies more inescapable, more implacable, and more brutal. Eubanks’ book is deeply researched, well-written, passionate, and extremely troubling.

The core of Automating Inequality consists of three case studies: welfare eligibility determination in Indiana, housing eligibility in Los Angeles, and child welfare in Pennsylvania’s Allegheny County.

In 2006, the state government of Indiana hired IBM and Affiliated Computer Services (ACS) to develop a new, more efficient system to determine eligibility for welfare programs such as Medicaid and food stamps. Centralization was the new plan’s main feature. Under the old system, applicants and beneficiaries visited local offices that held their documentation, and each family was assigned a fixed caseworker. The new system moved services and documents to a centralized call center, whose workers handled issues on a task-by-task basis; each time a beneficiary called, he or she would talk to a new person.

The results were catastrophic. Under the previous system, the positive error rate (benefits incorrectly awarded) was estimated at 4.4 percent and the negative error rate (benefits incorrectly denied) at 1.5 percent. Between 2006 and 2008, the combined error rate more than tripled, rising from 5.9 to 19.4 percent, mostly through incorrect denials. Some 283,000 personal documents faxed to the center were lost, and a single missing document was enough to deny an applicant benefits.

Interacting with the system was often a nightmare. Applicants were told to expect a phone call within a certain time window; they would take leave from work to be home at that time, only to wait for a call that never came. Inability to reschedule a missed call was sufficient reason for denial of benefits. Applicants received letters refusing coverage with the justification “FAILURE TO COOPERATE IN ESTABLISHING ELIGIBILITY” and no further explanation. After weeks of work, they often discovered that a single required signature was missing or a document had been lost.

The whole incident became a major scandal, with multiple lawsuits. Fundamentally, however, IBM and ACS had given the state what it had asked for. Eubanks writes:

The goals of the project were consistent throughout the automation experiment: maximize efficiency and eliminate fraud by shifting to a task-based system and severing caseworker-to-client bonds. They were clearly reflected in contract metrics: response time in the call centers was a key performance indicator; determination accuracy was not. Efficiency and savings were built into the contract; transparency and due process were not.


In 2013, Los Angeles County introduced a “coordinated entry” system to match homeless people with available public housing. Its guiding principle is to give priority to the most vulnerable homeless people, as long as they are deemed responsible enough to be safe neighbors. To evaluate vulnerability, a program called the Vulnerability Index-Service Prioritization Decision Assistance Tool (VI-SPDAT) uses applicant information to compute a score between 1 (low risk) and 17 (very vulnerable).
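Mechanically, such instruments are simpler than their names suggest: they are essentially questionnaires whose answers are summed into a single number, which is then mapped to a priority tier. The sketch below illustrates the general shape of the computation; the item names, weights, and tier cutoffs are invented for illustration and are not drawn from the actual VI-SPDAT.

```python
# Illustrative sketch of a survey-based vulnerability score.
# The items, weights, and tier cutoffs below are hypothetical;
# the real VI-SPDAT has its own questionnaire and scoring rules.

HYPOTHETICAL_ITEMS = [
    "chronic_health_condition",
    "recent_emergency_room_visits",
    "history_of_abuse_or_trauma",
    "sleeping_outdoors",
    "no_regular_income",
]

def vulnerability_score(responses: dict) -> int:
    """Sum one point per affirmative answer; higher means more vulnerable."""
    return sum(1 for item in HYPOTHETICAL_ITEMS if responses.get(item, False))

def priority_tier(score: int) -> str:
    """Map a raw score to a housing intervention tier (cutoffs invented)."""
    if score >= 4:
        return "permanent supportive housing"
    if score >= 2:
        return "rapid re-housing"
    return "no housing intervention"

applicant = {
    "sleeping_outdoors": True,
    "no_regular_income": True,
    "recent_emergency_room_visits": True,
}
score = vulnerability_score(applicant)
print(score, priority_tier(score))  # prints: 3 rapid re-housing
```

Whatever the actual instrument’s details, this structure, a fixed questionnaire collapsed into a single number and then into a tier, is what drives the concerns that follow.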

Eubanks concedes that the coordinated entry system fares better than the chaos that existed under the previous process. However, two large issues persist. Computerization cannot solve the first problem: the supply of public housing simply cannot meet the need. Under such circumstances, the system is merely a source of bitter disappointment for many applicants, who complete applications but get no results. (Los Angeles recently passed two major ballot initiatives funding housing and other services for the homeless, which will hopefully alleviate this fundamental dilemma.)

The second issue pertains to the collected data. To apply through the coordinated system, one must provide a large amount of personal information. Applicants have no way of knowing where this information might go or how it may be used. Few people will sacrifice an opportunity for housing by refusing to provide the requisite details. As a result, while information is collected from large numbers of people, only a handful benefit by actually receiving housing.

Additionally, one’s criminal record inevitably becomes part of the application. Yet for the chronically homeless, conflicts with the law can be unavoidable. Consequently, the collection and storage of data often lead to the criminalization of the poor. Eubanks writes:

[T]he pattern of increased data collection, sharing, and surveillance reinforces the criminalization of the unhoused, if only because so many of the basic conditions of being homeless—having nowhere to sleep, nowhere to put your stuff, and nowhere to go to the bathroom—are also officially crimes. If sleeping in a public park, leaving your possessions on the sidewalk, or urinating in a stairwell are met with a ticket, the great majority of the unhoused have no way to pay resulting fines. The tickets turn into warrants, and then law enforcement has further reason to search the databases to find “fugitives.”


The Allegheny Family Screening Tool (AFST) is an automated system designed to identify children at risk of abuse or neglect. It was launched in August 2016 in Allegheny County, Penn., which includes Pittsburgh and its environs. The system’s design process was hands-on and transparent, its usage is measured, and its goals are modest.

The output of the program is merely advisory; final decisions are always left to the judgment of a human being. Marc Cherna and Erin Dalton, who administer the department that uses the program, are highly experienced and are respected and trusted by the community.

Nonetheless, there are reasons for concern. The predictive model is trained on decades of records, but both the input and outcome variables are problematic. The program aims to predict abuse and neglect, but neither is easy to measure accurately from the data. Instead, the system uses two proxies as outcome variables: “community re-referrals,” i.e., repeated calls to the hotline about the same family, and placement of children in foster care. But hotline calls appear to be racially biased and are sometimes simply malicious; because calls are anonymous, screening out vicious callers is impossible.
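It is worth making the proxy problem concrete. In the toy sketch below (the features, data, and model are invented, not taken from the AFST), the true outcome is never observed; the model is trained on re-referral, which by construction also reflects how visible a family is to potential reporters.

```python
# Sketch: a risk model trained on a proxy label rather than the true outcome.
# All features, parameters, and data here are invented; the point is
# structural, not a claim about the actual AFST model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features drawn from administrative records.
uses_public_benefits = rng.integers(0, 2, n)  # a poverty marker
prior_cyf_contact = rng.integers(0, 2, n)

# True maltreatment risk is unobserved; by construction it depends only
# on prior contact, not on benefit use.
true_risk = 0.05 + 0.10 * prior_cyf_contact

# What the data actually record is re-referral: another hotline call,
# which also depends on how visible a family is to potential reporters.
visibility = 0.10 + 0.25 * uses_public_benefits
re_referred = rng.random(n) < true_risk + visibility * (1 - true_risk)

X = np.column_stack([uses_public_benefits, prior_cyf_contact])
model = LogisticRegression().fit(X, re_referred)

# The coefficient on benefit use comes out large and positive even though
# it has no effect on true risk: the model has learned the reporting bias.
print(dict(zip(["uses_public_benefits", "prior_cyf_contact"], model.coef_[0])))
```

In this construction, any disparity in who gets reported is indistinguishable, from the model’s point of view, from a disparity in actual risk.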

The input variables are also questionable. Distinguishing indicators of neglect—lack of food, inadequate housing, unlicensed childcare, unreliable transportation, utility shutoffs, homelessness, lack of health care—from the natural effects of poverty is very challenging.

The Office of Children, Youth, and Families (CYF) offers a variety of services and support to indigent families. Necessarily, it is also tasked with reporting problematic family situations. This puts parents in a bind: in asking for aid, they risk having CYF take their children away.

A determination of neglect or abuse can have lifelong consequences; offenders are permanently barred from any jobs involving interactions with children. Children suffer as well. Since growing up in a troubled family is predictive of being an inadequate parent, they start life with an elevated AFST score. As Eubanks indicates, a high AFST score can easily become a self-fulfilling prophecy:

A family scored as high risk by the AFST will undergo more scrutiny than other families . . . A parent is now more likely to be re-referred to a hotline because the neighbors saw child protective services at her door last week. Thanks in part to the higher risk score, the parent is targeted for more punitive treatment, must fulfill more agency expectations, and faces a tougher judge. If she loses her children, the risk model can claim another successful prediction.
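
This feedback loop can be restated as a toy dynamical system: the score drives scrutiny, scrutiny drives the chance that an incident is observed and recorded, and each recorded incident raises the next score. In the sketch below (all parameters invented, not taken from the AFST), two families with identical behavior diverge purely because of their starting scores.

```python
# Toy model of a self-reinforcing risk score (all parameters invented).
# The score drives scrutiny, scrutiny drives recorded "events," and each
# recorded event feeds back into the next score.
import random

def simulate(initial_score: float, steps: int = 20, seed: int = 42) -> float:
    rng = random.Random(seed)
    score = initial_score
    for _ in range(steps):
        scrutiny = min(1.0, 0.1 + 0.05 * score)  # higher score, more visits
        # The family's underlying behavior is identical across runs; only
        # the chance that an incident is observed and recorded differs.
        if rng.random() < scrutiny:
            score = min(20.0, score + 1.0)
    return score

# Same seed, so both runs see identical random draws; only the starting
# score differs, yet the trajectories diverge.
print(simulate(initial_score=2.0))
print(simulate(initial_score=12.0))
```

Because both runs use the same random draws, every difference in outcome is attributable to the initial score alone, which is exactly the self-fulfilling dynamic Eubanks describes.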


Based on these case studies, I do not think a decisive argument can be made that computerization has hurt the poor in material respects, depriving them of services, housing, or goods. The Indiana case was an enormous fiasco, but the major failing seems to have been almost entirely organizational; the digitization itself was simply a web-based interface and a standard database for record keeping.

Computerization plays a much larger role in the Los Angeles coordinated entry program, a decision support system, and in the AFST, a decision support system built on big data analysis. But in these instances, it is not clear that the computerized system performs worse than any alternative would; the coordinated entry program certainly seems better than the method that preceded it.

However, if we ask instead whether these computerized tools bring the poor benefits that outweigh their costs in increased risk of arrest, potential loss of their children, compromised privacy, dehumanization, anger, and alienation, then Eubanks’ studies show that we have no reason for confidence and many grounds for skepticism. These systems look very different to their targets than to their designers. No computerized system, however thorough its data or clever its algorithm, can single-handedly effect a major change in the condition of the poor; such change will require a large and serious commitment from American society. At present, there is not much sign of one.

Ernest Davis is a professor of computer science at the Courant Institute of Mathematical Sciences, New York University.