Weapons of Math Destruction by Cathy O'Neil

O’Neil’s book offers a wide-ranging and alarming critique of Big Data technology and of profit- and efficiency-driven algorithms. The title and central concept of the book, Weapons of Math Destruction (WMDs), refers to prediction models that inform decisions at large scale and damage the well-being of many of the people subjected to them. They include models that categorize, track, screen, and manage “potential” criminals, contingent workers (especially minimum-wage workers), and job and loan applicants, and that set insurance premiums.

What makes WMDs even more destructive is their lack of transparency. You cannot argue with them, give them feedback, or inspect their inner mechanisms. They are unaccountable. The predictions of WMDs, with the high confidence often attached to them, become self-fulfilling: those flagged as at risk of recidivism (i.e., likely to commit another crime) receive harsher sentences, further losing their ability to leave the life of crime; those categorized as less likely to pay back their loans are assigned higher interest rates, further reducing their ability to leave the life of poverty.

Targeted campaigns (“micro-targeting”) on social media, probably a familiar topic to readers, are also discussed in a chapter on the impact of WMDs on our civic lives. For micro-targeting to be effective, the targeted subjects must be isolated and atomized, unable to communicate with fellow citizens about how the campaign managers are framing their message for them.

In addition to their lack of transparency, WMDs encourage the people they track, e.g., teachers who depend on quantitative evaluations, or YouTubers trying to get views, to game the system and modify their behavior to satisfy the algorithm.

In the first chapter, O’Neil points out that models aren’t value-neutral. They must simplify a situation based on values. A model of “food preparation” in a household, for example, is based on (i) the expected satisfaction of family members and (ii) management of available resources. Different households have models that prioritize either (i) or (ii). Even within a family, O’Neil reminds us, a child prioritizing enjoyment might propose a model of ice cream for every meal. A corporation motivated only by profit and efficiency can be compared to that child, unaware of the more important principles involved in preserving human dignity. Models are abstractions, which means they are essentially selective, and their selectivity is guided by values. To correct their biases, people must impose ethical constraints on models, which sometimes requires sacrificing efficiency.

O’Neil argues that a naive, efficiency-driven reliance on large-scale predictions about people, with respect to crime, work performance, productivity, or various forms of risk, means that those predictions will replace the people themselves. Thus the prediction that someone might commit a crime or be a bad worker becomes the judgment that the person is a future criminal or someone who shouldn’t be hired. Many of those predictions correlate with poverty, which is why WMDs reinforce existing inequalities and make it harder for people to move out of poverty.

The book is a must-read if you’re interested in the socioeconomic impact of Big Data.