
Mathematical models: the toxic variety

Job applications, credit ratings and the likelihood of being arrested can be affected by mathematical models. Some of the models have damaging effects.


In 1983, U.S. News & World Report – then a weekly newsmagazine in competition with Time and Newsweek – published a ranking of US universities. For U.S. News, this was a way to increase sales. Its ranking system initially relied on the opinions of university presidents, but later diversified to use a variety of criteria. As years passed, the U.S. News ranking became more influential, prompting university administrators to try to improve their rankings by hiring academics, raising money, building facilities and, in some cases, gaming the system.

One of the criteria used in the U.S. News ranking system was the undergraduate acceptance rate. A low acceptance rate was assumed to mean the university was more exclusive: a higher percentage of applicants is rejected at Harvard than at Idaho State.

US high school students planning further study are commonly advised to apply to at least three prospective colleges. Consider the hypothetical case of Sarah, an excellent student. She applied to Stanford, a top-flight university where she would have to be lucky to get in; to Michigan State, a very good university where she expected to be admitted; and to Countryside Tech, which offered a good education despite its ease of admission.

Sarah missed out at Stanford, as expected, and unfortunately was also rejected at Michigan State. So she anticipated going to Countryside Tech, but was devastated to be rejected there too. What happened?

The president of Countryside Tech was determined to raise his institution’s ranking. One part of this effort was a devious admissions policy. Sarah’s application looked so strong that admissions officers assumed she would end up going somewhere else, so they rejected her to improve Tech’s acceptance rate and make Tech seem more exclusive. Sarah was an unfortunate casualty of a competition between universities based on the formula used by U.S. News.
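To see why this tactic pays off, consider a sketch with made-up numbers: rejecting strong applicants who are assumed to enrol elsewhere lowers the published acceptance rate, even though the entering class barely changes.

```python
# Toy illustration with hypothetical numbers: "yield protection" lowers the
# acceptance rate used in rankings without changing who actually enrols.

def acceptance_rate(admitted, applicants):
    """Acceptance rate as used in rankings: offers divided by applications."""
    return admitted / applicants

applicants = 10_000        # applications to Countryside Tech (made up)
admitted = 5_000           # offers under an honest policy
likely_to_decline = 500    # strong applicants, like Sarah, assumed to go elsewhere

honest = acceptance_rate(admitted, applicants)
gamed = acceptance_rate(admitted - likely_to_decline, applicants)

print(f"Honest policy: {honest:.0%} acceptance rate")  # 50%
print(f"Gamed policy:  {gamed:.0%} acceptance rate")   # 45%, Tech looks more exclusive
```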


In Australia, the U.S. News rankings are little known, but other systems, ranking universities across the globe, are influential. In order to boost their rankings, some universities hire academic stars whose publications receive numerous citations. A higher ranking leads to positive publicity that attracts more students, bringing in more income. Many students mistakenly believe a higher ranking university will provide a better education, not realising that the academic stars hired to increase scholarly productivity are not necessarily good teachers. Indeed, many of them do no teaching at all. Putting a priority on hiring them means superb teachers are passed over and money is removed from teaching budgets.

WMDs

The story of U.S. News university rankings comes from an important new book by Cathy O’Neil, Weapons of Math Destruction. O’Neil started off as a pure mathematician teaching in a US university, then decided to enter the private sector where she could do something more practical as a “data scientist.” Working for a hedge fund and then some start-ups, she soon discovered that the practical uses of data analysis and mathematical models were damaging to many ordinary people, especially those who are disadvantaged. She wrote Weapons of Math Destruction to expose the misuses of mathematical modelling in a range of sectors, including education, personal finance, policing, health and voting.

A model is just a representation of a bigger reality, and a mathematical model is one that uses numbers and equations to represent relationships. For example, a map is a representation of a territory, and usually there’s nothing wrong with a map unless it’s inaccurate or gives a misleading impression.


The models that O’Neil is concerned about deal with people and affect their lives, often in damaging ways. The model used by U.S. News, because it was taken so seriously by so many people, has distorted decisions by university administrators and harmed some students.

“Our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics.” (p. 21)

Another example is a model used to allocate police to different parts of a city. By collecting data about past crimes and other factors supposedly correlated with crime, the model identifies areas deemed to be at risk and therefore appropriate for more intensive policing.


This sounds plausible in the abstract, but in practice in the US the result is racially discriminatory even if the police themselves are unprejudiced. Historically, there have been more crimes in disadvantaged areas heavily populated by racial minorities. Putting more police in those areas means even more transgressions are discovered – everything from possession of illegal drugs to malfunctioning cars – and this leads to more arrests of people in these areas, perpetuating their disadvantage. Meanwhile, crimes that cannot be pinned to a neighbourhood, including the financial crimes of the rich and powerful, are ignored.
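The feedback loop can be made concrete with a deliberately crude simulation – not O’Neil’s analysis or any real predictive-policing product, just the bare logic. Two neighbourhoods behave identically, but one starts with more recorded crime, so it keeps attracting more patrols and generating more recorded crime.

```python
# Crude feedback-loop sketch (not any real predictive-policing system):
# neighbourhoods A and B have the SAME underlying offence rate, but A starts
# with more recorded crime. Patrols follow recorded crime, and recorded crime
# follows patrols, so A's initial disadvantage perpetuates itself.
recorded = {"A": 120, "B": 100}   # historical recorded crime (arbitrary units)
true_offence_rate = 1.0           # identical in both neighbourhoods
total_patrols = 100

for year in range(1, 6):
    total_recorded = sum(recorded.values())
    patrols = {n: total_patrols * c / total_recorded for n, c in recorded.items()}
    # More patrols mean more offences detected, even though behaviour is identical.
    recorded = {n: true_offence_rate * patrols[n] for n in patrols}
    share = recorded["A"] / sum(recorded.values())
    print(f"Year {year}: A gets {patrols['A']:.0f}/{total_patrols} patrols "
          f"and {share:.0%} of recorded crime")
```

Even in this simplest linear version, the recorded figures never come back into line with the identical underlying behaviour.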


Not every mathematical model is harmful. O’Neil says there are three characteristics of weapons of math destruction or WMDs: opacity, damage and scale. Opacity refers to whether the model’s workings are hidden from scrutiny. If you can see how a model operates – its inputs, its algorithms, its outputs – then it can be inspected and corrected if necessary. O’Neil cites models used by professional baseball clubs to recruit players and make tactical choices during games. These models are based on publicly available data: they are transparent.

In contrast, models used in many parts of the US to judge the performance of school teachers are opaque: the data on which they are based (student test scores) are not public, the algorithm is secret, and the outcomes of decisions made on the basis of the models (including dismissing teachers who are allegedly poor performers) are not fed back to improve them.

The second feature of WMDs is damage. Baseball models are used to improve a team’s performance, so there’s little damage. Teacher performance models harm the careers and motivation of excellent teachers.

The third feature is scale. A model used in a household to decide when to spend money can, at worst, hurt the members of the household. If scaled up to the whole economy, it could have drastic effects.


O’Neil’s book is engaging. She describes her own trajectory from pure mathematician to disillusioned data scientist, and then devotes chapters to several types of WMDs, in education, advertising, criminal justice, employment, workplaces, credit ratings, insurance and voting. Without a single formula, she explains WMDs and their consequences.

The problems are likely to become worse, because data companies are collecting ever more information about individuals, everything from purchasing habits to opinions expressed on social media. Models are used because they seem to be efficient. Rather than reading 200 job applications, it is more efficient to use a computer program to read them and eliminate all but 50, which can then be read by humans. Rather than examining lots of data about a university, it is more efficient to look at its ranking. Rather than getting to know every applicant for a loan, it is more efficient to use an algorithm to assess each applicant’s credit-worthiness. But efficiency can come at a cost, including discrimination and misplaced priorities.
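The efficiency comes from turning someone’s judgements into rules. A hypothetical screening filter, reduced to a few lines, shows where the opinions hide: in the choice of keywords, the weights and the cut-off.

```python
# Hypothetical resume screen: the keywords, weights and cut-off are all
# opinions embedded in code, the kind of choice O'Neil warns about.
KEYWORDS = {"python": 3, "statistics": 2, "leadership": 1}   # arbitrary weights
CUTOFF = 4                                                   # arbitrary threshold

def score(application_text):
    text = application_text.lower()
    return sum(weight for word, weight in KEYWORDS.items() if word in text)

applications = [
    "Ten years of statistics and Python in public health",
    "Award-winning community organiser with leadership experience",
]
shortlist = [a for a in applications if score(a) >= CUTOFF]
print(shortlist)   # the second applicant never reaches a human reader
```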

My experience

Earlier in my career, I did lots of mathematical modelling. My PhD in theoretical physics at the University of Sydney was about a numerical method for solving the diffusion equation, applied to the movement of nitrogen oxides introduced into the stratosphere. I also wrote computer programmes for stratospheric ozone photochemistry and related topics. My initial PhD supervisor, Bob May, was at the time entering the field of mathematical ecology, and I helped with some of his calculations. Bob made me co-author of a paper on a model showing the effect of interactions between voters.
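For readers curious about what such a calculation involves, here is a generic sketch – not the method from my thesis – of explicit finite differences applied to the one-dimensional diffusion equation for a trace species spreading vertically.

```python
# Generic sketch (not the specific method from the thesis): explicit finite
# differences for the 1-D diffusion equation dc/dt = K * d2c/dz2, describing
# a trace species such as nitrogen oxides mixing vertically.
import numpy as np

K = 10.0                       # eddy diffusion coefficient (m^2/s, illustrative)
dz, dt = 1000.0, 10_000.0      # grid spacing (m) and time step (s)
assert K * dt / dz**2 <= 0.5   # stability condition for the explicit scheme

z = np.arange(0.0, 50_000.0, dz)               # altitude grid
c = np.exp(-((z - 20_000.0) / 3_000.0) ** 2)   # injected layer centred at 20 km

for _ in range(1000):
    # update interior points; concentrations at the boundaries stay fixed
    c[1:-1] += K * dt / dz**2 * (c[2:] - 2 * c[1:-1] + c[:-2])

print(f"Peak after mixing: {c.max():.2f} of the initial peak of 1.0")
```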

During this time, I started a critical analysis of models for calculating the effect of nitrogen oxides, from either supersonic transport aircraft or nuclear explosions, on stratospheric ozone, looking in particular at the models used by the authors of two key scientific papers. This study led eventually to my first book, The Bias of Science, in which I documented various assumptions and techniques used by the authors of these two papers, and more generally in scientific research.

While I was doing my PhD, some other students and I studied the mathematical theory of games – used in economics, international relations and other fields – and ran an informal course on the topic. This enabled me later to write a paper about the social assumptions underpinning game theory.

In the following decade, as an applied mathematician at the Australian National University, I worked on models in astrophysics and on models for incorporating wind power into electricity grids. Meanwhile, I read about biases in models used in energy policy.

I had an idea. Why not write a book or manual about mathematical modelling, showing in detail how assumptions influence everything from choices of research topics to results? My plan was to include a range of case studies. I could program some of the models and then modify parameters and algorithms, showing how the results depended on the way each model was constructed and used.
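By way of illustration – a hypothetical example, not one of the case studies I had planned – the same simple cost-benefit model can deliver opposite verdicts depending on a single assumed parameter, the discount rate.

```python
# Hypothetical example of the kind of demonstration I had in mind: one simple
# cost-benefit model, two defensible discount rates, opposite conclusions.
def net_present_value(costs, benefits, discount_rate):
    """NPV of yearly cost and benefit streams under a chosen discount rate."""
    return sum((b - c) / (1 + discount_rate) ** year
               for year, (c, b) in enumerate(zip(costs, benefits)))

costs = [100] + [2] * 29      # large up-front cost, then small running costs
benefits = [0] + [8] * 29     # benefits arrive gradually over 30 years

for rate in (0.03, 0.08):     # both rates appear in real policy analyses
    npv = net_present_value(costs, benefits, rate)
    verdict = "worthwhile" if npv > 0 else "not worthwhile"
    print(f"Discount rate {rate:.0%}: NPV = {npv:+.1f} -> {verdict}")
```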

However, other projects took priority, and all I could accomplish was writing a single article, without any detailed examples. For years I regretted not having written a full critique of mathematical modelling. After obtaining a job in social science at the University of Wollongong, I soon discontinued my programming work and before long was too out of touch to undertake the critique I had in mind.

I still think such a critique would be worthwhile, but it would have quite a limited audience. Few readers want to delve into the technical details of a mathematical model on a topic they know little about. If I were starting today, it would be more illuminating to develop several interactive models, with the user being able to alter parameters and algorithms and see outcomes. What I had in mind, decades ago, would have been static and less effective.

What Cathy O’Neil has done in Weapons of Math Destruction is far more useful. Rather than provide mathematical details, she writes for a general audience by focusing on the uses of models. Rather than looking at models that are the subject of technical disputes in scientific fields, she examines models affecting people in their daily lives.

Weapons of Math Destruction is itself an exemplar – a model of the sort to be emulated – of engaged critique. It shows the importance of people with specialist skills and insider knowledge sharing their insights with wider audiences. Her story is vitally important, and so is her example in showing how to tell it.

“That’s a problem, because scientists need this error feedback – in this case the presence of false negatives – to delve into forensic analysis and figure out what went wrong, what was misread, what data was ignored. It’s how systems learn and get smarter. Yet as we’ve seen, loads of WMDs, from recidivism models to teacher scores, blithely generate their own reality. Managers assume that the scores are true enough to be useful, and the algorithm makes tough decisions easy. They can fire employees and cut costs and blame their decisions on an objective number, whether it’s accurate or not.” (p. 133)


Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (London: Allen Lane, 2016)

Brian Martin
bmartin@uow.edu.au