Abstract

Algorithms can predict the risk that a given offender will reoffend, on the basis of statistical observations about the relationship between certain attributes and proxies for criminal behavior (e.g., arrest or conviction). These predictions inform a range of decisions within criminal justice systems across the world, including sentencing, security classification, and parole. The variables that can increase an individual’s risk score—which are not widely publicized by those who develop or use risk tools—include financial stress, the criminality of friends or family, parental neglect, and experiences of domestic violence. In this way, algorithms may treat individuals as predestined to commit crime because of factors that they cannot influence.

For many of us, policies of predictive punishment cause an intuitive discomfort, which is often captured through the language of “algorithmic inequality.” I argue here that this focus reveals only part of what is at stake for individuals when risk tools are used to dispense criminal justice. There are both instrumental and non-instrumental reasons to want decisions about institutional punishment to be responsive to our choices—specifically, to how we behave when faced with different options that we have the knowledge and resources to pursue. These reasons constitute a powerful case for limiting the variables that can influence punitive decisions.
