Algorithms, Impartiality, and Judicial Discretion

There are many reasons to worry about judicial discretion in the context of sentencing, and developments in the psychology of judgment and decision-making cast doubt on the idea that sentencing is an art. For example, a defendant may receive a harsher sentence simply because their case is heard later in the day. Could algorithms be better than judges? Perhaps in one respect: “impartiality.”

Impartiality is often associated with a neutral, impersonal point of view, or with an observer hypothetically free of subjective biases; David Hume (1740) and Adam Smith (1759) were among the earliest proponents of such views. One dimension of impartiality is impersonality: being dispassionate or indifferent. The good judge is impartial insofar as they are not swayed by emotion and do not factor in personal considerations. An angry judge should not deliver a harsher sentence to a defendant, nor should a judge deliver a more lenient one because they and the defendant both enjoy jazz.

Another related concept held up as a virtue for a judge is “neutrality.” Thomas Nagel (borrowing terminology from Derek Parfit) helps us understand neutrality through the distinction between agent-relative and agent-neutral reasons. The basic idea is that a reason for action is agent-relative if it makes essential reference to the agent who has it, and agent-neutral if it does not. If I were a judge, I would be acting on an agent-relative reason if I delivered a harsher sentence because the defendant angered me, since my anger is a reason for me but for no one else. To act agent-neutrally, by contrast, is to act only on reasons that make no reference to any particular person. The relationship between the two is that neutrality is a necessary condition for impartiality, but neutrality on its own denotes the narrower idea of non-specificity: the reasons in play refer to no one in particular.

Algorithms can be perfectly neutral because they are not subject to emotions or other physiological limits. Vincent Chiao suggests that algorithms could be used in sentencing to combat concerns about judicial arbitrariness and bias. The result could be greater justice, by bringing sentences a little closer to the ideal of proportionality: even if an algorithm is not perfect, it could do better than judges, especially with respect to racial bias. John Hogarth attempted something like this in the 1970s and 1980s, and it largely failed because judges trusted their own discretion and intuitions over the algorithms. While there are legitimate concerns with introducing novel technologies, technophobia should not be an impediment to a more just legal system.
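To make this notion of neutrality concrete, here is a minimal, purely illustrative sketch in Python. The feature names, weights, and plea discount are invented for this post and do not correspond to any real sentencing guideline, risk tool, or the systems Hogarth built; the point is only that a deterministic function of the recorded case facts returns the same range whoever runs it, and whenever it is run.

```python
# Purely illustrative toy, not a real sentencing tool.
# All features, weights, and discounts below are invented for this sketch.

from dataclasses import dataclass


@dataclass(frozen=True)
class CaseFeatures:
    offence_severity: int     # e.g. 1 (minor) to 10 (most serious)
    prior_convictions: int
    guilty_plea: bool


def recommended_range_months(case: CaseFeatures) -> tuple[int, int]:
    """Return a (low, high) sentencing range in months.

    The output depends only on the recorded case features -- not on the
    time of day, the decision-maker's mood, or who is asking -- which is
    the narrow sense of "neutrality" discussed above.
    """
    base = case.offence_severity * 6 + case.prior_convictions * 3
    if case.guilty_plea:
        base = int(base * 0.75)  # illustrative plea discount
    return (max(base - 4, 0), base + 4)


# The same inputs always yield the same range, morning or afternoon.
case = CaseFeatures(offence_severity=4, prior_convictions=1, guilty_plea=True)
assert recommended_range_months(case) == recommended_range_months(case)
print(recommended_range_months(case))
```

Of course, neutrality of this mechanical kind says nothing about whether the chosen features and weights are themselves just, which is where the concerns below come in.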

Still, concerns about taking the human element out of judgment have some substance. Leaving aside issues of implementation, one may wonder how impartial reasoning squares with theories of punishment; in morality, after all, impartial reasoning is not always appropriate. Writing in 1793, William Godwin imagines a scenario in which one must choose to save either a chambermaid or Fenelon (the Archbishop of Cambrai) from a fire. From an impartial standpoint, the clear outcome is to save Fenelon, since his works benefit thousands. Even if the chambermaid were one’s own wife or mother, the choice would still be the archbishop. This may strike many as a morally repugnant result; indeed, feminist ethics teaches us about the importance of emotion and care in morality.

While there are a number of issues around implementing algorithms to assist the judiciary, there is clear potential for addressing access-to-justice issues. For example, predictable sentencing outcomes could level the playing field in negotiations between the Crown and the accused, increase efficiency for judges, and assist lawyers in building a case. Professor Benjamin Alarie is already involved in a company whose “AI-powered platforms accurately predict court outcomes and enable you to find relevant cases faster than ever before.” With virtual hearings already beginning at the Supreme Court of Canada, I am optimistic about the next steps in operationalizing legal technology.

Written by Dan Choi, a second year JD Candidate at Osgoode Hall Law School and an IPilogue Contributing Editor.