Image source: Wikimedia Commons; image for representational purposes only.
In the last post, we wrote about the use of algorithmic systems within government administration, the concerns that these pose to claims of due process, and how we might reinvigorate due process in the face of algorithmic systems. In this post, we will examine these claims in light of a specific form of automated decision-making which utilises machine learning techniques and the particular concerns that contemporary machine learning systems pose to claims of administrative justice.
A classical definition of a machine learning algorithm is provided by Tom Mitchell, who states that “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.” Contemporary algorithmic systems can identify patterns and statistical correlations within extremely large datasets. The patterns and correlations identified in one dataset are then applied to new data points or datasets in order to ‘predict’ the future behaviour of those data points, as learned from the earlier data. These ‘data points’ can be abstractions of almost any phenomenon, from information about the weather to the behaviour of individuals or groups.
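Mitchell's framing can be made concrete with a minimal sketch. The code below is purely illustrative (the data and the nearest-centroid classifier are invented for this example, not drawn from any real administrative system): the task T is classifying a one-dimensional data point, the experience E is a set of labelled examples, and the performance measure P is accuracy on held-out examples.

```python
def train(examples):
    """Learn one centroid (mean) per class from labelled examples (experience E)."""
    sums, counts = {}, {}
    for x, label in examples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Task T: assign x to the class whose learned centroid is nearest."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

def accuracy(centroids, examples):
    """Performance measure P: fraction of examples classified correctly."""
    correct = sum(predict(centroids, x) == y for x, y in examples)
    return correct / len(examples)

# Invented training data (experience E) for two well-separated classes.
train_data = [(1.0, 0), (1.5, 0), (2.0, 0), (8.0, 1), (8.5, 1), (9.0, 1)]
test_data = [(1.2, 0), (8.8, 1)]

model = train(train_data)
print(accuracy(model, test_data))  # → 1.0
```

The point of the sketch is that nothing is hand-coded about the classes themselves: the decision rule is derived entirely from the patterns in the prior data, which is precisely what makes the resulting ‘predictions’ dependent on what that data happens to contain.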
Machine learning systems are now commonly used in administrative settings. In some criminal justice systems, for example, courts are known to use ‘recidivism risk’ algorithms which purport to classify individuals by their propensity to commit crime, informing bail decisions made by judges. Similarly, tax and welfare administrations around the world are increasingly adopting risk classification algorithms to ‘identify’ potentially fraudulent behaviour among their target populations.
Machine learning systems present new challenges for ensuring justice within administrative establishments. In particular, given the scale of the data involved and the inherent opacity of the decision-making logics that ML systems apply, these systems exacerbate problems of notice and transparency in decision-making, both of which are integral to due process claims. Additionally, basing decisions on historical data and pre-determined parameters creates tensions with the right to a fair hearing, and particularly the right against prejudiced adjudication. ML-based profiling tends to exacerbate and reify biases against specific groups: the patterns and correlations it finds may act as proxies for historically protected attributes, yet may not trigger the legal protections that apply to classifications based expressly on those attributes – a further attack on principles of unbiased administrative decision-making.
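The proxy problem can be seen in a toy example. In the sketch below, everything is invented for illustration: the protected attribute never appears in the data the scoring function sees, but the ‘neighbourhood’ feature correlates with group membership, so a simple historical flag-rate score reproduces the past disparity anyway.

```python
# Invented historical records: (neighbourhood, flagged_as_risk).
# Suppose group A lives mostly in "n1" and group B mostly in "n2",
# and past flagging was biased against group B. The protected
# attribute itself is deliberately absent from the records.
records = [
    ("n1", 0), ("n1", 0), ("n1", 0), ("n1", 1),
    ("n2", 1), ("n2", 1), ("n2", 1), ("n2", 0),
]

def risk_score(records, neighbourhood):
    """Estimate risk as the historical flag rate in that neighbourhood."""
    flags = [flag for n, flag in records if n == neighbourhood]
    return sum(flags) / len(flags)

print(risk_score(records, "n1"))  # → 0.25
print(risk_score(records, "n2"))  # → 0.75
```

Although no protected attribute is ever consulted, residents of "n2" receive three times the risk score of residents of "n1", simply because the correlated feature carries the historical bias forward – which is why excluding protected attributes from the input data does not, by itself, prevent discriminatory classification.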
The rise of profiling and classification systems has occurred concurrently with the development of machine learning, and these systems are gradually displacing important norms of administrative justice and due process. In response, regulatory developments are attempting to re-centre fair procedure for automated, data-driven decisions which affect individuals.
Consider Article 22 of the European General Data Protection Regulation (GDPR), which explicitly provides a right against ‘solely’ automated decision-making which affects the legal rights of a data subject. Rights of fair notice and justification are also included in the GDPR, which require, for example, that a statement of the logic of an automated decision be provided to the affected party, along with notice of the subject's data used in reaching that decision. Further, Article 22 explicitly provides a right to a fair hearing and to obtain human intervention to contest an automated decision.
Even so, these provisions have been of limited value in re-centring due process within algorithmic decision-making, and there is an urgent need to investigate and reinvigorate the application of administrative justice and due process for the era of machine learning.