Algorithmic Fairness and Anti-Discrimination Law

November 2, 2020 | Divij Joshi


We have previously discussed how algorithmic systems used in decision-making implicate different legal norms, from data protection to intellectual property. This post examines an important emerging area of interaction between legal systems and algorithms – discrimination and equality law.


Algorithmic systems are routinely used to produce classifications as outputs. These outputs are generated by processing information through a particular computational logic encoded within the system. Every element of an algorithmic system can reflect particular political choices and, in turn, lead to persistent biases in the system's outputs and applications. Algorithmic systems can thereby embed biases that lead to potentially unlawful forms of discrimination against groups or classes protected by anti-discrimination laws or rights to equal treatment (as under, for example, Articles 14 and 15 of the Constitution of India). However, locating discrimination within algorithmic classifications raises peculiar challenges which discrimination law urgently needs to contend with.


Consider, for example, the widely cited case of the COMPAS recidivism risk algorithm used by certain states in the USA to inform decisions on whether a person should be granted parole or bail. The algorithm drew on more than 100 data points about incarcerated persons, including their criminal history and the nature of the offence. A particular algorithmic logic produced scores for 'risk of recidivism' and 'risk of violent recidivism', along with a corresponding risk classification, which judges could rely on in determining parole. An independent investigation by ProPublica subsequently found that the software persistently classified black defendants as more likely to reoffend than white defendants, in a manner that could be deemed discriminatory: black defendants who did not go on to reoffend were far more likely to be wrongly labelled high risk.
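To make that intuition concrete, here is a minimal sketch in Python, using entirely invented numbers rather than the actual COMPAS data, of how error rates can diverge across groups even when the same model and threshold are applied to everyone: a system can look accurate overall while wrongly flagging one group as high risk far more often than another.

```python
# Minimal sketch with invented data (not COMPAS): comparing false positive
# rates, i.e. people wrongly flagged as high risk, across two groups.

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  True), ("A", False, False), ("A", True,  False),
    ("B", False, False), ("B", True,  True), ("B", False, False), ("B", False, True),
]

def false_positive_rate(rows):
    """Share of people who did not reoffend but were still flagged as high risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    if not non_reoffenders:
        return 0.0
    wrongly_flagged = [r for r in non_reoffenders if r[1]]
    return len(wrongly_flagged) / len(non_reoffenders)

for group in ("A", "B"):
    group_rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(group_rows), 2))
# Prints 0.67 for group A and 0.0 for group B: the burden of wrong high-risk
# classifications falls disproportionately on one group.
```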


Similar examples of discriminatory treatment within algorithmic systems abound. In India's Aadhaar biometric authentication system, fingerprint authentication has been found to persistently fail for manual labourers and the elderly, whose fingerprints are often worn, effectively discriminating against them. In facial recognition systems used in the USA, researchers have found evidence of misrecognition and misclassification along racial lines.


Each of these examples raises issues that go to the core of equality and discrimination law. What constitutes a fair classification when it is produced by an algorithmic system rather than human intuition? How should the law contend with indirect discrimination or discrimination by proxy, where discrimination occurs through data or metrics which are not themselves protected under the law but which correlate with protected attributes, a common feature of algorithmic systems that rely on profiling? Indeed, as recent debates indicate, it is remarkably difficult to agree even on a legal or moral definition of fairness.
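The problem of proxies, in particular, is easy to state in code. The sketch below (Python, with invented data and a hypothetical 'postal code' feature) illustrates how a facially neutral variable that is strongly correlated with group membership can reproduce a protected classification even when the protected attribute itself is excluded from the system.

```python
# Minimal sketch with invented data: a facially neutral feature (postal code)
# acting as a proxy for a protected attribute.
from collections import Counter

people = [
    # (postal_code, protected_group)
    ("110001", "X"), ("110001", "X"), ("110001", "X"), ("110001", "Y"),
    ("560038", "Y"), ("560038", "Y"), ("560038", "Y"), ("560038", "X"),
]

# Group individuals by the "neutral" feature and see how well it recovers
# the protected attribute.
by_code = {}
for code, group in people:
    by_code.setdefault(code, []).append(group)

correctly_inferred = sum(
    Counter(groups).most_common(1)[0][1] for groups in by_code.values()
)
print("Protected group recoverable from postal code alone:",
      correctly_inferred / len(people))
# Prints 0.75: a rule keyed only to postal codes largely reproduces the protected
# classification, which is why simply removing the protected attribute does not
# prevent indirect discrimination.
```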


As data-based decision-making systems and algorithmic logic proliferate, increasing attention is being paid to the legal consequences of discriminatory algorithmic systems. In the USA, scholars have argued that the legal frameworks of 'disparate treatment' and 'disparate impact', used to judge discriminatory decisions, can be reoriented for algorithmic systems. In the EU, scholars have argued that the more 'intuitive' approaches toward anti-discrimination within human rights legislation should be supplemented with examinations of statistical fairness.
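As one example of what a statistical supplement to these doctrines can look like, the sketch below (Python, with invented figures) applies the 'four-fifths' screen sometimes used in the US employment context as a rough flag for disparate impact; the 80% threshold and the numbers are illustrative only, not a statement of Indian or EU law.

```python
# Illustrative sketch of the "four-fifths" (80%) screen used in the US
# employment context as a rough flag for disparate impact. Figures are invented.

def selection_rate(selected, total):
    """Proportion of applicants from a group who receive a favourable outcome."""
    return selected / total

# Hypothetical outcomes of an automated screening tool:
rate_reference = selection_rate(selected=30, total=100)  # reference group
rate_protected = selection_rate(selected=18, total=100)  # protected group

impact_ratio = rate_protected / rate_reference
print(f"Impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Below the four-fifths threshold: a prima facie indicator of disparate impact.")
```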


There is, unfortunately, little Indian jurisprudence that answers these questions clearly. In the Aadhaar judgement, although this argument was raised, the majority opinion set it aside without examining the merits or the legal effects of discriminatory algorithmic classification.


Translating the concepts of discrimination and equality law to algorithmic systems that make classifications is fundamental to the appropriate regulation of information and technology in India, and is a question we hope to explore in the upcoming volume on the 'Philosophy and Law of Information Regulation in India'.
