Artificial Intelligence and Judicial Bias

August 28, 2021



On April 6, 2021, then Chief Justice of India S.A. Bobde introduced the Supreme Court Portal for Assistance in Court's Efficiency (SUPACE). SUPACE is an artificial intelligence portal designed to make the facts and laws relevant to the matter being heard available to the judge. As a research tool for judges, its ostensible purpose is to improve the Court's efficiency and, eventually, to help reduce the pendency of cases.


CJI Bobde clarified that SUPACE would not spill over into judicial decision-making; its functioning would be restricted to data collection and analysis. However, even a system confined to organising and disseminating information, with no direct bearing on decisions, can have damaging consequences. A major overarching concern with AI in justice systems is the possibility of bias. Even if the programmers did not intentionally code bias into the algorithm, the data used to produce results often reflects systemic biases that already exist.
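To make this concrete, here is a toy sketch in Python, with invented numbers rather than any real system's data or pipeline: if one group's offences were historically recorded more often, even the simplest possible model trained on those records will score that group as higher risk, despite identical underlying offending rates.

import random

random.seed(0)

TRUE_OFFEND_RATE = 0.30                # identical for both groups by construction
OVER_POLICING = {"A": 0.9, "B": 0.5}   # chance an offence is actually recorded

def make_record(group):
    offended = random.random() < TRUE_OFFEND_RATE
    recorded = offended and random.random() < OVER_POLICING[group]
    return group, recorded

data = [make_record(g) for g in ("A", "B") for _ in range(10_000)]

# "Training" here is just estimating a per-group base rate -- the simplest
# possible model -- yet it already inherits the recording bias.
for g in ("A", "B"):
    rows = [rec for grp, rec in data if grp == g]
    print(f"group {g}: learned risk score = {sum(rows) / len(rows):.2f}")

# Both groups offend at the same true rate (0.30), but the learned score for
# group A is markedly higher simply because its offences were recorded more often.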


The non-profit newsroom ProPublica analysed the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system in the US. This AI tool assesses the risk of recidivism, i.e., the tendency of a convicted criminal to re-offend upon release, to inform decisions on pre-trial detention, sentencing and early release. ProPublica found that COMPAS systematically overestimated the recidivism risk of African-American offenders, while white offenders who re-offended within two years were mistakenly labelled low-risk nearly twice as often.
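At its core, ProPublica's audit compared error rates across demographic groups. A minimal sketch of that kind of check, run on invented stand-in records rather than ProPublica's actual data, might look like this:

from dataclasses import dataclass

@dataclass
class Defendant:
    group: str        # demographic group
    high_risk: bool   # tool's prediction
    reoffended: bool  # actual outcome within two years

defendants = [
    Defendant("black", True, False), Defendant("black", True, True),
    Defendant("black", False, False), Defendant("black", True, False),
    Defendant("white", False, True), Defendant("white", False, False),
    Defendant("white", True, True), Defendant("white", False, True),
]

def error_rates(records):
    fp = sum(r.high_risk and not r.reoffended for r in records)
    fn = sum(not r.high_risk and r.reoffended for r in records)
    negatives = sum(not r.reoffended for r in records)  # did not re-offend
    positives = sum(r.reoffended for r in records)      # did re-offend
    return fp / negatives, fn / positives

for group in ("black", "white"):
    fpr, fnr = error_rates([r for r in defendants if r.group == group])
    print(f"{group}: false positive rate={fpr:.0%}, false negative rate={fnr:.0%}")

On these toy records, the pattern mirrors ProPublica's finding: one group bears most of the false positives, the other most of the false negatives.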


COMPAS is a proprietary system developed by the company Northpointe, and the rationale behind its algorithm was never made public. Given the legal protections afforded to private companies over their trade secrets, this lack of transparency seriously undermines an individual's ability to understand, let alone challenge, a decision.


Dr. Dory Reiling, a former senior judge and Council of Europe expert on information technologies and the courts, is a major proponent of allowing external audits to ensure such transparency and accountability. In her words, AI should be able to explain how it came to a result, both in terms of how the information was processed and by making a substantive explanation available.
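What such a substantive explanation might look like in the simplest case is sketched below: a transparent linear score whose per-factor contributions can be printed, inspected and challenged. The weights and factors here are invented for illustration; real risk tools are far more complex, which is precisely why audits matter.

# Hypothetical weights for a transparent, auditable linear score.
WEIGHTS = {"prior_convictions": 0.8, "age_under_25": 0.5, "employed": -0.6}

def explain_score(features):
    # Each factor's contribution is its weight times its value.
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    total = sum(contributions.values())
    print(f"score = {total:+.2f}")
    for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name:>18}: {c:+.2f}")
    return total

explain_score({"prior_convictions": 2, "age_under_25": 1, "employed": 1})
# prior_convictions: +1.60, age_under_25: +0.50, employed: -0.60 -> score +1.50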


This highlights the need for human control at every stage of AI processing, at least as the technology currently stands. As Dr. Reiling puts it, an AI's algorithm cannot be the sole deciding factor: judges must remain in control and retain the ability to deviate from its output.
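As a rough sketch of what that control could mean in practice (all names and fields here are hypothetical), the system's output would be recorded as advice only, the judge's decision would be controlling, and any deviation would be logged for later audit:

from datetime import datetime, timezone

audit_log = []

def record_decision(case_id, ai_suggestion, judge_decision, reason=""):
    entry = {
        "case": case_id,
        "ai_suggestion": ai_suggestion,
        "judge_decision": judge_decision,  # always controlling
        "overridden": ai_suggestion != judge_decision,
        "reason": reason,
        "time": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(entry)
    return judge_decision  # only the human decision takes effect

record_decision("2021/123", ai_suggestion="detain", judge_decision="release",
                reason="AI score relied on outdated employment data")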


Any integration of AI into the Indian judicial system, whether minor or substantive, must therefore be undertaken with care and with deep engagement with its potential pitfalls and unintended consequences. Further, we must closely study and learn from the experience of other judicial systems that have adopted AI.


This blog was authored by Ajoy. He works at the Supreme Court Observer (SCO).