Reflecting on the UK’s Algorithmic Grading System and Administrative Automated Decision Making

August 21, 2020 | Divij Joshi


London in August 2020 saw scenes that would not be out of place in science fiction. Protesters gathered outside the country’s Department for Education and rallied to ‘dismantle the algorithm’. The algorithm in question was a statistical model designed to standardise grades for the country’s GCSE and A-level examinations (the latter being the rough equivalent of Class 12 board exams in India).


The decision to allocate standardisation responsibilities to an algorithmic system was made after final examinations were cancelled owing to the COVID-19 pandemic. The alternative implemented by the UK Government was a statistical model which took as inputs, among other things, historical information about the general performance of each teaching centre and the performance of individual students, and used these to predict final examination grades for each student and to adjust the grades predicted by teachers (a good explanation of the rather complicated model employed is available here). The process resulted in the algorithm being characterised as unfair, particularly as it systematically downgraded student performance. It also seemed to favour students from independent schools, who are primarily from privileged backgrounds, thereby disadvantaging students from state schools.
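To make the mechanics concrete, here is a minimal sketch of rank-based moderation of the kind described above: students are ranked within a centre by teacher assessment, and grades are then allocated so that the centre’s distribution matches its historical one. This is a toy illustration under stated assumptions, not Ofqual’s actual model; the function, names, and data below are invented for exposition.

```python
# A toy sketch of rank-based grade moderation, loosely inspired by
# public descriptions of Ofqual's approach. NOT the actual model:
# the allocation rule, names, and data here are illustrative only.

def moderate_grades(teacher_ranking, historical_distribution):
    """Allocate grades so a centre's cohort matches its historical
    grade distribution.

    teacher_ranking: students ordered best-first by teacher assessment.
    historical_distribution: {grade: fraction of past students},
    listed best-first, fractions summing to 1.
    """
    n = len(teacher_ranking)
    allocated = {}
    position = 0
    for grade, fraction in historical_distribution.items():
        count = round(fraction * n)
        for student in teacher_ranking[position:position + count]:
            allocated[student] = grade
        position += count
    # Students left unassigned by rounding receive the lowest grade.
    lowest_grade = list(historical_distribution)[-1]
    for student in teacher_ranking[position:]:
        allocated[student] = lowest_grade
    return allocated


ranking = ["Asha", "Ben", "Chloe", "Dev"]    # best-first, per teachers
history = {"A": 0.25, "B": 0.25, "C": 0.50}  # centre never produced an A*
print(moderate_grades(ranking, history))
# {'Asha': 'A', 'Ben': 'B', 'Chloe': 'C', 'Dev': 'C'}
```

Note what this design choice does: even if teachers had predicted an A* for the strongest student, the centre’s history caps the best available grade at A.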


The story is a familiar one: we turn to algorithmic decision-making technologies to augment or replace our own decision-making capacities, on the assumption that data science and statistical tools can deliver efficient and fair outcomes. Algorithmic processes that employ statistical methods to predict future outcomes formalise certain assumptions and make explicit trade-offs about which information, which factors, and which outcomes are prioritised in a decision-making system. For example, algorithms which take historical data into account can end up formalising and standardising historical bias or discrimination in their application. Similarly, the choice of objective function, that is, the values for which an algorithm is optimised, can also formalise biased or unfair decisions, as in Ofqual’s model, which was optimised to preserve each centre’s historical grade distribution and therefore systematically downgraded students instead of grading them upwards.
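To see how the choice of objective plays out, the snippet below, again using invented data rather than Ofqual’s, counts how many students a distribution-matching objective downgrades relative to teacher predictions; the moderated grades are exactly what the `moderate_grades` sketch above produces for this cohort.

```python
# Hypothetical data: a cohort teachers judged stronger than the
# centre's history. Names, grades, and the point scale are invented.
GRADE_POINTS = {"A*": 5, "A": 4, "B": 3, "C": 2, "D": 1}

teacher_predicted = {"Asha": "A*", "Ben": "A", "Chloe": "B", "Dev": "B"}
moderated = {"Asha": "A", "Ben": "B", "Chloe": "C", "Dev": "C"}

def count_downgrades(predicted, awarded):
    """Number of students awarded a grade below the teacher's prediction."""
    return sum(
        GRADE_POINTS[awarded[s]] < GRADE_POINTS[p]
        for s, p in predicted.items()
    )

print(count_downgrades(teacher_predicted, moderated))  # 4 of 4 downgraded
```

An objective optimised for continuity with the past, rather than fidelity to teacher judgment, will push every member of a stronger-than-usual cohort downwards, which is one way historical bias gets formalised.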


Governments around the world, including in India, have been deploying similar algorithmic and automated decision-making systems. The UK Government’s process was, compared to how such decisions are made in other contexts, relatively fair and transparent, incorporating consultation and expertise, and yet it still fell short of democratic expectations of administrative systems. There are important lessons to be learned from this, including how to design automated processes to achieve democratically acceptable outcomes, as well as whether, in a given context, the decision to automate should be taken at all. Further, within the context of administrative decisions in particular, there is a need to reflect on how values of due process and natural justice play out when individual decisions are made by algorithmic systems deployed at scale. In the next blog, we will discuss the emerging area of ‘technological due process’ in some more detail.
