In our last post, we discussed the use of a predictive scoring algorithm in the UK Government’s education system, and briefly underlined some of the reasons why we should be concerned about the use of algorithmic systems in administrative processes. While forms of computerised algorithmic systems have been a part of administrative decision-making for decades, there appears to be a new ‘computational wave’ in government, characterised by an increasing reliance on such systems to make consequential decisions for citizens in areas ranging from finance to welfare to policing.
The systems being introduced are growing increasingly sophisticated and integral to the core administrative functions of making and enforcing rules, and, as the UK’s example indicates, can subvert citizens’ expectations of fairness and accountability in bureaucratic action. These rights and expectations in the fair and accountable functioning of government bureaucracy are particularly germane to administrative law, and are encapsulated in the principles of administrative due process.
There is an important body of legal scholarship that explores the relationship between algorithmic systems and administrative law. An early exploration of this link is Paul Schwartz’s 1992 paper, ‘Data Processing and Government Administration: The Failure of the American Legal Response to the Computer’, in which Schwartz documents how computer-assisted decision-making in the USA affects what he terms ‘bureaucratic justice’, namely, the expectation that bureaucratic decision-making is accurate, effective, and furthers human dignity. The last element, in particular, is undermined by computerised decisions, which process information in the absence of clear rules on the transparency of its collection, use, or access. In response, Schwartz proposes procedural rights for individuals affected by such automated decisions: rights of notice, access to information, provision of justifications, and the ability to appeal decisions.
The intervening period has seen a dramatic rise in the ways and means by which information is collected and used by bureaucracies, enabled by vast computational power and communication networks. The forms in which such information is processed have also changed significantly, with algorithmic processes becoming increasingly complex, particularly through the use of profiling, inferential, and predictive statistical techniques, including machine learning models, to sort, score, and classify the subjects of bureaucratic action.
Danielle Keats Citron’s 2008 article in the Washington University Law Review, ‘Technological Due Process’, is a brilliant exposition of how such systems affect principles of administrative law. In particular, the article argues that code-enabled administrative decisions imperil due process in administrative adjudication and the review of decisions, and also distort rule-making procedures. Citron argues that algorithmic systems formally encode policy, and in the process forsake important elements of the rule-making process in bureaucratic action, including receiving public input and allowing for discretion, both key elements of administrative accountability.
Administrative law’s focus on fair processes to achieve fair outcomes is central to this line of scholarship, and is reflected in legal and constitutional principles in almost every jurisdiction, including extensively within Indian constitutional jurisprudence. Translating (or rebuilding) the principles of accountability and transparency in process, which are central to administrative decision-making, into our interactions with information systems can be key to mitigating the concerns of justice and equity that arise in the context of emerging technologies like machine learning. In the next post, we will focus specifically on work at the intersection of machine-learning technologies and administrative law, an area of growing concern for legal systems around the world.