In the last few posts, we posed some questions about algorithms as artefacts of governance, including the implications of different forms of algorithms embedded in computing and information infrastructure, their relationships with governing and administration, and their relevance for specific legal domains. Here, we share some readings on critically studying, understanding and critiquing algorithmic systems that are particularly relevant for lawyers.
Gillespie makes the case for scholars of technology and society to consider and critically engage with algorithms as artefacts that shape our information environment. He argues that, from shaping the way information is produced to be ‘algorithm ready’ to the manner in which algorithmically produced knowledge shapes public consciousness, algorithms are increasingly central to our networked society.
Barocas and Selbst’s paper explores how algorithms can reproduce and reify patterns of historical discrimination, while potentially obscuring these patterns and shielding them from existing modes of legal action or criticism. The paper is situated in US employment and anti-discrimination law, but it offers important lessons for understanding the challenges legal systems face when assessing new modes of discrimination produced by algorithms.
Ohm and Lehr’s paper examines in detail the various stages of the lifecycle of a machine-learning system and argues that lawyers attempting to understand the implications of machine-learning algorithms must pay careful attention to each of these stages, rather than looking merely at the source data or at the contextual implications and aftermath of the decision.
The General Data Protection Regulation is one of the first legislative efforts to specifically consider the effects of algorithmic decisions and to incorporate safeguards against their perceived effects on individual rights. In particular, the GDPR grants data subjects certain rights to obtain information about the logic of algorithmic decisions. This has spurred an important legal discussion on what meaningful transparency and explainability of algorithmic systems can look like, and on whether the standards of ‘explainability’ of decision-making – routinely invoked in discussions of administrative decisions – are met within the main and supplementary texts of the GDPR. This article provides an overview of that discussion and argues for reading the GDPR’s text as providing a right to explanation.