In her groundbreaking work ‘Science at the Bar’, STS scholar Sheila Jasanoff examined how the institutions of law and science are co-constituted and co-produced: legal institutions, from regulators to courts, legitimate certain forms of scientific production as valid or authentic by admitting them as evidence. Simultaneously, scientific institutions reshape the role and processes of legal institutions – from the considerations that go into ‘evidence-based policy’ to the construction of evidentiary ‘truth’ in the courtroom through the introduction of scientific evidence. As information systems and their regulation evolve through scientific and legal intervention, it is important to examine how this interplay between law and science affects their overlapping aim – the production of ‘fact’.
As the US Supreme Court stated in Daubert v Merrell Dow Pharmaceuticals, “there are important differences between the quest for truth in the courtroom and the quest for truth in the laboratory.” The law as an institution aims to secure justice, while science aims at producing ‘truth’. Perhaps one of the most prominent sites where these two goals intersect is the construction of ‘evidence’ at trial. As scholars of STS suggest, this process involves an interplay of institutional and disciplinary imperatives and contexts – and is hardly the ‘neutral’ or objective process that both institutions claim it to be. The production of ‘evidence’ increasingly relies on proof generated through machines and modern information systems. These so-called ‘truth machines’ include technologies like polygraph tests and DNA sampling, and now contemporary information technologies like facial recognition and risk profiling. As Jinee Lokaneeta brilliantly elucidates in her recent book ‘The Truth Machines: Policing, Violence, and Scientific Interrogations in India’, institutions of law relied on technologies like narcoanalysis, DNA profiling and polygraph tests to maintain legitimacy over violence and truth, and in the process validated the claim that these technologies produced ‘scientific evidence’. Lokaneeta analyses how these claims to ‘scientific validity’ in fact reproduced the discredited logics of third-degree torture.
Information systems embody many of the same subjectivities in their construction as earlier ‘truth machines’, serving as systems of evidence both at trial and in regulatory processes – as evidence for constructing policy, or through machine learning-based inferences in criminal trials (such as risk scores in sentencing). The law’s subjectivities are embodied within the evidence-production techniques of these technologies (for example, in using criminal records or associations as factors in bail decisions), while the normative claims of data science and information technologies are reified through their acceptance as evidence in the courtroom or in regulatory processes.
What implications does this interplay between law and information systems hold for regulatory and legal processes? For one, it asks us to reflect more critically on ‘scientific evidence’ in the courtroom. What kinds of expert evidence – including forensic evidence that uses AI and big data, such as facial recognition or DNA sampling – should be admitted at trial, and how should judges weigh it? Similarly, regulators should examine how techno-legal assemblages mediate compliance with legal norms – from machines that attempt to interpret the law to information systems that transform the principles underpinning regulation – as, for example, in the construction of Digital Rights Management in copyright law or Digital Sequence Information in biodiversity regulation.
Our upcoming publication ‘The Philosophy and Law of Information Regulation’ attempts to address some of these dilemmas produced by information systems in their interaction with law and evidence.