Privacy Theory 101: Profiling, Prediction, and Hildebrandt’s Theory of Privacy as Protection of the Incomputable Self

October 9, 2020 | Divij Joshi


Source: Wikimedia Commons; image for representational purposes only


In this series of blogs, we have been exploring different conceptual and theoretical approaches to information privacy. In the last post, we looked at an influential historical argument made by Warren and Brandeis in their paper on the ‘Right to Privacy’, written at a time when anxieties about photographic and print technologies were prevalent. In this post, we examine some of the anxieties and concerns that contemporary data science methods and technologies like machine learning raise for privacy, and the theoretical responses to these anxieties in Mireille Hildebrandt’s 2019 paper, ‘Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning’ (Theoretical Inquiries in Law, 20, 83–121).


The starting point for Hildebrandt’s inquiry is the datafied and automated environment in which we find ourselves, an environment produced by automated decision-making systems and ‘Artificial Intelligence’ that employ statistical methods like machine learning to make predictions and inferences about human behaviour.
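
To make the kind of profiling at issue concrete, here is a minimal, purely illustrative sketch in Python. It is not drawn from Hildebrandt’s paper; the behavioural features, labels and figures are invented. A classifier is trained on ‘datafied’ behavioural traces and then infers an attribute about a new person from their trace alone.

    # Illustrative toy example only: a simple profiling model trained on invented behavioural data.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical "datafied" behavioural traces: [pages_visited_per_day, avg_session_minutes]
    X = [[5, 2], [40, 30], [8, 4], [55, 45], [3, 1], [60, 50]]
    # Hypothetical attribute the system wants to infer (1 = "likely to respond to an ad")
    y = [0, 1, 0, 1, 0, 1]

    model = LogisticRegression().fit(X, y)

    # A new person is profiled ("computed") from their behavioural trace alone
    print(model.predict_proba([[45, 35]]))  # probabilities the model assigns to each label

On Hildebrandt’s account, the worry is less whether such a model is accurate than that our environments are increasingly arranged so that behaviour becomes legible to systems of this kind.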


The paper argues that these technologies potentially disturb our sense of self by essentialising human nature through computation and data. This argument rests on two grounds. First, human nature and identity are always being constituted and reinvented, and are never stable or fixed. Second, human action and behaviour are not only representational but also constitutive: legal declarations, for instance, create shared understandings of the world around us. Given this relational nature of human behaviour, Hildebrandt argues, no ‘true’ self can emerge through computation or datafication. Machine learning technologies are therefore inherently ‘limited’, in the sense that they attempt to formalise behaviour and attributes that are necessarily incomputable and unknowable through such techniques.


The paper goes on to argue for an ecological approach to privacy. Privacy, on this view, should not be understood only in terms of the interplay between new technologies and methods of surveillance on one side and individual efforts at data control and protection on the other. Rather, new technologies of datafication and quantification should be approached at an ontological level: they remake our shared environments and shape our behaviours so that those behaviours can be datafied, captured and computed. Technologies that rely increasingly upon inferential and predictive quantification also remake themselves, and consequently reconfigure our environments to reflect what is computed. Hildebrandt argues that this configuration of our environment threatens our incomputable ‘inner self’ by requiring us to internalise the logic of the computational systems operating around us, and to adapt our behaviour to the environment these systems deem most suitable. This conflicts with the common understanding that privacy protects the core of an individual’s inner self, where she is free to think and act according to her own intentions and beliefs.


Drawing on this perspective, the paper argues that machine learning processes can never be ‘agnostic’ or ‘unbiased’, and that machine learning techniques, and the norms regulating them, should instead incorporate forms of agonism: the ability to challenge, contest and respond to machine learning’s inherently and necessarily biased nature. Hildebrandt’s conception of privacy is an important theoretical contribution to how we should think about and frame the norms around technologies of prediction and inference as they increasingly encroach upon our online and sensored environments.
