In this series of blogs, we have been exploring different conceptual and theoretical approaches to information privacy. In the last post, we explored Warren and Brandeis's influential, historical argument in their paper ‘The Right to Privacy’, written at a time when anxieties about photographic and print technologies were prevalent. In this post, we examine some of the anxieties and concerns that contemporary data science methods and technologies like machine learning pose for privacy, and theoretical responses to these anxieties in Mireille Hildebrandt’s 2019 paper, ‘Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning’ (Theoretical Inquiries in Law, 20, 83–121).
The right to privacy has firmly re-established itself in the constitutional lexicon following the nine-judge decision in KS Puttaswamy v Union of India. The re-emergence of privacy as an area of constitutional interest has no doubt been informed by anxieties about technological developments – particularly digital communication and information technologies. The internet and related technologies have renewed concerns about how information flows can affect fundamental individual and societal interests – articulated as the ‘right to be let alone’, or the right to self-determination, among others. In the next few blogs, we will examine the theoretical constructs of a ‘right to privacy’, its relationship to the right as recognised under Indian law, and its implications for information regulation and governance going forward.