Non-Personal Data Regulation: Interrogating ‘Group Privacy’

July 30, 2020 | Divij Joshi


 

In the last post, we examined the Government of India’s Draft Report on Non-Personal Data, and its justifications for regulating NPD for ‘economic benefit’. This post examines the concept of ‘group privacy’, which has been presented as a justification for the regulation of NPD.

 

The Draft Report notes that “collective privacy refers to possibilities of collective harm related to Non-Personal Data about a group or community that may arise from inappropriate exposure or handling of such data.” The Draft Report goes on to recommend that the proposed regulator for NPD should consider the concept of collective privacy. While the NPD draft does not explicitly define or even substantially consider the contours of a ‘group privacy’ right, the discussion around the concept of group privacy marks an important conceptual turn away from the individual as the focus of privacy right claims and protections. Moreover, many of the Committee’s regulatory measures have been justified on the grounds of protecting the collective interests, including privacy, of communities and groups.

 

Contemporary privacy law in much of the world, including in India, is distinctly focused on individual agency. This is apparent both from how courts have developed the right to privacy in India, and from the distinct focus of the Personal Data Protection Bill and other data protection efforts, which explicitly provide rights over information only insofar as it relates to an identifiable individual. If privacy is understood as providing control over contexts and elements of identity and personality through information, these interests also have a collective dimension, extending beyond individuals. Information about groups of individuals, at an aggregate level, can infringe upon privacy when it is used to extrapolate or infer group behaviour or attributes, and in turn to make decisions which have consequences for members of those groups.

 

As the Draft Report recognises, ‘group privacy’ is an emergent concept, which attempts to look both at groups as entities capable of exercising privacy rights and obligations, and at the relations between individual membership in groups and its impact on privacy. The collective dimension of privacy has become particularly relevant in contemporary debates around algorithmic data analytics and profiling.

 

Modern information technologies process and analyse information at aggregate levels, which often obscures the individual element or contribution in order to make broader generalisations and decisions about groups and their behaviour. Consider, for example, migration data used to map the movements of specific communities across borders and, in some cases, to intervene in and control their movement. Decisions about modelling such behaviour and acting on it are taken on the basis of information about the group as an aggregate, and not simply on the behaviour of identified individuals within it. Every member of this community is affected by the information collected about it and by the decisions taken on account of it, giving rise to a ‘community interest’ in information that is distinct from any individual interest.
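
To make this concrete, here is a minimal, purely illustrative sketch (the records, community names and threshold below are all hypothetical) of a decision keyed to a community as an aggregate rather than to any identifiable individual:

```python
# A toy sketch: a decision taken on aggregate, group-level information.
# All records, community names and the threshold are hypothetical.
from collections import defaultdict

# De-identified movement records: (community, origin_district, destination_district)
records = [
    ("community_a", "district_1", "district_2"),
    ("community_a", "district_1", "district_2"),
    ("community_a", "district_3", "district_2"),
    ("community_b", "district_2", "district_4"),
]

# Aggregate: count cross-district movements per community.
movement_counts = defaultdict(int)
for community, origin, destination in records:
    if origin != destination:
        movement_counts[community] += 1

# The policy decision attaches to the group, not to any identified individual,
# yet every member of the flagged community is affected by it.
THRESHOLD = 2  # hypothetical
for community, count in movement_counts.items():
    if count >= THRESHOLD:
        print(f"Flag {community} corridor for additional screening")
```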

As Barocas and Nissenbaum note, regulatory responses which focus only on individual control over information, such as anonymisation or notice and consent, offer very limited protection when processing and decision-making take place at the aggregate level. The Draft Report’s recognition of the collective dimension of privacy is therefore a welcome step.

 

Defining a collective is crucial to demarcating these group interests. All groups are defined on the basis of existing or possible collective attributes of their members. However, data analytics changes the possibilities for defining groups. As Mittelstadt notes, group identity is no longer formed only on the basis of pre-existing commonalities between members (like caste, gender or geography); data analytics increasingly results in the creation of ‘ad-hoc’ groups of individuals based on attributes or information generated through profiling (for example, specific consumption patterns, online behaviour or movement), without members even being aware of their inclusion. The creation of new, indeterminate categories of communities makes the task of defining group privacy rights much trickier than, say, protecting determinate categories which may already be covered by existing anti-discrimination law.
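
As a purely illustrative sketch of this kind of profiling (the behavioural features are hypothetical, and scikit-learn is assumed to be available), the snippet below shows how a clustering routine can invent an ad-hoc group from behavioural data alone, with membership determined entirely by the model rather than by any pre-existing community:

```python
# A toy sketch: profiling sorts individuals into ad-hoc groups that map onto
# no pre-existing community. Feature names and values are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

# Per-user behavioural features: [late-night app usage (hrs/week), ride-hailing trips/week]
behaviour = np.array([
    [12.0, 9.0],
    [11.5, 8.0],
    [1.0, 1.5],
    [0.5, 2.0],
    [10.0, 10.5],
])

# K-means partitions users into two clusters inferred from the data itself;
# no member opted in to, or is aware of, the resulting grouping.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(behaviour)
print(labels)  # e.g. [0 0 1 1 0] -- an ad-hoc segment that could be targeted or priced differently
```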

 

Indian law must contend with these difficult questions and probe the collective dimensions of the right to privacy, particularly in the face of automated data processing and decision-making technologies. This requires exploring both the relations between individual data protection and group membership, and how collective rights in information can be accounted for and acted upon. The Personal Data Protection Bill, for example, already accounts for individuals who may be affected by decisions made on the basis of both static and ad-hoc group membership, by expanding the scope of personal data to include analytical inferences, and by providing for greater control over specific categories of information like race or sexuality.

 

The Draft Report on Non-Personal Data does not dwell much on how group privacy may be affected by information processing. Rather, it proceeds to define group interests in information ontologically, as entailing ownership, and lays down a basis for how such collective ownership of data may be exercised, in particular through the concept of ‘data trusts’. In the next post, we will analyse and critique the concept of data trusts in the context of the Draft Report.


Further Reading:

 

Taylor, L., Floridi, L., & van der Sloot, B. (eds.) (2017). Group Privacy: New Challenges of Data Technologies. Dordrecht: Springer. (here)

 

Tisne, M., The Data Delusion, Stanford Cyber Policy Center. (here)
