Rights and Trump(s): Some Dilemmas of Online Content Moderation

January 25, 2021 | Divij Joshi

 

On January 8, 2021, Twitter permanently suspended @realDonaldTrump, the personal account of the President of the United States of America. The decision came in the wake of sustained misinformation and incitement from Donald Trump, culminating in the violent attack on the United States Capitol in Washington, D.C.

 

The decision has once again revealed dilemmas at the heart of social media. Online platforms are increasingly responsible for the distribution and creation of media. With the internet enabling unprecedented media saturation, how do we conceive of the regulation of online speech? And how do we mediate between often-competing public values like freedom of expression and the prevention of harms to dignity?

 

This blog poses some provocations and dilemmas which are crucial to answering these questions.

 

 

Not ‘Whether’, but ‘How’

The first dilemma is whether online ‘intermediaries’ should engage in moderating the content on their platforms, or whether they should merely carry content with no editorial responsibility. The myth of the ‘neutral intermediary’ has been surprisingly pervasive and resilient, owing in large part to the posturing of online social media companies, as well as to the legacy of regulating online services as intermediaries.

 

In part due to the nature of their operations, and in part due to their self-proclaimed status as ‘mere platforms’, much of the conversation around online speech today conveniently ignores that the moderation of online speech is at the very heart of online social media. From their technical design to their written rules and terms of service, to decisions about what content to curate, alter, censor or prioritise – the moderation of online content is a necessary part of any platform’s operation. As such, it is unhelpful to expect online platforms to ‘not moderate’ online speech or to be agnostic towards the content carried on them.

 

The appropriate question, therefore, is not ‘whether’ online platforms moderate or influence online content, but rather what the limitations and appropriate forms of such moderation are. For the regulation of online speech, this requires looking beyond the binary of platforms as either neutral arbiters of speech or agents bearing complete editorial responsibility.

 

 

Public Values and Private Systems

Online social media platforms are a dominant feature of our media landscape, and they wield considerable sway over public life. As has become increasingly apparent over the last few years, the decisions that platforms make about the speech they host can have consequences for the physical safety of individuals and communities, and even repercussions that threaten public institutions like electoral integrity or values like public health.

 

A second dilemma is how we should consider the actions of these private agents which make consequential public decisions. Does Twitter have the right, as a private corporation, to make editorial decisions without encumbrance, or can (or should) individuals and communities claim rights against such platforms? As a legal or constitutional matter, the answer would depend on the jurisdiction in which the question is asked, but in general, constitutional free speech rights would not extend to requiring particular information to be carried on a private channel or platform (as opposed to a content-agnostic ‘must carry’ rule, as may be required, for example, of ISPs or other kinds of agents).

 

At the same time, there is an urgent need to recognise the infrastructural role that these private companies play as media agents. As it stands, the content moderation decisions of these entities are essentially usurping democratic and judicial functions crucial to societies – for example, the mediation between competing public values like individual free expression rights and the prevention of dignitary harms to individuals and communities. The appropriate task for regulators, therefore, is to understand how the practices and philosophies of private content moderation by online platforms can best be regulated to ensure transparency, accountability, safety and democratic control.

 

The next post will explore the geopolitical dimensions of industrial-scale content moderation practices employed by large platforms.

Divij Joshi
