Online platforms, particularly social media platforms, are among the most prominent forms of media today. Information flows over these online media infrastructures are shaped by a variety of factors – including the technical design of the platforms, the governance models and internal rules which explicitly determine how particular forms of information are curtailed or promoted, and the legal and regulatory regimes to which they must respond. This blog addresses some of the questions and challenges arising out of empirical methods for studying platforms.
Almost all major online platforms are privately controlled firms, whose empirical disclosures to the public are limited to what corporate disclosure requirements demand. As platforms have intervened more actively in moderating third-party content, internal governance practices aimed at providing more information about platform behaviour have grown in tandem – partly in response to user demands for greater transparency and, in certain cases, in response to regulatory developments. Even so, substantial challenges remain in empirically studying the content moderation practices of online firms, presenting impediments not only to researchers seeking to understand platform behaviour but equally to regulators and policymakers intending to govern it.
Some important methods of studying platform content moderation have been ethnographic. Kate Klonick's seminal work on Facebook's internal governance processes and the creation of its Oversight Board, for example, was built through interviews and observations conducted in the field, by observing corporate behaviour at Facebook. Important ethnographic work has also studied content moderation from the 'bottom up' – through the lens of the human labour tasked with deciphering and implementing platform rules – such as Sarah Roberts's research on the human workers behind content moderation.
Qualitative and quantitative studies of platform disclosures and policies are another methodology used to empirically study platform behaviour. Platforms make disclosures through their public-facing terms and conditions, 'community guidelines', and 'transparency reports', which disclose data about takedowns and censorship practices, particularly those resulting from government requests. Some important advocacy work, such as the Ranking Digital Rights project, has focused on creating ranked indexes for platforms, similar to certain human rights performance indexes. Similarly, the Lumen database, hosted at Harvard University, has been collecting and examining content takedown requests received by companies such as Google, which has been invaluable for researching and advocating for improved content moderation regimes.
Understanding the limitations of current transparency practices is critical for developing policy responses – both for ensuring transparency as a goal in itself, and for ensuring that disclosures about content moderation practices can instrumentally lead to better forms of regulation and policy relating to online media. In our upcoming workshop and publication on 'The Philosophy and Law of Information Regulation in India', we will be exploring methods and methodologies for empirical studies of new media and their implications for policy and regulation.
Content Moderation Reading List at the Social Media Collective: https://socialmediacollective.org/reading-lists/content-moderation-reading-list/