Preface and Foreword
Preface
Platforms have power. But this power is not unchecked. Governments have an important role to play in protecting their citizens’ rights vis-à-vis third parties and in ensuring a communication order in which rights are not violated. (And, of course, they also need to respect human rights themselves and not arbitrarily shut down sites or use their power to make the Internet less free and open.) As leader of Working Group 2, it is my distinct privilege to present this collection, which unites studies by researchers within the Global Digital Human Rights Network on issues connected to the overarching question of how platforms deal with human rights and their human rights obligations. This study is a key deliverable of our working group in the second year of the Global Digital Human Rights Network’s activities. We will follow up with Guidelines for platforms and an Assessment Model for states and other stakeholders in 2024. We developed this study under pandemic conditions but were able to meet in the Tyrolean Alps in Obergurgl, Austria, in July 2022 to finalize it.
Matthias C. Kettemann
Leibniz Institute for Media Research | Hans-Bredow-Institut, Hamburg, Germany
Humboldt Institute for Internet and Society, Berlin, Germany
Department of Theory and Future of Law | University of Innsbruck, Austria
Foreword
The Global Digital Human Rights Network is proud to present this important study of the practices of online platforms when faced with human rights challenges. The overarching concern of whether human rights can be safeguarded online as efficiently as offline is reflected in the topics of platform power, hate speech and discrimination, the limitations of private governance mechanisms, the governance of election-related information and institutional responses aimed at making private platform rules better. The broader philosophical assertion of the sameness of human rights online and offline is made manifest in specific issues related to the transposability of offline rules and principles to the online environment.
The Network wishes to recognize the efforts of Professor Matthias C. Kettemann and his team, as well as all contributors, for having undertaken this timely research and produced valuable insights and conclusions. This comparative study aims to demystify how platforms deal with rights – and to clarify whether they take rights seriously. It contributes to the mission of academia to engage with civil society and with political and corporate stakeholders in conceptualizing the challenges of human rights protection online. We expect that the study will not only provide a valuable contribution to human rights scholarship, but will also influence human rights discourse more widely, at various levels and in different regions.
Mart Susi
Professor of Human Rights Law, Tallinn University
Chair of the Global Digital Human Rights Network
Tools and Vectors of Platform Power
The Power of App Stores and Their Normative Orders
App stores govern content one institutional level above social networks. They exercise power through largely vague and ambiguous terms and conditions, which leads to inconsistent app moderation practices. Regulation to date has not addressed this problem sufficiently. However, the rules in the upcoming Digital Services Act and Digital Markets Act will increase the obligations app stores have.
(Niche) Platforms as Experimental Spaces for Content Moderation - Mapping Small, Medium and Niche Platforms Online
We map and define the term ‘niche platforms’ and examine different aspects of how small and medium platforms are defined in law as well as sociologically and ethnographically. We question the current size-based regulation of platforms and show which other factors play a role in establishing a niche platform (such as a platform’s thematic orientation). We plead for a more nuanced regulation of platforms.
Facebook and Artificial Intelligence: A Good Practices Review
Humans are usually good at recognizing which content can hurt sensibilities, but machines still have great trouble distinguishing hate speech from legitimate discussion of race, sex, politics and similar topics. This is one of the great challenges of artificial intelligence. The problem of detecting such content and comments has not been solved adequately by the AI systems Facebook has in use.
Hate Speech and Discrimination
Discrimination on Online Platforms: Legal Framework, Liability Regime and Best Practices
Online discrimination may resemble traditional discrimination, but it can have more serious consequences, as the internet plays an essential role in our lives, shaping our view of the world, our opinions and our values. But who is responsible? Even though the safe harbor principle still applies in Europe and the USA, platforms are far from neutral intermediaries. They have the ability to shape the information published on the platform, and they profit financially from the interaction that users have with the information present on their platforms. In addition, the design of platforms can shape the form and substance of their users’ content. By analysing existing regulation and community standards, we show which measures are best suited for preventing and redressing online discrimination.
Online Hate Speech - User Perception and Experience Between Law and Ethics
‘Governance’ of online hate speech (OHS) has become a buzzword in social media research and practice. In stakeholder discussions, the opinions of users remain underexplored, and data on their experiences and perceptions is scarce. The present paper focuses on five case studies of model OHS postings in the context of the Austrian OHS governance system. For these case studies, 157 respondents assessed in an online survey whether a posting should be deleted according to their own ethical standards, whether they believed that this posting was currently punishable under Austrian criminal law, and whether it should be punishable. We found that OHS awareness among our respondent group was high and that there was a preference for state regulation, i.e., punishability under national criminal law, and for the deletion of OHS postings. At the same time, readiness for counter-speech and for reporting postings for deletion remains relatively low. Thus, OHS postings are hardly ever answered directly or forwarded to specialised organisations and/or the police. If OHS postings are reported, it is mostly done via the channels of the respective platform.
The Impact of Online Hate Speech on Muslim Women: Some Evidence from the UK
This intersectional analysis of the implications of the ‘racialized’ representation of Muslim women online reveals why and how they experience harm online as a consequence of their gender, religious affiliation and ethnic origin, which single them out as ‘targets’ of online hatred. This case study on the UK shows how online hate speech securitizes Muslim women and thereby sets the basis for serious limitations of their fundamental rights, extending beyond freedom of expression.
Protecting Rights on Platforms
Pandemics and Platforms: Private Governance of (Dis)Information in Crisis Situations
What role do online platforms play in managing and governing information during the pandemic? Chinese platforms cooperated substantially with the government’s message (and message control) on COVID-19, but US-based platforms like Twitter and Facebook, which had employed a hands-off approach to certain types of disinformation in the past, also invested considerably in the tools necessary to govern online disinformation more actively. Facebook, for instance, deleted Facebook events for anti-lockdown demonstrations, while Twitter had to rely heavily on automated filtering (with its human content governance employees working from home). Overall, we have seen a private order of public communication on the pandemic emerge.
Legal Mechanisms for Protecting Freedom of Expression on the Internet – The Case of Serbia
Serbia’s mechanisms for protecting online freedom of expression are still developing, partly due to the country’s digital economic underdevelopment, but also due to major platforms’ lack of interest in developing and applying rules specifically for the Serbian market. We suggest adopting a new media law that recognizes and regulates platforms in light of their role for online discourses, even though they are not media in the traditional sense of the concept. When it comes to hate speech on the internet, although there is no doubt that there is room for improvement of the legal framework, the existing constitutional and legal provisions provide sufficient guarantees for protection against hate speech. Rather, the application of the existing legal framework needs to be refined.
Digital Rights of Platform Workers in Italian Jurisprudence
The social relevance of so-called ‘platform workers’ has surged dramatically during the pandemic. This contribution explores how issues concerning their ‘digital rights’ have been addressed in the Italian legal system and suggests possible remedies to reduce the vulnerability of members of this new workforce.
Platforms and Elections
The Legal Framework of Online Parliamentary Election Campaigning - An Overview of the Legal Obligations of Parties and Platforms
The German legal system provides for an interplay of rights and obligations for both political parties, especially if they are members of the government, and platforms. For political parties, these are, in particular, constitutional principles and their elaboration in ordinary statute law, while platforms have so far been regulated primarily by media law. The EU’s regulatory projects, especially the DSA and the DMA, supplement this catalogue with far-reaching obligations for platforms.
Improving Platform Rules
Platform-proofing Democracy - Social Media Councils as Tools to Increase the Public Accountability of Online Platforms
New institutional configurations represent a good opportunity to increase the legitimacy of the power platforms wield over internet users and to advance the protection of individual rights against platform overreach. Such institutional configurations can be conceived as expert-based or participatory ‘Social Media Councils’, but more research needs to be done on the different practical avenues for their implementation.
Internet and Self-Regulation: Media Councils as Models for Social Media Councils?
Social Media Councils are at the moment mostly theoretical innovations awaiting pilot projects. Media Councils are an established part of media regulation and could therefore provide role models as well as best practices for Social Media Councils. It is especially important to build trust between the different stakeholders when new institutions are formed; otherwise, such institutions find themselves in crisis mode from the beginning.