
Criminal Hate Speech Attributable to Online Platforms: A Call for a Thorough Corporate Remedial Responsibilities Framework in Europe

Published online by Cambridge University Press:  08 September 2025

Eva Nave*
Affiliation:
Leiden University , The Netherlands

Abstract

Online platforms have adopted business models enabling the proliferation of hate speech. In some extreme cases, platforms are being investigated for employing algorithms that amplify criminal hate speech such as incitement to genocide. Legislators have developed binding legal frameworks clarifying the human rights due diligence and liability regimes of these platforms to identify and prevent hate speech. Some of the key legal instruments at the European Union level include the Digital Services Act, the Corporate Sustainability Due Diligence Directive and the Artificial Intelligence Act. However, these legal frameworks fail to clarify the remedial responsibilities of online platforms to redress people harmed by criminal hate speech caused or contributed to by the platforms. This article addresses this legal vacuum by proposing, on the basis of the general corporate human rights responsibilities framework, a comprehensive remedial responsibilities framework for online platforms that caused or contributed to criminal hate speech.

Information

Type
Scholarly Article
Creative Commons
Creative Commons Licence - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

I. Introduction

Business models adopted by online platformsFootnote 1 have contributed to the proliferation of online hate speech. Frances Haugen, a whistleblower from Meta Platforms, Inc. (formerly Facebook, Inc.), revealed that the platform prioritised growth over countering online hate speech in countries such as Afghanistan, Ethiopia and India.Footnote 2 In a more extreme example, Amnesty International and the United Nations alerted the public to Meta's significant contribution to the genocide of the Rohingya in Myanmar after its algorithms failed to take down, and instead amplified, hate speech towards this Muslim community.Footnote 3 Other online platforms have also come under increased scrutiny for adopting content moderation and recommendation algorithms that amplify hate speech.Footnote 4

The framework addressing companies' responsibilities to comply with human rights is developed in the United Nations Guiding Principles on Business and Human Rights (UNGPs).Footnote 5 The UNGPs, though not legally binding, were endorsed by the United Nations Human Rights Council in 2011 and are the key international standard-setting instrument setting out the three essential corporate human rights responsibilities. Based on the UNGPs, companies must adopt: (i) a policy commitment to respect human rights; (ii) a human rights due diligence process to identify, prevent and mitigate adverse impacts on human rights; and (iii) mechanisms to remediate any adverse human rights impacts that the company caused or contributed to.Footnote 6

At the European Union (EU) level, online platforms have the corporate human rights responsibility to counter illegal content, including hate speech. The Corporate Sustainability Due Diligence Directive (CSDDD),Footnote 7 the Artificial Intelligence Act (AI Act),Footnote 8 the Digital Services Act (DSA)Footnote 9 and the Audiovisual Media Services Directive (AVMSD)Footnote 10 all contribute to establishing the human rights due diligence of online platforms to counter online hate speech. Nevertheless, this European legal framework fails to clarify the third responsibility stemming from the UNGPs, i.e., the remedial responsibilities of online platforms to redress people harmedFootnote 11 by online hate speech caused or contributed to by the platforms.

This article’s central research question is two-fold: In compliance with the right to an effective remedy, how can European legislators better align the framework on corporate remedial responsibilities of online platforms which caused or contributed to criminal hate speech with the general framework on corporate remedial responsibilities? Additionally, are there heightened remedial responsibilities for very large online platforms (VLOPs)Footnote 12 or for cases of criminal hate speech amounting to gross violations of human rights?

This article covers legal and policy instruments from both the European Union and the Council of Europe given the alignment between the two human rights systems.Footnote 13 Occasional references to international human rights instruments contextualise their influence on European instruments. Doctrinal research identifies legal loopholes in legislation and suggests normative approaches that are compliant with human rights. This article focuses on hate speech on online platforms for two reasons. First, online platforms, and especially VLOPs, constitute the most problematic digital environment for the rapid dissemination of hate speech. Second, online platforms are increasingly regulated at the European level and thus allow for a more consolidated normative analysis.

To answer the research question, Section II analyses the European standards on criminal hate speech. Given that there is no definition of criminal hate speech at the EU level,Footnote 14 the central instrument investigated is CM/Rec(2022)16 adopted by the Council of Europe Committee of Ministers.Footnote 15 This section also initiates the academic debate about the elements of criminal hate speech that may classify as gross violations of human rights. In these cases, the international standards on the right to remedy for gross violations of human rights should apply.

Facebook's contribution to the genocide of the Rohingya in Myanmar is used as an example mainly in Section II, but is also occasionally referred to in other sections. This case is relevant because it is among the most thoroughly documented, illustrating both the societal impact of a VLOP contributing to hate speech and the consequences of non-compliance with corporate remedial responsibilities.

Section III investigates the application of the right to effective remedy prescribed in Art. 13 of the European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR),Footnote 16 Art. 47 of the Charter of Fundamental Rights of the European Union (CFREU),Footnote 17 and the Victims' Rights DirectiveFootnote 18 to online hate speech. This section also examines the international standards on the right to remedy for cases of gross violations of human rights.

Section IV clarifies the general corporate remedial responsibility by explaining the framework stemming from the UNGPs and from the Organisation for Economic Co-operation and Development (OECD) Guidelines for Multinational Enterprises on Responsible Business Conduct and OECD Due Diligence Guidance (OECD Guidelines).Footnote 19 This framework covers: modes of responsibility, remedial processes and remedial outcomes. This framework applies to online platforms that caused or contributed to criminal hate speech.

Section V highlights the need for, and proposes legal standards for, a corporate remedial responsibilities framework at the EU level, including for online platforms that caused or contributed to criminal hate speech. The legal instruments reviewed are the CSDDD, the AI Act, the DSA and the AVMSD, and the policy instruments reviewed are the Code of Conduct on countering illegal hate speech online and Recommendations CM/Rec(2022)16 and CM/Rec(2014)6.Footnote 20 The proposed standards focus on clarifying modes of responsibility, remedial processes and remedial outcomes. In this context, the three remedial outcomes analysed are guarantees of non-repetition, restitution and compensation.

II. Criminal Hate Speech on Online Platforms

A. European Standards on Criminal Hate Speech

Although there is no binding definition of hate speech in international or European human rights law, CM/Rec(2022)16Footnote 21 distils the key elements for the regulation of hate speech both online and offline. CM/Rec(2022)16 clarifies that hate speech is always illegal as it is either (1) criminalised in its most severe forms or (2) prohibited under civil or administrative law.Footnote 22

This article explores the legal framework applicable to category (1), i.e., criminal hate speech.Footnote 23 The decision to focus on criminal hate speech is based on a growing recognition of its key elements at the European level, specifically following the adoption of CM/Rec(2022)16.Footnote 24 Paragraph 11 of CM/Rec(2022)16 clarifies that, based on existing international and regional human rights law, the following hateful expressions are criminally actionable:

a. public incitement to commit genocide, crimes against humanity or war crimes;

b. public incitement to hatred, violence or discrimination;

c. racist,Footnote 25 xenophobic, sexist and LGBTI-phobic threats;

d. racist, xenophobic, sexist and LGBTI-phobic public insults under conditions such as those set out specifically for online insults in the Additional Protocol to the Convention on Cybercrime concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems (ETS No. 189);

e. public denial, trivialisation and condoning of genocide, crimes against humanity or war crimes; and

f. intentional dissemination of material that contains such expressions of hate speech (listed in a-e above) including ideas based on racial superiority or hatred.Footnote 26

CM/Rec(2022)16 takes an open-ended approach to the list of impermissible groundsFootnote 27 for hate speech as both Paragraph 11 and Paragraph 2 introduce a list of several characteristics by using 'such as'.Footnote 28 Nevertheless, this article argues that CM/Rec(2022)16 could have improved legal coherence had it expressly referred to two elements stemming from the critical legal conceptualisation of hate speech: first, the historical oppression perpetuated by hate speech;Footnote 29 and, second, the intersectionality of systems of oppression, with a view to adequately reflecting the harm caused by hate speech.Footnote 30 Hence, the subsequent analysis in this article adopts an explicitly open-ended conceptualisation of impermissible grounds for hate speech, grounded in the acknowledgement that hate speech is used to perpetuate systems of oppression, and that the intersectionality of historical systems of oppression is an aggravating factor harming people targeted by hate speech.

At the EU level, the European Commission published a Communication in 2021 encouraging the Council of the European Union (Council) to add hate speech and hate crime to the list of EU crimes under Art. 83(1) of the Treaty on the Functioning of the European Union (TFEU).Footnote 31 However, until the EU adopts such legislation on criminal hate speech, this article follows the conceptualisation of criminal hate speech in Paragraph 11 of CM/Rec(2022)16.

Finally, certain elements of criminal hate speech may qualify as gross violations of human rights. The normative framework does not clarify which, if any, elements of criminal hate speech amount to gross violations of human rights. On the one hand, Paragraph 11 of CM/Rec(2022)16 lists, alongside incitement to genocide, incitement to crimes against humanity and incitement to war crimes as criminal hate speech. On the other hand, international criminal law does not clarify whether these three types of incitement would qualify as the most serious crimes in international law amounting to gross violations of human rights.Footnote 32

This article does not attempt to resolve this normative debate. Rather, this article seeks to acknowledge the possibility that elements of criminal hate speech may amount to gross violations of human rights and thus result in the application of the right to remedy and reparation for victims of gross human rights violations. This analysis is key to adequately frame the corporate remedial responsibilities of online platforms responsible for such criminal hate speech potentially amounting to gross violations of human rights.

B. The Role of Online Platforms

This section introduces, first, the services provided by online platforms and, second, how they facilitate the spread of hate speech on their platforms. After that, this section expands on Meta’s contribution to the genocide of the Rohingya in Myanmar as an example clarifying the problematic role of online platforms contributing to the rise of online and offline hate speech.

Online platforms facilitate the dissemination of user-generated content.Footnote 33 Given the large user base and high amounts of content, online platforms typically employ two types of algorithms to manage content:Footnote 34 (1) content moderation algorithms, and (2) content ranking and recommendation algorithms.Footnote 35

Content moderation algorithms are used to enforce policies of prohibited content. Users are informed about the content that is prohibited on the platform in the terms of service.Footnote 36 Examples of outcomes of content moderation include disabling, labelling, suspension and removal of content.Footnote 37 The terms of service (ToS) often do not clarify the standards used to decide on content moderation outcomes. The current regulatory framework applicable to the ToS provides insufficient guidance regarding the content that should be prohibitedFootnote 38 or the way that the ToS should address the outcomes to be attained from content moderation.Footnote 39
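By way of illustration, the following minimal sketch shows how a content moderation step of the kind described above might map a classifier's policy-violation score to outcomes such as removal or labelling. The thresholds, function names and outcome labels are assumptions introduced here purely for clarity; they do not reflect any platform's actual system, which would also involve human review and far more granular policies.

```python
# Hypothetical sketch of a content moderation step, for illustration only.
# The classifier, thresholds and outcome labels are assumptions, not any
# platform's actual system; real pipelines also rely on human review.

from dataclasses import dataclass

@dataclass
class ModerationDecision:
    outcome: str   # e.g., "remove", "label", "allow"
    reason: str    # the terms-of-service clause invoked

def moderate(post_text: str, policy_violation_score: float) -> ModerationDecision:
    """Map a classifier's policy-violation score to a moderation outcome.

    `policy_violation_score` is assumed to come from an upstream model that
    estimates how likely the post breaches the platform's prohibited-content
    policy (0.0 = clearly permitted, 1.0 = clearly prohibited).
    """
    if policy_violation_score >= 0.9:
        return ModerationDecision("remove", "prohibited content (high confidence)")
    if policy_violation_score >= 0.7:
        return ModerationDecision("label", "potentially violating content flagged for review")
    return ModerationDecision("allow", "no violation detected")

# Example: a borderline post is labelled rather than removed.
print(moderate("example post", policy_violation_score=0.75))
```

The sketch illustrates why the choice of thresholds and outcome categories, typically set out only in the terms of service, determines what users ultimately experience as content moderation.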

Ranking and recommendation algorithms assist with the task of deciding which content to display first on users' newsfeeds or to auto-play after a given video ends. The suggestion of subsequent content that is ranked high is called chaining.Footnote 40 The reverse operation, when content is deliberately not suggested, is called demotion or down-ranking. These algorithms typically aim to link users to other users, to groups or to specific posts that can match their interests and thus maximise engagement on the platform.Footnote 41 Online platforms have disclosed little to no information on the internal processes guiding these ranking and recommendation algorithms or their possible outcomes.Footnote 42
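Similarly, a ranking step that maximises predicted engagement while demoting flagged content can be sketched as follows. The scoring formula and field names are hypothetical and purely illustrative of the chaining and down-ranking operations described above, not a description of any platform's actual recommender system.

```python
# Hypothetical sketch of an engagement-driven ranking step with demotion,
# for illustration only; the scoring formula and field names are assumptions.

from typing import Dict, List

def rank_feed(candidates: List[Dict], demotion_factor: float = 0.1) -> List[Dict]:
    """Order candidate posts by predicted engagement, down-ranking flagged items.

    Each candidate is assumed to carry:
      - "predicted_engagement": an upstream model's estimate of clicks/reactions
      - "flagged_borderline": whether moderation flagged it as borderline content
    Demotion multiplies the score of flagged items instead of removing them,
    which corresponds to the down-ranking described above.
    """
    def score(post: Dict) -> float:
        s = post["predicted_engagement"]
        if post.get("flagged_borderline"):
            s *= demotion_factor
        return s

    return sorted(candidates, key=score, reverse=True)

feed = rank_feed([
    {"id": 1, "predicted_engagement": 0.80, "flagged_borderline": True},
    {"id": 2, "predicted_engagement": 0.35, "flagged_borderline": False},
])
print([post["id"] for post in feed])  # the flagged post is demoted below the other
```

The sketch also makes visible the design choice at issue throughout this article: if the ranking signal is predicted engagement alone, highly engaging but hateful content rises unless it is explicitly demoted or removed.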

The Committee of Ministers of the Council of Europe and the European Commission have warned that the algorithms employed by online platforms can facilitate the dissemination of online hate speech.Footnote 43 To analyse the extent to which online platforms enhance the severity of hate speech, it is relevant to review the context in which the expression was manifested. When assessing the severity of hate speech, the European Court of Human Rights (ECtHR) evaluates 'contextual variables'Footnote 44 such as: the political and social context at the time of the speech;Footnote 45 the speaker's status or role in society;Footnote 46 the reach and form of dissemination of the speech;Footnote 47 the likelihood and imminence that the speech results, directly or indirectly, in harmful consequences;Footnote 48 the nature and size of the audience;Footnote 49 and the perspective of the people targeted by the speech (including their historical oppression).Footnote 50

This article explores how online platforms affect the severity of hate speech by reviewing three contextual variables: (1) the reach of the speech, as well as the size of the audience; (2) the polarised and susceptible nature of the audience; and (3) the likelihood of harm. These three variables were selected based on the algorithms currently discussed within the context of online platforms.

First, online platforms typically enable faster dissemination of content to larger audiences than traditional offline media, thereby amplifying the reach of speech. Users can instantaneously publish content to a wider network than in offline settings. Nevertheless, studies show that the reach of speech is only increased for certain types of content, with hate speech, for example, spreading faster than innocuous content.Footnote 51 Depending on the algorithms deployed, content can be amplified, deamplified, blocked or removed. Typically, algorithms are not trained to process either the context or the languages of already marginalised communities, resulting in the illegal removal of content produced by these communities.Footnote 52 Additionally, it is widely reported that platforms have prioritised user engagement, often at the expense of human rights such as the prohibition of discrimination.Footnote 53 For example, the Facebook PapersFootnote 54 revealed that ranking and recommendation algorithms prioritised the virality of content, often disregarding whether the content was harmful or incited violence.Footnote 55 Consequently, online platforms have increased the reach of hate speech.

Second, online platforms can polarise large audiences of users due to their content recommendation algorithms.Footnote 56 Designed to connect like-minded people, online platforms have facilitated the organisation of 'hate mongers',Footnote 57 and enabled offline violence.Footnote 58 In fact, the Wall Street Journal found that, in 2016, 64 per cent of new members of extremist groups on Facebook in Germany had joined after viewing the platform's algorithmic recommendations.Footnote 59

Third, by amplifying online hate speech and by polarising users, the current algorithms increase the likelihood of harm. Amnesty International has explained how Meta's content moderation algorithms failed to take down content advocating hatred, discrimination and genocide against the Rohingya Muslim community in Myanmar.Footnote 60 This hateful content was then amplified by Meta's ranking algorithm, designed to maximise user engagement by showing such content at the top of newsfeeds. Moreover, hateful videos were also amplified by Facebook when its recommendation algorithm automatically played them in its 'Up Next' feature. The United Nations Independent International Fact-Finding Mission on Myanmar concluded that '[t]he role of social media [was] significant' in the atrocities.Footnote 61

Members of the Rohingya community are seeking remediation from Meta in three judicial actions, including a request for US$1 million for an educational project in Bangladesh refugee camps. Despite admitting that it did not do enough to prevent the platform from being used to incite offline violence,Footnote 62 Meta refuses to remediate through the educational project, communicating instead that it has improved its content moderation algorithms.Footnote 63 Meta does not detail how it has improved its algorithms, and Amnesty International emphasises that compliance with remediation responsibilities must address the victims' harms.Footnote 64

III. Right to Remedy for Criminal Hate Speech Online

Having clarified the conceptualisation of criminal hate speech employed in this article,Footnote 65 this section explains the operationalisation of the human right to an effective remedy of people targeted by criminal hate speech. This section identifies, first, the harm caused by criminal hate speech including on online platforms, then sets out the European standards on the State’s duty to ensure access to an effective remedy for people targeted by criminal hate speech.

A. Harm Caused by Hate Speech

Critical race theory was the first school of legal scholarship to advance the conceptualisation of harms caused by hate speech.Footnote 66 According to this scholarship, hate speech can cause psychological, physical and economic or material harms.Footnote 67 Critical race scholars also stressed the cumulative effect of continued exposure to hate speech.Footnote 68

The psychological harms experienced by people targeted by hate speech include fear, anger, low self-esteem, reduced capacity for attention, withdrawal from society, depression, nightmares, post-traumatic stress and psychosis.Footnote 69 Studies show that these harms have an aggravated impact on younger people and children.Footnote 70 These layers of harm, passed down through generations, make it increasingly difficult to deal with the psychological harms caused by hate speech.Footnote 71 Furthermore, access to psychological support is limited not only because it is expensive, but also because practitioners often come from privileged backgrounds and thus lack the lived experience of people historically targeted by hate speech.Footnote 72

The physical harms that people targeted by hate speech face can be divided into short-term and long-term harms. Short-term physical harms include accelerated breathing and heart rate, dizziness, headaches and raised blood pressure.Footnote 73 In the most serious cases, hate speech inciting violence can lead to hate crimes, war crimes, genocide or crimes against humanity.

Hate speech may also cause economic or material harms to the people it targets. Hate speech may jeopardise access to, e.g., education, health or employment if, through continued exposure to hate speech, people are forced to leave their studies, jobs, neighbourhoods, cities or countries, or to avoid public spaces altogether. In some of the most extreme cases, people targeted by hate speech may become refugees seeking asylum, often facing dire situations ranging from insecurity to lack of access to water and other basic human rights.

In the specific context of harms experienced by people targeted by criminal hate speech on online platforms, all of the harms mentioned above apply, i.e., psychological, physical and economic harms. Additional impacts should also be considered: for example, disengaging from online platforms to avoid exposure to hate speech may limit the exercise of the right of access to information and of the freedoms of assembly or association.Footnote 74

B. The State’s Duty to Ensure Access to Remedy

European Standards on Remedies

People harmed by hate speech (whether online or offline), and especially by criminal hate speech, have the right to an effective remedy. The right to an effective remedy is a fundamental human right under international and European human rights law.Footnote 75 This right derives from a general legal principle that every breach of international law results in an obligation to provide remedy.Footnote 76 This article focuses primarily on the European standards.

At the Council of Europe level, Art. 13 of the ECHR establishes the right to an effective remedy before a national authority. This provision lays down the State’s positive obligation to investigate allegations of violations, including by private companies, of human rights in a ‘diligent, thorough, and effective’ manner.Footnote 77 The national authority may be a judicial or non-judicial body, if the latter fulfils the independence and impartiality prerequisites.Footnote 78 It is essential that remedies are ‘available, known, accessible, affordable, and capable of providing adequate redress’.Footnote 79 Importantly, the national authorities have the primary responsibility to investigate violations of human rights and a person may only appeal to the ECtHR after exhausting all available domestic procedures.

The right to remedy exists when there is an ‘arguable’ grievance under the ECHR.Footnote 80 This means that Art. 13 of the ECHR is complementary to other rightsFootnote 81 and may be invoked in two circumstances. First, if there is an allegation of a violation of another right in the ECHR. Second, if the person cannot effectively exercise the right to remedy at the national level.Footnote 82 Finally, according to Art. 13 of the ECHR, the remedy must directly remediate the violation.Footnote 83 Nonetheless, in light of the margin of appreciation afforded to Contracting States,Footnote 84 there is no specific prescription of the adequate form of remedy.Footnote 85 Instead, the effectiveness of the remedy should be evaluated on a case-by-case basis.Footnote 86

At the European Union level, Art. 47 of the CFREU prescribes that 'Everyone whose rights and freedoms guaranteed by the law of the Union are violated has the right to an effective remedy before a tribunal in compliance with the conditions laid down in this Article (…)'.Footnote 87 While the provisions in the CFREU with corresponding rights in the ECHR must be interpreted with similar meaning and scope to the provisions in the ECHR, there is a key difference between Art. 13 of the ECHR and Art. 47 of the CFREU. Art. 47 of the CFREU stipulates that the competent national authority must be a judicial institution. This may be interpreted as strengthening the right, since judicial bodies will in principle be independent and impartial, while other non-judicial bodies may not be. Nevertheless, this requirement may also place an added burden on the judicial system and may result in more constraints on the exercise of the right to an effective remedy.

Additionally, crime survivors in the EU are covered by the Victims' Rights Directive, which establishes minimum requirements for the rights, assistance and protection of crime survivors.Footnote 88 Key rights include the right to legal aid, the right to a fair remedy,Footnote 89 the right to the return of property and the right to compensation.Footnote 90 Since the EU does not include hate speech in the EU list of crimes, the Victims' Rights Directive applies only to elements of hate speech criminalised in the EU.Footnote 91

Applying the European framework on the right to effective remedy established by the CoE and by the EU to cases of online hate speech, two observations are relevant. First, it is clear that national authorities have the duty to protect, investigate and ensure access to remedies. This framework applies to acts committed in digital settings by either users or internet intermediaries, e.g., criminal hate speech.Footnote 92 Importantly, remedial avenues must be available, known, accessible and affordable.

Second, there are different legal thresholds at both the CoE and the EU level regarding the competent authority with which to lodge a remedy claim. Given the extensive work on the right to remedy developed by the Council of Europe for cases of criminal acts online and also recognizing that effective processes may at times be found outside judicial settings, this article follows the approach that remedies can be sought with both judicial and non-judicial institutions, as long as these are independent and impartial.

Remedies for gross human rights violations

Some elements of criminal hate speech may amount to gross human rights violations. In these cases, the international and the European frameworks on the right to remedy and reparation for victims of gross violations of human rights law are complementary and should apply.

At the international level, States are obliged to: (a) prevent violations; (b) effectively, promptly, thoroughly, and impartially investigate violations and, when necessary, take action against those responsible; (c) provide alleged victims with equal and effective access to justice; and, (d) provide effective remedies.Footnote 93 This framework calls for States to adopt provisions for universal jurisdiction.Footnote 94 Importantly, the conceptualisation of victims includes persons individually or collectively harmed physically, psychologically, emotionally, economically or who suffered substantial impairment of their fundamental rights.Footnote 95

At the European level, the Council Decision enabling targeted restrictive measures to address serious human rights abuses worldwide applies.Footnote 96 The understanding of human rights abuses in this framework covers genocide and crimes against humanity, and extends to other human rights abuses if widespread and systematic.Footnote 97 The sanctions apply to both natural and legal persons, such as companies.Footnote 98 For these natural or legal persons, sanctions include inter alia asset freezes and a prohibition on making funds or economic resources available. Remarkably, this Council Decision establishes a global human rights sanctions regime providing the EU with a framework to target inter alia companies responsible for serious human rights violations, regardless of where these took place.

Applying these regimes to survivors of criminal hate speech amounting to gross human rights violations, it becomes clear that States are obliged to ensure access to an effective remedy, including when harm was caused by businesses. Moreover, the conceptualisation of survivor should include people directly and indirectly affected by the crime. Finally, businesses may be considered the perpetrators and thus may have to comply with restrictive sanctions, e.g., asset freeze measures. For example, the EU sanctions regime enables the EU to impose sanctions on Meta for its significant contribution to the genocide of the Rohingya in Myanmar.

The UN and EU standards on the right to an effective remedy for survivors of gross human rights violations offer clearer and more inclusive definitions of survivors, perpetrators and remedial processes than the general European standards on the right to an effective remedy. First, while the general standards consider as survivors only those directly impacted by the crime, the specific standards clarify that, for cases of gross violations of human rights, survivors are those affected both directly and indirectly. Second, the specific standards for victims of gross violations of human rights expressly foresee that non-state actors can be responsible. Third, the specific standards go beyond the general standards by explicitly calling on States to implement universal jurisdiction and restrictive measures to address gross violations of human rights, including when committed by companies outside their territory. Applying these standards to criminal hate speech, it follows that EU legislators have a heightened duty to align corporate remedial responsibilities with the right to an effective remedy for criminal hate speech cases amounting to gross human rights violations.

IV. General Framework: Corporate Remedial Responsibilities for Online Platforms

This section investigates the general remedial responsibilities when the harm is attributable to businesses, including online platforms, and clarifies the modes of corporate responsibility, the remedial processes and the remedial outcomes.

A. Modes of Corporate Responsibility

The UNGPs articulate corporate remedial responsibilities for businesses which caused or contributed to adverse impacts on human rights.Footnote 99 An adverse human rights impact occurs when the exercise of a human right is excluded or reduced; such impacts can be either actual or potential.Footnote 100 An actual impact is an adverse impact that has already occurred or is occurring, and a potential impact is one that has not yet occurred. A potential adverse impact can be either avoidable or unavoidable, the latter ultimately materialising as an actual adverse impact.

The general framework on corporate human rights remedial responsibility prescribes two modes of remedial responsibilities: the responsibility to remediate and the responsibility to use leverage.Footnote 101 The corporate responsibility to remediate is encapsulated in Guiding Principle 22 of the UNGPs as follows:

‘Where business enterprises identify that they have caused or contributed to adverse impacts, they should provide for or cooperate in their remediation through legitimate processes’.Footnote 102

The OECD Guidance clarifies that Principle 22 establishes the corporate responsibility to remediate actual adverse human rights impacts that the company caused or contributed to, as well as potential but unavoidable adverse human rights impacts that the company will cause or contribute to. A business caused an actual adverse human rights impact when its operations alone resulted in the adverse impact.Footnote 103

Conversely, a business is said to have contributed to an actual adverse impact on human rights when (i) its operations, together with the operations of other businesses, caused the adverse impact; or (ii) its operations alone caused, facilitated or incentivised another business to cause an adverse impact on human rights. Notably, the contribution must be substantial.Footnote 104

The second mode of corporate remedial responsibility encompasses the use of leverage to prevent or mitigate actual adverse impacts to which the company is directly linked, as well as potential adverse impacts that are avoidable. A company is directly linked to an actual adverse human rights impact if the connection is not sufficiently substantial to amount to contribution. In these cases, the company is not required to remediate, but rather to use its leverage to influence the actor causing the adverse effects to prevent or reduce them.Footnote 105 Figure 1 summarises the general framework on corporate remedial responsibilities.

Figure 1. Corporate remedial responsibilities for adverse human rights impacts.
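For clarity, the decision logic summarised in Figure 1 can also be expressed as a minimal sketch: a business remediates when it caused or (substantially) contributed to an actual or unavoidable adverse impact, and otherwise uses leverage. The function and labels below are illustrative devices introduced here, not terms defined in the UNGPs or the OECD Guidelines.

```python
# Minimal sketch, for illustration only, of the decision logic in Figure 1.
# The enum values and function name are assumptions introduced for clarity.

from enum import Enum

class Involvement(Enum):
    CAUSED = "caused"
    CONTRIBUTED = "contributed"            # substantial contribution
    DIRECTLY_LINKED = "directly_linked"    # linked without causing or contributing

def remedial_responsibility(involvement: Involvement, impact_actual_or_unavoidable: bool) -> str:
    """Return the expected corporate response under the general framework.

    - caused or (substantially) contributed to an actual or unavoidable impact
      -> provide for or cooperate in remediation (Guiding Principle 22);
    - directly linked, or the impact is potential and avoidable
      -> use leverage to prevent or mitigate the impact.
    """
    if involvement in (Involvement.CAUSED, Involvement.CONTRIBUTED) and impact_actual_or_unavoidable:
        return "remediate"
    return "use leverage to prevent or mitigate"

# Example: a platform that substantially contributed to an actual impact must remediate.
print(remedial_responsibility(Involvement.CONTRIBUTED, impact_actual_or_unavoidable=True))
```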

This general framework articulates remedial responsibilities for all businesses, including online platforms. The following sections investigate the remedial processes and outcomes of the corporate responsibility to remediate actual or unavoidable adverse impacts on human rights, including criminal hate speech caused or contributed to by online platforms.

B. Remedial Processes

Remedial processes are the processes through which a remedial responsibility is assessed, and may either be ad hoc or pre-established for specific adverse human rights impacts.Footnote 106 For businesses whose operations pose a high risk to human rights, a pre-established investigative and remedial mechanism is advisable.Footnote 107 In these cases, businesses should adopt an operational-level grievance mechanismFootnote 108 to enable individuals directly affected by the business’ operations, to formally lodge concerns, complaints and seek remedies. Non-judicial remedial processes, such as the operational-level grievance mechanism, should be designed and operated to be effective.Footnote 109 This means that these non-judicial grievance mechanisms should be legitimate, accessible, predictable, equitable, transparent, rights-compatible and a source for continuous learning.Footnote 110

Businesses may provide for remediation directly or in cooperation with another legitimate process.Footnote 111 Consequently, there is no need for a prior judicial decision,Footnote 112 and businesses that acknowledge having caused or contributed to actual or unavoidable adverse human rights impacts have the responsibility to remediate. Nevertheless, when businesses do not provide remediation proactively, State-based remedial processes should be initiated and businesses must collaborate.Footnote 113

Applying these standards to online platforms, the functionality allowing users to report content arguably qualifies as an operational-level grievance mechanism. Nevertheless, this functionality alone does not fulfil the legitimacy criteria of remedial processes if not overseen by impartial bodies.Footnote 114 Additionally, the reporting process normally assesses whether content complies with terms of service and not with human rights standards.Footnote 115 For cases where the online platforms caused or contributed to criminal hate speech, if platforms do not comply with remedial processes, these should be initiated by States.Footnote 116 The standards on the individual right to remedy apply and, equally, the special regime on remedies for gross human rights violations applies to cases of criminal hate speech amounting to gross human rights violations.

C. Remedial Outcomes

To determine the most appropriate remedial outcomes, businesses should seek to clarify what remedy the victims find most effective.Footnote 117 The general framework for remedial outcomes includes: restitution, satisfaction, rehabilitation, compensation and guarantees of non-repetition of harm.Footnote 118 These remedial outcomes were endorsed by the United Nations framework for cases of gross violations of human rights.Footnote 119

These remedial outcomes apply to any business, including online platforms, which caused or contributed to criminal hate speech, including hate speech amounting to gross human rights violations. Explaining in more detail what these outcomes entail, restitution aims to restore the exercise of human rights as it was before the violation and involves: restoration of liberty, identity, family life and citizenship; return to the place of residence; restoration of employment; and return of property.Footnote 120

Satisfaction aims to recognise the illegal acts that resulted in human rights violations and can be both pecuniary and non-pecuniary.Footnote 121 Some examples of satisfaction encompass: ceasing violations; verifying and publicly disclosing the facts (if this does not contribute to double victimisation); the search for the disappeared or killed (in alignment with the victims' wishes); an official declaration or judicial decision restoring the victim's dignity, reputation and rights; judicial and administrative sanctions against those liable; tributes to the victims; and the inclusion of violations in training and educational material.

Rehabilitation aims to ensure access to legal, medical and social services, including psychological support.Footnote 122 Compensation, similarly to satisfaction, can also be pecuniary or non-pecuniary and aims to repair any economically quantifiable harm. Such harm encompasses: physical or mental harm; lost opportunities, including employment, education and social benefits; material damages and loss of earnings, including potential earnings; moral damages; and costs deriving from legal, medical and social services, including psychological services.Footnote 123

Finally, guarantees of non-repetition of harm should include: protecting human rights defenders; providing, on a priority and continued basis, human rights education; ensuring the observance of internal codes of conduct; promoting mechanisms for preventing and monitoring social conflicts; and reviewing and reforming terms of service contributing to or allowing gross human rights violations.Footnote 124

V. European Framework: Online Platforms Remedial Responsibilities for Criminal Hate Speech

This section examines the challenges with the current European framework on remedial responsibilities of online platforms which caused or contributed to criminal hate speech, including gross human rights violations. After that, this section proposes standards to clarify and strengthen this framework by exploring the modes of responsibility, remedial processes and three remedial outcomes.

A. Challenges with Current Framework

This section studies the general framework on corporate remedial responsibilities in the EU CSDDD and AI Act, the remedial responsibilities in the DSA, and the remedial responsibilities of online platforms in European sector-specific instruments on hate speech.

Corporate remedial responsibilities in the EU

The general legal framework on corporate remedial responsibilities in the EU stems from two instruments, i.e., the Corporate Sustainability Due Diligence Directive (CSDDD) and the Artificial Intelligence Act (AI Act). This framework applies to online platforms as these employ AI algorithms for content moderation.

The CSDDD seeks to ensure that businesses respect human rights within their operations and supply chains.Footnote 125 To achieve this goal, the CSDDD builds on the corporate human rights responsibilities framework established in the UNGPs and operationalised in the OECD Guidelines, restating the corporate responsibilities to inter alia provide remedial mechanisms for negative human rights and environmental impacts caused by their operations, their subsidiaries and their value chains.Footnote 126 The preamble of the CSDDD expands on the businesses' responsibilities to prioritise preventing and mitigating, ceasing, minimising and remediating actual or potential adverse human rights impacts.Footnote 127 Furthermore, the CSDDD recognises the need to 'ensure that those affected by a failure to respect this duty have access to justice and legal remedies'.Footnote 128

Nevertheless, the CSDDD fails to reflect the UNGPs' specific standards on remedial processes (i.e., the importance of creating operational-level grievance mechanisms and adequate, legitimate and impartial remedial processes) and on remedial outcomes (i.e., restitution, satisfaction, compensation, rehabilitation and guarantees of non-repetition). The CSDDD allows EU Member States the discretion to decide the means to reach the binding goals that it prescribes. As a result, in transposing this directive domestically, some States may decide to fully develop the corporate remedial responsibilities in alignment with the UNGPs. Be that as it may, it should be acknowledged that the CSDDD advances the framework on proactive measures that corporations need to take in order to prevent adverse human rights impacts. Nevertheless, noting that the official text of the CSDDD will likely be subject to cutbacks as part of the Omnibus Simplification Package, it is crucial to revisit this analysis following the final official changes implemented as a result of this process.Footnote 129

The AI Act prescribes legally binding means to ensure that AI systems respect EU fundamental rights, while fostering investment and innovation.Footnote 130 The AI Act reflects the UNGPs and CSDDD overall standard on corporate human rights remedial responsibilities in two ways. First, it explains which AI systems do not comply with fundamental rights and are, therefore, prohibited. Art. 5 of the AI Act prohibits AI systems that deploy subliminal techniques capable of distorting a person’s behaviour in a manner that causes or is likely to cause physical or psychological harm.Footnote 131

Applied to online platforms, this provision undoubtedly prohibits them from employing algorithms that amplify hate speech. Second, the AI Act prescribes a fundamental rights risk assessment framework to evaluate potential risks caused by AI systems.Footnote 132 This risk assessment aligns with the UNGPs' corporate human rights due diligence and remedial processes, which require businesses to adopt processes to identify potential adverse human rights impacts.Footnote 133 Nevertheless, although the AI Act expands on the risk assessment more than the CSDDD, it similarly does not prescribe a comprehensive corporate remedial framework encompassing standards on the remedial processes and outcomes to be achieved.

Remedial responsibilities in the Digital Services Act

The Digital Services Act (DSA) seeks to prevent illegal and harmful content online by regulating the human rights responsibilitiesFootnote 134 and liabilityFootnote 135 regimes of internet intermediary services operating within the EU. The conceptualisation of internet intermediaries includes online platforms,Footnote 136 i.e., hosting services which store and disseminate to the public information produced by their users.Footnote 137

The DSA prescribes different human rights responsibilities depending on the business' role, size and impact.Footnote 138 Within the category of online platforms, the DSA attributes heightened human rights responsibilities to very large online platforms (VLOPs), i.e., those with 45 million or more EU users per month.Footnote 139 In this context, VLOPs should identify, assess and mitigate systemic risks, including negative effects on the exercise of fundamental rights.Footnote 140 Notably, hate speech is explicitly referred to as a systemic risk classified as illegal content in the EU.Footnote 141

Reviewing the DSA framework on corporate remedial responsibilities, it is possible to conclude that the DSA does not provide a comprehensive approach to corporate modes of responsibilities, remedial processes or remedial outcomes.

Firstly, the DSA does not clearly reflect the general UNGPs standards on the modes of corporate remedial responsibility. Although Chapter II of the DSA regulates the liability regimes of internet intermediaries, it does not clarify that online platforms causing or contributing to adverse human rights impacts bear remedial responsibilities in line with the corporate responsibility framework articulated in the UNGPs.Footnote 142 In another example, Art. 36 of the DSA prescribes that VLOPs must comply with specific crisis response measures in times of extraordinarily serious threats to public security or public health in the EU, with the purpose of preventing, eliminating or limiting said serious threats.Footnote 143 While this wording could be interpreted to reflect Principle 22 of the UNGPs, this link is not expressly mentioned. Moreover, Art. 36 of the DSA seems to apply only to VLOPs and only in times of crisis, disregarding the ongoing nature of the remedial responsibilities of all businesses regardless of size or crisis context.

Secondly, the DSA does not clearly expand on the general UNGPs standards on remedial processes. To clarify, the DSA refers to remedy as: (i) the right to seek judicial remediesFootnote 144; (ii) an interim non-judicial measure to ensure effective investigation of infringements and enforcement or to prevent future infringementsFootnote 145; and (iii) an out-of-court dispute settlement for human rights infringements.Footnote 146 These elements seem to broadly reflect, respectively: (i) the State's obligation to ensure the right to an effective remedy; (ii) an operational-level grievance mechanism; and (iii) the legitimacy requirement for a non-judicial remedial process. However, these mechanisms require effective, impartial and legitimate implementation and oversight. For example, concerns arise as to whether an out-of-court mechanism not empowered to impose binding decisions will provide access to an effective remedy.Footnote 147 This discussion is further elaborated in Section V.B, 'Regulatory and administrative oversight'.

Thirdly, the DSA does not address the general UNGPs standards on remedial outcomes. In this context, the DSA missed an opportunity to provide harmonised guidance and steer the discussion on the remedial outcomes best suited to online harms caused or contributed to by online platforms, including hate speech, whether criminal or not.

As a result, the DSA alone does not articulate a solid or comprehensive framework on the corporate human rights remedial responsibilities of internet intermediaries, including online platforms. This article argues that, similarly to the UNGPs, remedial responsibilities, processes and outcomes ought to have been addressed in the DSA together, either under Chapter II after the liability provisions or in a separate chapter on remedial responsibilities. Furthermore, in the context of hate speech, this article argues that the DSA should have clarified that online platforms, with a particular emphasis on VLOPs due to their systemic risks, which caused or substantially contributed to criminal hate speech have to comply with corporate remedial responsibilities. These corporate remedial responsibilities are heightened in the case of criminal hate speech amounting to gross human rights violations. A way to promote legal coherence between the DSA and the corporate human rights responsibilities framework is to read the DSA in conjunction with the CSDDD; this approach is developed in Section V.B.Footnote 148

Supplementary Corporate Remedial Frameworks for Online Hate Speech

At the European level, one legal instrument and two sets of policy instruments complement the corporate remedial framework in the DSA applicable to hate speech on online platforms: the 2018-revised AVMSD, the Code of Conduct on countering illegal hate speech online, and Recommendations CM/Rec(2022)16 and CM/Rec(2014)6.

The 2018-revised AVMSD prescribes the State's obligation to regulate inter alia video-sharing platforms with the goals of protecting children and consumers, combating racial and religious hatred and safeguarding media pluralism.Footnote 149 In the AVMSD, video-sharing platforms include online platforms disseminating user-generated videos with the purpose of informing, entertaining or educating, and where the organisation of content is determined by the video-sharing platform.Footnote 150 Art. 28b of the AVMSD addresses businesses directly and establishes the corporate human rights responsibilities of video-sharing platforms to moderate content.Footnote 151 Regarding remedial processes, Art. 28b(3)(i) clarifies that video-sharing platforms should establish 'easy-to-use' complaints mechanisms.Footnote 152 Regarding remedial outcomes, the 2010 version of the AVMSD had included a specific remedial outcome for audiovisual media services, i.e., the right of reply.Footnote 153 Nevertheless, the 2018-revised AVMSD did not clarify whether this provision applies to video-sharing platforms.Footnote 154

The Code of Conduct on countering illegal hate speech online was agreed upon in 2016 between the European Commission and internet intermediaries, some of which qualify as VLOPs under the DSA.Footnote 155 This co-regulatory instrument establishes minimum transparency requirements for content moderation aimed at countering online hate speech, which include clear communication to users regarding the processes to notify, review and request the removal of hate speech. Nevertheless, similarly to the DSA, the Code of Conduct does not provide a comprehensive framework on the corporate remedial responsibilities, processes or outcomes required of online platforms which have caused or contributed to hate speech.

CM/Rec(2022)16 reiterates the right to an effective remedy,Footnote 156 and clarifies that remedial processes should be accessible through civil, administrative, and out-of-court mechanisms.Footnote 157 Additionally, CM/Rec(2022)16 explains that some of the most adequate remedial outcomes for online hate speech include: compensation, deletion, blocking, injunctive relief, publication of an acknowledgment that a post constituted hate speech, fines and loss of licence.Footnote 158 Reviewing the CM/Rec(2022)16 corporate remedial standards against the UNGPs, it becomes clear that, though it expands on remedial processes and outcomes, CM/Rec(2022)16 missed an opportunity to distinguish between the State’s duty to ensure access to the right to remedy and the corporate remedial responsibilities of online platforms.

CM/Rec(2014)6 elaborates on the human rights of internet users and advances that, for criminal acts committed online, the most effective remedies include inter alia an inquiry, an explanation by the service provider, the possibility to reply, reinstatement of user-created content, reconnection to the Internet and compensation.Footnote 159 Similarly to CM/Rec(2022)16, this is an important analysis of the suitability of remedial outcomes for online criminal acts, which sheds light on the application of the UNGPs' remedial framework to online platforms.

Overall, despite occasional references in the European regulatory framework to the corporate remedial responsibilities of online platforms, these instruments lack a comprehensive approach to the framework on corporate remedial responsibilities, processes and required outcomes for online platforms that caused or contributed to criminal hate speech.

B. Proposed Standards for a Comprehensive European Framework

This section proposes standards to address existing loopholes and outlines a comprehensive framework on corporate human rights remedial responsibilities in Europe. Firstly, it expands on the complementarity between the DSA and the CSDDD. Secondly, it develops concepts around the types of regulatory or administrative oversight. Thirdly, this section advances avenues to clarify the application of the DSA to remedial responsibilities by exploring the modes of responsibility, remedial processes and remedial outcomes applicable to online platforms which caused or contributed to criminal hate speech. Finally, this section delves deeper into three types of remedial outcomes and how these could apply to cases of hate speech disseminated by platforms: restitution and satisfaction; compensation and rehabilitation; and guarantees of non-repetition.

Complementarity between the DSA and the CSDDD

The DSA should be read in tandem with the CSDDD to ensure an adequate framing of corporate remedial responsibilities in Europe. Since both instruments embody concepts of human rights due diligence, a joint analysis of the DSA and the CSDDD strengthens the applicable European corporate remedial responsibilities framework. Though the DSA does not clearly identify the modes of responsibility, the legal requirements for remedial processes or the applicable remedial outcomes for online hate speech, this instrument does effectively prescribe corporate human rights responsibilities for internet intermediaries, including online platforms, to counter potential or actual adverse impacts on human rights.

Additionally, the CSDDD expands in its Preamble on crucial concepts pertaining to the corporate remedial framework. This instrument starts by clarifying the corporate responsibility to remediate when a business caused or jointly caused an adverse impact on human rights, as well as the corporate responsibility to use its leverage to influence business partners to prevent or mitigate adverse human rights impacts.Footnote 160

In this context, the CSDDD could have clarified that, following the UNGPs, remediation is due when a business caused or contributed to an actual adverse impact on human rights, and that the responsibility to use leverage applies in cases of potential avoidable harm or when a company is directly linked (without meeting the legal threshold of having caused or contributed to)Footnote 161 to adverse human rights impacts.Footnote 162 Notwithstanding, this instrument clarifies that companies must receive complaints about both actual and potential human rights harms and that the complaint mechanisms must align with UN Guiding Principle 31 in that these mechanisms must be fair, accessible, publicly available, predictable and transparent.Footnote 163

The CSDDD also expands on remedial outcomes clarifying that remediation may be financial or non-financial and that its goal must be to restore affected individuals to a state as close as possible to that before the harm took place.Footnote 164 Hence, a joint analysis of both the DSA and the originally adopted text of the CSDDD results in a more solid framing of the corporate due diligence remedial responsibilities.

Nevertheless, this article recognises that significant revisions to the CSDDD are being discussed as part of the Omnibus Simplification Package and that any curtailments on the due diligence responsibilities deriving from the ongoing negotiations will lead to an increased necessity to clarify the EU corporate remedial responsibilities framework.Footnote 165 In the context of online platforms, this means that ensuring an adequate alignment of the DSA with the international corporate remedial responsibilities framework stemming from the UNGPs will remain an effective pathway to strengthen the applicable European corporate remedial framework.

Regulatory and Administrative Oversight

Another aspect that could strengthen the European corporate remedial framework is linked to the regulatory and administrative oversight of the provisions in the DSA and in the CSDDD. Remedy can be pursued directly with the company, through a non-judicial out-of-court mechanism, and through judicial proceedings.

In the context of the DSA, the National Digital Services CoordinatorsFootnote 166 will assume the role of independent administrative authorities key to monitoring, supervising and enforcing the provisions of the DSA. Hence, the National Digital Services Coordinators will also take on the role of ensuring adequate access to an effective remedy, for example by assessing whether the 'out-of-court mechanisms' do provide access to an effective remedy.Footnote 167

National courts are another actor central to ensuring access to an effective remedy. Domestically, the judiciary will have the task of determining whether the implementation of the extra-judicial 'out-of-court mechanism' has adequately enforced the right to remedy.

It should be noted that the out-of-court mechanism represents only one type of available remedy, as affected individuals can always file judicial claims with domestic courts. The CSDDD confirms that affected stakeholders should not be required to file complaints with the company before seeking remediation through judicial or non-judicial mechanisms.Footnote 168 Similarly, affected stakeholders are not required to exhaust non-judicial mechanisms before lodging their remediation claims with the national courts.Footnote 169

The CSDDD also provides detailed guidance for situations where a company fails to remediate. In such cases, national authorities have an obligation to order remediation, acting either on their own initiative or on substantiated concerns raised under the CSDDD.Footnote 170 Thus, remedies can be imposed nationally or EU-wide to complement those already existing in domestic jurisdictions. Finally, the CSDDD also clarifies that remedial measures do not preclude penalties or civil liability under national law.

Clarifications in the DSA

Despite the clarifications resulting from a joint analysis of the DSA and the CSDDD, this article suggests that the DSA could further elaborate on three remedial aspects: modes of responsibility, remedial processes and remedial outcomes. These standards build on the general framework on corporate remedial responsibilities stemming from the UNGPs, which was given enforceable form in Europe through the CSDDD.

Regarding the modes of responsibility, the European regulatory framework should clarify, in a consistent manner, that online platforms, with an emphasis on VLOPs as per the DSA, which caused or contributed to adverse human rights impacts are responsible for providing remediation. Hence, this remedial responsibility applies to online platforms which caused or contributed to criminal hate speech. This can be achieved, for example, through the development of an additional chapter in the DSA. The clarification of the modes of responsibility is all the more important in cases where the online platform caused or contributed to criminal hate speech amounting to gross violations of human rights. For cases where the online platform is directly linked to the actual or potential but avoidable dissemination of criminal hate speech, it should use its leverage to prevent or mitigate said criminal hate speech.

Vis-à-vis the remedial processes, the European regulatory framework should clarify that remedial processes ought to be legitimate, prompt and impartial in addressing the adverse human rights impacts, including the dissemination of criminal hate speech on online platforms. Though the DSA standardizes operational-level grievance mechanisms such as internal appeals and transparency standards, European legislators should ensure that remedial processes apply human rights standards rather than terms and conditions privately decided by online platforms, which are often misaligned with human rights.

The European regulatory framework fails to establish a clear and comprehensive approach to the corporate remedial outcomes required of online platforms which caused or contributed to adverse human rights impacts, including cases of criminal hate speech and of criminal hate speech amounting to gross human rights violations. The following subsections explore the suitability of remedial outcomesFootnote 171 by building on the framework of remedial outcomes for criminal acts online. The proposed remedial outcomes include: restitution and satisfaction as amplification of survivors’ speech; compensation and rehabilitation beyond the area of services; and guarantees of non-repetition as changes to business models. These remedial outcomes could be imposed by the European Commission as interim non-judicial measures applicable to online platforms which caused or contributed to criminal hate speech.Footnote 172

For the overall operationalization of these standards, this article recommends that the European Commission issue detailed guidance on Art. 21 of the DSA which expands on the corporate remedial responsibilities elaborated in the Preamble of the CSDDDFootnote 173 and further developed in the UNGPs corporate human rights remedial responsibilities framework. Such guidance should explicitly clarify the modes of responsibility, remedial processes and remedial outcomes suitable to effectively and promptly remediate people harmed by criminal hate speech disseminated by online platforms. These standards are all the more urgent to clarify for VLOPs as per the DSA, and for cases of criminal hate speech amounting to gross violations of human rights. Furthermore, noting the broadly discussed ‘Brussels effect’,Footnote 174 whereby the European Union’s policy and regulatory framework can have wide-reaching impact beyond the European Union’s borders, it is particularly relevant for the European Commission to issue clarifying guidance on how to strengthen the DSA’s connection with the corporate human rights due diligence remedial responsibilities.

Restitution and Satisfaction as Amplification of Survivors’ Speech

Online platforms which caused or contributed to criminal hate speech must provide restitution as a means of restoring, to the extent possible, the enjoyment of the human rights that were adversely impacted. In compliance with the standards on satisfaction, businesses must recognize the acts that violated international law and restore the survivors’ dignity.Footnote 175

Though a vast array of harms results from human rights violations in these cases,Footnote 176 this section proposes a remedy for the specific harm of constrained online participation. Some of the most commonly reported harms resulting from online hate speech (and even more so from criminal hate speech) are the disempowerment, silencing and, ultimately, disengagement of targeted communities from online platforms.Footnote 177

A remedy for the constrained participation of communities targeted by hate speech is the speaking back capabilities framework advanced by Gelber.Footnote 178 In this framework, Gelber contends that policy and legal approaches should support people targeted by hate speech who wish to respond to it.Footnote 179 This direct engagement in the response process is conceptualized as the empowering act which enables communities targeted by hate speech to overcome the oppression and harm of constrained participation.Footnote 180 Gelber explains that this framework can result in policies of affirmative speech, in which actors that enabled and hosted hate speech should likewise facilitate the response and counter-narratives.Footnote 181

This article expands on Gelber’s speaking back framework by applying it to the context of online platforms. Importantly, it is widely documented that the harm caused by hate speech is aggravated by online platforms when their algorithms demote counter-narratives.Footnote 182 In this context, this article suggests that online platforms which caused or contributed to criminal hate speech should, as an effective restitution remedy, introduce affirmative speech policies in their content ranking, moderation and recommendation algorithms.

As a result, for a given period, online platforms should amplify survivors’ speech through their content ranking algorithms. Similarly, online platforms should deploy content moderation algorithms that specifically detect and apply higher scrutiny to hate speech posts targeting marginalized communities, with the goal of avoiding double victimization. Finally, to ensure the reconnection of marginalized people as groups, online platforms should adopt affirmative speech policies in their link recommendation algorithms by purposefully, for a given period, connecting people marginalized and targeted by such criminal hate speech.
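To make the proposal more concrete, the following minimal sketch illustrates how an affirmative speech policy could be expressed inside a ranking function. It is purely illustrative: the data structure, the remediation window and the boost multipliers are hypothetical assumptions introduced for this example, and real values would need to be determined through the remedial process itself rather than taken from this sketch.

```python
from dataclasses import dataclass
import datetime as dt


@dataclass
class Post:
    post_id: str
    base_score: float               # the platform's ordinary relevance score
    is_counter_speech: bool         # flagged as survivor or counter-narrative content
    author_in_targeted_group: bool


# Hypothetical remediation window and multipliers (illustrative values only).
REMEDIATION_START = dt.date(2025, 1, 1)
REMEDIATION_END = dt.date(2025, 6, 30)
COUNTER_SPEECH_BOOST = 1.5
TARGETED_AUTHOR_BOOST = 1.2


def affirmative_rank_score(post: Post, today: dt.date) -> float:
    """During the remediation window, amplify counter-narratives and posts
    authored by members of the targeted community."""
    score = post.base_score
    if REMEDIATION_START <= today <= REMEDIATION_END:
        if post.is_counter_speech:
            score *= COUNTER_SPEECH_BOOST
        if post.author_in_targeted_group:
            score *= TARGETED_AUTHOR_BOOST
    return score


def rank_feed(posts: list[Post], today: dt.date) -> list[Post]:
    # Order the feed by the adjusted score, highest first.
    return sorted(posts, key=lambda p: affirmative_rank_score(p, today), reverse=True)


if __name__ == "__main__":
    feed = [
        Post("1", 0.80, False, False),
        Post("2", 0.60, True, True),   # counter-speech by a targeted user
    ]
    for post in rank_feed(feed, dt.date(2025, 3, 1)):
        print(post.post_id, round(affirmative_rank_score(post, dt.date(2025, 3, 1)), 2))
```

The time-limited multiplier reflects the restitutive nature of the measure: the amplification is tied to the remedial period rather than becoming a permanent feature of the ranking system.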

Compensation and Rehabilitation Beyond the Area of Services

Online platforms which caused or contributed to criminal hate speech should remediate psychological, physical and material harms through rehabilitation and compensation. This overarching remedial responsibility clarifies that online platforms are responsible for remediating survivors beyond their area of services.

For cases of criminal hate speech amounting to gross human rights violations, online platforms are explicitly required to ensure access to, and, importantly, pay for, medical and psychological services as part of rehabilitation and compensation. Moreover, in these cases, the conceptualization of victims expressly includes not only the directly affected persons but also others who are closely related to them. Finally, the European Commission may impose asset freezes on online platforms which caused or contributed to criminal hate speech amounting to gross human rights violations.Footnote 183

Applying these remedies to the example of Meta’s significant contribution to the genocide of the Rohingya in Myanmar, it becomes clear that Meta has the corporate remedial responsibility to allocate funds to compensate and rehabilitate beyond its area of services. This responsibility should address material harms, including lost opportunities such as limited access to employment or education. This could be achieved through a class action remedy in which the company makes funds available to the affected community as part of a settlement agreement.

Guarantees of Non-Repetition Requiring Changes to Business Models

Many online platforms have adopted business models in which content moderation, ranking and recommendation algorithms are designed and deployed to maximize profit and user engagement, often at the expense of human rights.Footnote 184 All online platforms have the corporate human rights responsibility to identify, prevent, mitigate and remediate adverse human rights impacts.Footnote 185 Online platforms which caused or contributed to adverse human rights impacts, such as criminal hate speech, have a heightened responsibility to remediate, including by adopting guarantees of non-repetition.

This article proposes the operationalization of guarantees of non-repetition premised on a change of business models and grounded in two main elements: (1) enforcing content moderation, ranking and recommendation algorithms based on human rights standards; and (2) enforcing the alignment of the terms of service with international human rights standards on the conceptualization of criminal hate speech and with the corporate human rights responsibilities framework in the UNGPs.

First, online platforms should ensure that their content moderation algorithms remove criminal hate speech. Notably, as per Art. 5(1)(a) of the AI Act, online platforms are prohibited from deploying algorithms that are likely to lead to violence, as is the case with criminal hate speech. A key provision for verifying compliance with these responsibilities is Art. 40 of the DSA, which enables researchers to access data from VLOPs to investigate the impact of algorithms on systemic risks, including hate speech. In this context, this article suggests that, when assessing compliance with Art. 5 of the AI Act (in non-judicial or judicial actions), the burden of proof should be inverted so that online platforms are required to prove that they did not cause or contribute to criminal hate speech.Footnote 186 Though this inversion of the burden of proof is not clarified within the CSDDD, this article proposes it as a key means of ensuring that online platforms comply with their duty of care.Footnote 187

Regarding ranking and recommendation algorithms,Footnote 188 this article builds on two contextual variables used by the ECtHR to assess the severity of hate speech,Footnote 189 namely the political and social background and the speaker’s status or role in society, to suggest a tighter framework for monitoring criminal hate speech. This article suggests that, as a minimum legal standard, especially during times of conflict or elections, online platforms should proactively monitor users and posts whose level of engagement exceeds a certain risk threshold. The notion of engagement level expands on Gelber’s authority framework, whereby the authority of a given speech act is relevant to analyzing its capability to harm.Footnote 190 In this context, the engagement level corresponds to the notion of authority and could track two parameters: the number of followers of a given user and the number of reactions (e.g., reposts, comments) to a given post.
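As a rough illustration, the sketch below shows how such an engagement-level trigger could be operationalized. The thresholds, field names and the stricter screening factor applied during conflicts or elections are all hypothetical assumptions introduced for this example; actual values would have to be set through the platform’s risk assessments and regulatory oversight, not taken from this sketch.

```python
from dataclasses import dataclass

# Illustrative thresholds only; not derived from the DSA, the AI Act or case law.
FOLLOWER_THRESHOLD = 100_000
REACTION_THRESHOLD = 5_000
HEIGHTENED_SCREENING_FACTOR = 10   # tighter screening during conflicts or elections


@dataclass
class PostSignal:
    post_id: str
    author_followers: int
    reactions: int                 # reposts and comments, as a proxy for engagement


def requires_priority_review(signal: PostSignal, heightened_context: bool) -> bool:
    """Flag posts whose engagement level (a proxy for the notion of authority)
    exceeds the monitoring threshold, with stricter screening in heightened
    contexts such as conflicts or elections."""
    follower_limit = FOLLOWER_THRESHOLD
    reaction_limit = REACTION_THRESHOLD
    if heightened_context:
        follower_limit //= HEIGHTENED_SCREENING_FACTOR
        reaction_limit //= HEIGHTENED_SCREENING_FACTOR
    return signal.author_followers >= follower_limit or signal.reactions >= reaction_limit


if __name__ == "__main__":
    post = PostSignal("p-42", author_followers=25_000, reactions=800)
    # Below both thresholds in ordinary times; flagged during a conflict or election.
    print(requires_priority_review(post, heightened_context=False))  # False
    print(requires_priority_review(post, heightened_context=True))   # True
```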

Second, online platforms should reflect the corporate human rights responsibilities framework in their terms of service, as instructed in the UNGPs and in the CSDDD, including by adopting a conceptualization of criminal hate speech that aligns with international human rights standards.Footnote 191 Furthermore, online platforms should transparently inform users about the proposed content moderation, ranking and recommendation standards, as well as their tighter contextual application during conflicts or elections. Finally, as a minimum legal standard, after detecting criminal hate speech, online platforms should be required to archive such content for future criminal investigations.Footnote 192
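The archiving requirement could be operationalized as a simple evidentiary record created at the moment of detection. The minimal sketch below, assuming hypothetical field names and a hypothetical detection label, preserves a timestamp and a cryptographic hash so that removed content remains verifiable and available to competent authorities.

```python
import hashlib
import json
import datetime as dt


def archive_detected_content(post_id: str, content: str, detection_label: str) -> dict:
    """Create an evidentiary record for content classified as criminal hate speech,
    so that the material remains available for criminal investigations after removal."""
    return {
        "post_id": post_id,
        "detected_at": dt.datetime.now(dt.timezone.utc).isoformat(),
        "detection_label": detection_label,  # e.g., "incitement_to_violence"
        "content": content,                  # retained under strict access controls
        "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
    }


if __name__ == "__main__":
    record = archive_detected_content("p-7", "example of removed content", "incitement_to_violence")
    print(json.dumps(record, indent=2))
```

Retention periods, access controls and safeguards against misuse, including the risks noted in footnote 192, would need to be defined by law rather than left to the platform alone.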

VI. Conclusion

This article addresses the key challenge of the lack of legal clarity about the corporate remedial responsibilities of online platforms that caused or contributed to criminal hate speech. The research question is two-fold: To ensure the right to an effective remedy, how can European legislators better align the legal framework on the corporate remedial responsibilities of online platforms which caused or contributed to criminal hate speech with the general framework on corporate remedial responsibilities? Additionally, are there heightened remedial responsibilities for very large online platforms or for cases of criminal hate speech amounting to gross violations of human rights?

By building upon the European conceptualisation of criminal hate speech, the European standards on the right to an effective remedy and the general framework of corporate human rights responsibilities, this article proposes three legal avenues for the European legislators to clarify the framework on corporate remedial responsibilities.

First, it is important to clarify that the individual right to an effective remedy results not only in a State obligation to ensure the exercise of said right, but also in direct corporate remedial responsibilities. Second, the corporate remedial responsibilities framework must address the modes of responsibility, the remedial processes and the remedial outcomes. Third, the corporate remedial outcomes must be tailored to address the specific harms caused by criminal hate speech online through content moderation, ranking and recommendation algorithms.

Delving deeper into the most effective remedial outcomes for criminal hate speech, this article suggests the amplification of survivors’ speech as a means to redress the harm of constrained participation. For the remaining harms, online platforms should compensate and rehabilitate beyond their area of services. Finally, this article suggests that the only way in which online platforms can remediate through guarantees of non-repetition is by ensuring that their business models prioritize human rights over profit.

The standards proposed in this article on corporate remedial responsibilities apply to online platforms, with increased corporate human rights responsibilities for VLOPs and for platforms which caused or contributed to criminal hate speech amounting to gross violations of human rights. These suggested legal avenues apply first and foremost to the European context, given the existing regulatory framework clarifying the conceptualization of criminal hate speech, particularly since the adoption of CM/Rec(2022)16.

Importantly, interventions to counter criminal hate speech on online platforms should not be solely legalistic, nor should they rely only on remedy after the adverse impact on human rights has occurred. There should also be structural changes to address power imbalances and systems of privilege, namely through education, representation and the overall regulation of the private sector requiring the prioritization of human rights.

Acknowledgements

The author would like to thank Simone van der Hof and Tarlach McGonagle for their insightful comments, as well as Katharine Gelber for the thought-provoking discussions during the research visit at the School of Political Science and International Studies of the University of Queensland, Brisbane, Australia. This research is the fourth article of the author’s PhD thesis.

Financial support

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 861047 for the NETHATE project.

Competing interests

The author declares none.

References

1 Online platforms as per the DSA (also referred to as social media companies). This article employs ‘businesses’ and ‘companies’ interchangeably and assumes that online platforms fall under these categories.

2 Isabel Debre and Fares Akram, ‘Facebook’s Language Gaps Weaken Screening of Hate, Terrorism’ (2021), https://apnews.com/article/the-facebook-papers-language-moderation-problems-392cb2d065f81980713f37384d07e61f?utm_campaign=SocialFlow&utm_source=Twitter&utm_medium=AP (accessed 28 May 2024).

3 Amnesty International, ‘Myanmar: The Social Atrocity: Meta and the Right to Remedy for the Rohingya’ (2022), https://www.amnesty.org/en/documents/ASA16/5933/2022/en/ (accessed 28 May 2024); Human Rights Council, ‘Report of the Independent International Fact-Finding Mission on Myanmar’ A/HRC/39/64 (2018), https://www.ohchr.org/en/press-releases/2018/09/myanmar-un-fact-finding-mission-releases-its-full-account-massive-violations?LangID=E&NewsID=23575 (accessed 28 May 2024), para 74.

4 Rachel Griffin, ‘The Law and Political Economy of Online Visibility. Market Justice in the Digital Services Act’ (2023) 2023 Technology and Regulation 69–79. See also Al Jazeera, ‘The Listening Post: Genocide in Gaza: Enabled by AI, Powered by Big Tech’ (2024), https://www.aljazeera.com/program/the-listening-post/2024/4/13/genocide-in-gaza-enabled-by-ai-powered-by-big-tech (accessed 30 May 2024).

5 UN Human Rights Council, ‘Report of the Special Representative of the Secretary-General on the Issue of Human Rights and Transnational Corporations and Other Business Enterprises, John Ruggie’, A/HRC/17/31 (UNGPs) (2011).

6 UNGPs, note 5, Principle 15.

7 European Union, Directive (EU) 2024/1760 of the European Parliament and of the Council of 13 June 2024 on Corporate Sustainability due Diligence and Amending Directive (EU) 2019/1937 and Regulation (EU) 2023/2859, https://eur-lex.europa.eu/eli/dir/2024/1760/oj/eng (accessed 12 August 2025). The CSDDD was adopted by the EU Council on 24 May 2024 and entered into force on 25 July 2024. However, on 26 February 2025, the European Commission presented the ‘First Omnibus Package’ proposal which aims to scale back not just the CSDDD, but also the Corporate Sustainability Reporting Directive (CSRD). For further information on this, see Gibson Dunn, ‘Omnibus Simplification Package Proposed by the EU Commission: Scaling Back Sustainability Reporting and Due Diligence Obligations’, Client Alert (28 February 2025), https://www.gibsondunn.com/omnibus-simplification-package-proposed-by-eu-commission-scaling-back-sustainability-reporting-and-due-diligence-obligations/ (accessed 12 August 2025). On 18 June 2025, a draft report by the European Parliament’s Rapporteur revealed additional reductions on the CSDDD and the CSRD. See Gibson Dunn, ‘EU Omnibus Simplification Package Update’, Client Alert (18 June 2025), https://www.gibsondunn.com/eu-omnibus-simplification-package-european-parliament-rapporteur-proposes-further-cutbacks-to-sustainability-reporting/ (accessed 12 August 2025). The analysis in this article follows the originally adopted text of the CSDDD from 24 May 2024. Nevertheless, it is acknowledged that significant revisions to the CSDDD are being discussed as part of the Omnibus Simplification Package and that any scaling back of the due diligence responsibilities deriving from these ongoing negotiations will result in an increased necessity to clarify the corporate remedial responsibilities of online platforms in consistency with the international corporate remedial responsibilities framework stemming from the UNGPs.

8 European Union, Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence and amending certain Union legislative acts COM(2021) 206 final (AI Act), https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf (accessed 28 May 2024).

9 European Union, Regulation of the European Parliament and of the Council on a Single Market for Digital Services and amending Directive 2000/31/EC (DSA), art 93.

10 Directive 2010/13/EU of the European Parliament and of the Council of 10 March 2010 on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (AVMSD), OJ L 95.

11 This article recognizes the civil society arguments against legal expressions patronizing the agency of marginalized people and thus avoids the use of ‘victims’ and ‘protected characteristics’, and uses instead people targeted by hate speech.

12 DSA, note 9, art 41.

13 Steven Greer, Janneke Gerards and Rose Slowe, Human Rights in the Council of Europe and the European Union: Achievements, Trends and Challenges, 1st edn. (Cambridge University Press, 2018). https://doi.org/10.1017/9781139179041.

14 European Commission, ‘No Place for Hate in Europe. Commission and High Representative Launch Call to Action to Unite Against All Forms of Hatred’ (2023), https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6329 (accessed 28 May 2024).

15 Council of Europe Committee of Ministers, Recommendation CM/Rec(2022)16 of the Committee of Ministers to member States on combating hate speech (CM/Rec(2022)16).

16 Council of Europe, European Convention for the Protection of Human Rights and Fundamental Freedoms, as amended by Protocols Nos. 11 and 14, ETS 5, 4 November 1950.

17 European Union, Charter of Fundamental Rights of the European Union (2007/C 303/01), C 303/1, 14 December 2007.

18 European Union: Council of the European Union, Directive 2012/29/EU of the European Parliament and of the Council of October 2012 establishing minimum standards on the rights, support and protection of victims of crime, and replacing Council Framework Decision 2001/220/JHA, L 315/57, 14 November 2012.

19 OECD, ‘OECD Guidelines for Multinational Enterprises’ (2011); OECD, ‘OECD Due Diligence Guidance for Responsible Business Conduct’ (2018).

20 Council of Europe Committee of Ministers, Recommendation CM/Rec(2014)6 of the Committee of Ministers to member States on a Guide to human rights for Internet users (CM/Rec(2014)6).

21 CM/Rec(2022)16, note 15.

22 CM/Rec(2022)16, note 15, Explanatory memo, para 54.

23 Hereinafter, this article employs ‘criminal hate speech’ and ‘the most severe forms of hate speech’ interchangeably.

24 Such increased understanding of the criminal hate speech allows for an extended legal reasoning on the States’ positive obligations to protect people targeted by hate speech as well as on the corporate human rights responsibilities of online platforms required to counter online hate speech.

25 This article rejects theories of different human ‘races’; however, ‘race’ or ‘racialized’ are used as terms to expose a colonial process whereby a dominant group ascribes another a racial identity for the purpose of continued oppression.

26 CM/Rec(2022)16, note 15, para 11.

27 Tarlach McGonagle, Minority Rights, Freedom of Expression and of the Media: Dynamics and Dilemmas, School of Human Rights Research Series, Vol. 44. (Cambridge – Antwerp – Portland: Intersentia, 2011). Following the work of McGonagle, this article employs ‘impermissible grounds’ for hate speech as a way to refer to the traditionally called ‘protected characteristics’ from discrimination. Some of the most common characteristics protected from discrimination based on human rights standards on non-discrimination include race, ethnicity, nationality, sex, gender, religion, disability. This article recognizes that the expression ‘protected characteristics’ can be understood as a legal condescending term that undermines the agency of people historically or systematically oppressed and, thus, uses the expression ‘impermissible grounds’ in an effort to depart from such patronizing approach.

28 CM/Rec(2022)16, note 15, paras 2 and 11.

29 Katharine Gelber, ‘Differentiating Hate Speech: A Systemic Discrimination Approach’ (2021) 24:4 Critical Review of International Social and Political Philosophy 393–414, https://doi.org/10.1080/13698230.2019.1576006.

30 Eugenia Siapera and Paloma Viejo-Otero, ‘Governing Hate: Facebook and Digital Racism’ (2021) 22:2 Television & New Media 112–30.

31 European Commission, ‘A More Inclusive and Protective Europe: Extending the List of EU Crimes to Hate Speech and Hate Crime’ (2021), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021DC0777 (accessed 28 May 2024); art 83(1) of the TFEU specifies a list of areas of crime where the European Union legislators may establish minimum legal thresholds regarding the definition of criminal offences and sanctions applicable in all Member States of the EU.

32 Art 25(3)(e) of the Rome Statute criminalizes direct and public incitement of others to commit genocide. Mark Klamberg (ed.), Commentary on the Law of the International Criminal Court, Vol. 29 (Brussels: Torkel Opsahl Academic EPublisher, 2017) 775. See Neema Hakim, ‘How Social Media Companies Could Be Complicit in Incitement to Genocide’ (2020) 21 Chicago Journal of International Law 83.

33 Michael Luca, ‘User-Generated Content and Social Media’ in Handbook of Media Economics, Vol. 1 (North-Holland: Elsevier, 2015), 563–92.

34 Covering the management of both users’ accounts and users’ posts.

35 Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (Yale University Press, 2018).

36 The platforms typically require that users agree to the terms of service when creating an account. This article refers to terms and conditions and community guidelines interchangeably.

37 Eric Goldman, ‘Content Moderation Remedies’ (2021) 28:1 Michigan Technology Law Review 24; Eline Labey and Valentina Golunova, ‘Judges of Online Legality: Towards Effective User Redress in the Digital Environment’ in European Yearbook on Human Rights, 1st edn. (Intersentia, 2022) 105–35.

38 João Pedro Quintais, Naomi Appelman and Ronan Ó Fathaigh, ‘Using Terms and Conditions to Apply Fundamental Rights to Content Moderation’ (2023) 24:5 German Law Journal 881–911, https://doi.org/10.1017/glj.2023.53; Eva Nave and Lottie Lane, ‘Countering Online Hate Speech: How Does Human Rights Due Diligence Impact Terms of Service?’ (2023) 51 Computer Law & Security Review 105884, https://doi.org/10.1016/j.clsr.2023.105884.

39 E.g., CM/Rec(2022)16, para 23 recommends that Member States regulate the necessity that internet intermediaries explain a decision to block, take down or deprioritize certain content. However, it could have provided more detailed guidance for content moderation had it clarified the suitability of moderation outcomes depending on the severity of hate speech.

40 Gillespie, note 35.

41 Paddy Leerssen, ‘An End to Shadow Banning? Transparency Rights in the Digital Services Act Between Content Moderation and Curation’ (2023) 48 Computer Law & Security Review 105790, https://doi.org/10.1016/j.clsr.2023.105790.

42 Some online platforms have created dedicated websites explaining their content moderation practices. E.g., https://transparency.x.com/en.html for X (formerly Twitter), https://transparency.fb.com/en-gb/ for Meta, https://about.linkedin.com/transparency for LinkedIn (accessed 28 May 2024).

43 CM/Rec(2022)16, note 15, Preamble and Explanatory memo, para 86; European Commission, note 31.

44 Michel Rosenfeld, ‘Hate Speech in Constitutional Jurisprudence: A Comparative Analysis’, in Conference: The Inaugural Conference of the Floersheimer Center for Constitutional Democracy: Fundamentalisms, Equalities, and the Challenge to Tolerance in a Post-9/11 Environment, (2002) 24 Cardozo Law Review 1523, 1565. CM/Rec(2022)16, note 15, Explanatory memo, para 32.

45 Leroy v France, para 38; Delfi AS v Estonia, paras 142–6; Perinçek v Switzerland, para 205.

46 Féret v Belgium, no. 15615/07, 16 July 2009, para 63; General recommendation No. 35, Combating racist hate speech of the Committee on the Elimination of Racial Discrimination; the Rabat Plan of Action on the prohibition of advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence; and, the Guide on Article 10 of the ECHR, Freedom of expression, para 225.

47 Savva Terentyev v Russia, no. 10692/09, para 79; Delfi AS v Estonia, para 110; Stomakhin v Russia, no. 52273/07, 9 May 2018, para 131; and, Jersild v Denmark, paras 32–3.

48 Perinçek v Switzerland, para 205; Savva Terentyev v Russia, paras 32–3.

49 Vejdeland and Others v Sweden, no. 1813/07, 9 February 2012, paras 51–8; and Lilliendahl v Iceland, no. 29297/18, 11 June 2020, paras 38–9.

50 Budinova and Chaprazov v Bulgaria, para 63.

51 Binny Mathew, Ritam Dutt, Pawan Goyal and Animesh Mukherjee, ‘Spread of Hate Speech in Online Social Media’ (2019) Proceedings of the 10th ACM Conference on Web Science 173, https://doi.org/10.48550/arxiv.1812.01693 (accessed 28 May 2024).

52 Janice Asare, ‘Are Marginalized Communities Being Censored Online’, Forbes (2020), https://www.forbes.com/sites/janicegassam/2020/05/24/are-marginalized-communities-being-censored-online/ (accessed 28 May 2024). Furthermore, online platforms often outsource the traumatic human review in content moderation to already marginalized communities working under extremely precarious work conditions; e.g., Adrienne Williams, Milagros Miceli and Timnit Gebru, ‘The Exploited Labor Behind Artificial Intelligence’ (2022), https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/ (accessed 28 May 2024).

53 Larry Elliot, ‘Big Tech Firm Recklessly Pursuing Profits From AI, Says UN Head’, The Guardian (2024), https://www.theguardian.com/business/2024/jan/17/big-tech-firms-ai-un-antonio-guterres-davos (accessed 28 May 2024); Alyan Layug, Samiksha Krishnamurthy, Rachel McKenzie and Bo Feng, ‘The Impacts of Social Media Use and Online Racial Discrimination on Asian American Mental Health: Cross-Sectional Survey in the United States During COVID-19’ (2022) 6:9 JMIR Formative Research e38589, https://doi.org/10.2196/38589, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9488547/ (accessed 28 May 2024); Allyson M Ganster, ‘Black Women and Digital Resistance: The Impact of Social Media on Racial Justice Activism in Brazil and the United States’ Diss. 2019, https://repositories.lib.utexas.edu/items/45168a42-b43d-47ea-800f-24cd7d2d04cc (accessed 28 May 2024).

54 The Wall Street Journal investigation resulting from the work of former Facebook employee and whistleblower Frances Haugen.

55 Amnesty International, note 3, 42.

56 Fernando P. Santos, Yphtach Lelkes and Simon A. Levin, ‘Link Recommendation Algorithms and Dynamics of Polarization in Online Social Networks’ (2021) 118:50 Proceedings of the National Academy of Sciences e2102141118, https://doi.org/10.1073/pnas.2102141118.

57 Damon Henderson Taylor, ‘Civil Litigation Against Hate Groups Hitting the Wallets of the Nation’s Hate-Mongers’ (1999) 18 Buffalo Public Interest Law Journal 95.

58 Amnesty International, note 3, 42.

59 Amnesty International, note 3, 44; The Wall Street Journal, ‘Facebook Executives Shut Down Efforts to Make the Site Less Divisive’ (2020), wsj.com/articles/facebook-knows-it-encourages-division-top-executives-nixed-solutions-11590507499 (accessed 28 May 2024).

60 Amnesty International, note 3.

61 United Nations Human Rights Council, note 3.

62 United Nations Human Rights Council, note 3, para 74.

63 Amnesty International, note 3.

64 Amnesty International, note 3.

66 Richard Delgado, Understanding Words That Wound (Routledge, 2019).

67 Eva Nave, ‘Hate Speech, Historical Oppressions, and European Human Rights’ (2022) 29 Buffalo Human Rights Law Review 83, 91.

68 Richard Delgado, ‘Words That Wound: A Tort Action for Racial Insults, Epithets, and Name-Calling’ (1982) 17 Harvard Civil Rights-Civil Liberties Law Review 133.

69 Ibid, 67.

70 Joe R. Feagin and Debra Van Ausdale, The First R: How Children Learn Race and Racism (Rowman & Littlefield Publishers, 2001).

71 Delgado, note 67.

72 Gene Combs, ‘White Privilege: What’s a Family Therapist to Do?’ (2019) 45:1 Journal of Marital and Family Therapy 61–75, https://doi.org/10.1111/jmft.12330.

73 Delgado, note 68; Research indicates that a potential cause for the higher number of deaths of African Americans associated with hypertension may be linked to continued exposure to hate speech.

74 Gelber, note 29.

75 Wojciech Piątek, ‘The Right to an Effective Remedy in European Law: Significance, Content and Interaction’ (2019) 6:3–4 China-EU Law Journal 163–74, https://doi.org/10.1007/s12689-019-00086-3.

76 Kathleen Gutman, ‘The Essence of the Fundamental Right to an Effective Remedy and to a Fair Trial in the Case-Law of the Court of Justice of the European Union: The Best Is Yet to Come?’ (2019) 20:6 German Law Journal 884–903, https://doi.org/10.1017/glj.2019.67.

77 Council of Europe, Effective Remedies Explanatory Memorandum, https://www.coe.int/en/web/freedom-expression/effective-remedies-explanatory-memo (accessed 28 May 2024).

78 Council of Europe, Guide on Article 13 of the ECHR Right to an Effective Remedy, https://www.echr.coe.int/documents/d/echr/guide_art_13_eng (accessed 28 May 2024), paras 3, 24 and 26.

79 Council of Europe, Effective Remedies, https://www.coe.int/en/web/freedom-expression/effective-remedies#:~:text=You%20have%20the%20right%20to,pursue%20legal%20action%20straight%20away (accessed 28 May 2024).

80 Council of Europe, note 78, para 10.

81 Council of Europe, note 78, para 11.

82 Council of Europe, note 78, para 20.

83 Pine Valley Developments Ltd and Others v Ireland, Commission decision, 1989.

84 Budayeva and Others v Russia, 2008, para 190.

85 Council of Europe, note 78.

86 Colozza and Rubinat v Italy, Commission decision, 1982, 146–7.

87 CFREU, note 17, art 47.

88 European Union, Directive 2012/29/EU of the European Parliament and of the Council of 25 October 2012 establishing minimum standards on the rights, support and protection of victims of crime and replacing Council Framework Decision 2001/220/JHA (Victims Directive).

89 European Commission, DG Justice Guidance Document Related to the Transposition and Implementation of the Victims Directive, 34, https://commission.europa.eu/document/download/238cafb6-d5cd-4d1a-8624-a0bafb2cdfa3_en?filename=13_12_19_3763804_guidance_victims_rights_directive_eu_en.pdf (accessed 29 May 2024).

90 Victims Directive, note 88, arts 15 and 16.

91 Victims Directive, note 88, art 1.

92 Council of Europe, note 79.

93 United Nations, Resolution adopted by the General Assembly on 16 December 2005, Basic Principles and Guidelines on the Right to a Remedy and Reparation for Victims of Gross Violations of International Human Rights Law and Serious Violations of International Humanitarian Law, A/RES/60/147, II(3).

94 United Nations, General Assembly Resolution 60/147, Basic Principles and Guidelines on the Right to a Remedy and Reparation for Victims of Gross Violation of International Human Rights Law and Serious Violations of International Humanitarian Law, para 5.

95 A/RES/60/147, note 93, V(8).

96 European Union, Council Decision (CFSP) 2020/1999 of December 2020 concerning restrictive measures against serious human rights violations and abuses.

97 A/RES/60/147, note 93, art 1(1)(d).

98 CFSP, note 96, arts 2 and 3(1).

99 United Nations Human Rights, Office of the High Commissioner, Implementing the UN ‘Protect, Respect and Remedy Framework’ (UNGPs Guide), 7.

100 Ibid, 5.

101 Ibid, paras 19, 21.

102 UNGPs, note 5, Principle 22.

103 OECD Due Diligence Guidance, note 19.

104 OECD Due Diligence Guidance, note 19, 70.

105 OECD Due Diligence Guidance, note 19, 72.

106 UNGPs Guide, note 99, 70.

107 It should be noted that the UNGPs do require all businesses to prevent harm, regardless of the high or low risk of their operations.

108 UNGPs, note 5, Principle 29.

109 UNGPs, note 5, Principle 31.

110 UNGPs, note 5, Principle 31.

111 UNGPs Guide, note 99, Q. 66.

112 UNGPs Guide, note 99, Q. 64.

113 UNGPs Guide, note 99, Q. 66.

114 Kate Klonick, ‘The Facebook Oversight Board: Creating an Independent Institution to Adjudicate Online Free Expression’ (2019) 129 The Yale Law Journal 2418; Rachel Griffin, ‘Rethinking Rights in Social Media Governance: Human Rights, Ideology and Inequality’ (2023) 2:1 European Law Open 30–56.

115 Nave and Lane, note 38.

116 United Nations, note 94, VII, art 11(b).

117 UNGPs, note 5, Principle 20.

118 UNGPs Guide, note 99, Q. 64; United Nations, note 94, IX; Victor Stoica, Remedies Before the International Court of Justice (Cambridge University Press, 2021).

119 A/RES/60/147, note 93.

120 A/RES/60/147, note 93, para 19.

121 Stoica, note 118, 146; A/RES/60/147, note 93, para 22.

122 A/RES/60/147, note 93, para 21.

123 A/RES/60/147, note 93, para 20.

124 A/RES/60/147, note 93, para 23.

125 CSDDD, note 7, e.g., Recitals 5, 7, 14, 20, 24, 32, art 3.

126 CSDDD, note 7, Recital 58.

127 CSDDD, note 7, Recitals 16, 19, 20, 31, 38, 40, 58, 59, 86.

128 CSDDD, note 7, Recital 16.

129 See brief contextualization introduced in note 7.

130 AI Act, note 8, Recital 1.

131 AI Act, note 8, art 5. Notably, this standard seems to be in tension with the DSA’s prohibition of general monitoring obligations in art 7, because it requires platforms to monitor the impact of their algorithms and ensure that these are not enhancing the probability of harm.

132 AI Act, note 8, Recital 34.

133 UNGPs, note 5, Principle 15(b).

134 DSA, note 9, Chapter III.

135 DSA, note 9, Chapter II.

136 DSA, note 9, Recital 36.

137 DSA, note 9, art 2(h).

138 DSA, note 9, art 33.

139 European Commission, ‘Questions and Answers on the Digital Services Act’ (2024), https://ec.europa.eu/commission/presscorner/detail/en/QANDA_20_2348 (accessed 29 May 2024).

140 DSA, note 9, arts 34 and 35.

141 DSA, note 9, art 34(1)(a) and Recital 16.

142 The due diligence obligations in Chapter III of the DSA can however be interpreted as creating a general duty of care which, if infringed, would lead to liability. CCV Machado and TH Aguiar, ‘Emerging Regulations on Content Moderation and Misinformation Policies of Online Media Platforms: Accommodating the Duty of Care into Intermediary Liability Models’ (2023) 8:2 Business and Human Rights Journal 244–51, https://doi.org/10.1017/bhj.2023.25.

143 DSA, note 9, art 36(1) and (2).

144 DSA, note 9, Recital 59.

145 DSA, note 9, Recitals 114 and 145, and art 14.

146 DSA, note 9, art 21.

147 Digital Services Act Observatory, ‘The Out-of-Court Settlement Mechanism Under the DSA: Questions and Doubts’ (2023), https://dsa-observatory.eu/2023/10/26/the-out-of-court-settlement-mechanism-under-the-dsa-questions-and-doubts/ (accessed 29 May 2024).

148 Section V.B. ‘Proposed standards for a comprehensive framework’ of this article.

149 AVMSD, note 10.

150 AVMSD, note 10, art 1(1)(b)(aa).

151 AVMSD, note 10, arts 28a and 28b.

152 AVMSD, note 10, art 28b(i).

154 AVMSD 2010/13/EU, note 10, Recital 103 clarifies that the right to reply can apply online; see also art 28.

155 European Commission (2016) The CoC on countering illegal hate speech online; European Commission, ‘DSA: Commission Designates First Set of VLOPs and Search Engines’ (2023), https://ec.europa.eu/commission/presscorner/detail/en/IP_23_2413 (accessed 29 May 2024).

156 CM/Rec(2022)16, note 15, para 20.

157 CM/Rec(2022)16, note 15, paras 75 and 90.

158 CM/Rec(2022)16, note 15, para 75.

159 CM/Rec(2014)6, note 20, para 103.

160 CSDDD, note 7, Recital 58.

161 Section IV.A. of this article.

162 Section IV.A. of this article.

163 CSDDD, note 7, Recital 59.

164 CSDDD, note 7, Recital 58.

165 For a brief contextualization of the CSDDD, see note 7.

166 DSA, note 9, art 49.

167 See discussion above in Section V.A. ‘Remedial responsibilities in the DSA’ of this article.

168 CSDDD, note 7, Recitals 58 and 59.

169 CSDDD, note 7, Recitals 58 and 59.

170 CSDDD, note 7, Recital 59.

172 DSA, note 9, art 70.

173 See above, in Section V.B. subsections ‘Complementarity Between the DSA and the CSDDD’ and ‘Regulatory and Administrative Oversight’, for a more detailed analysis of the key Recitals of the Preamble of the CSDDD focusing on corporate remedial responsibilities.

174 Anu Bradford, The Brussels Effect: How the European Union Rules the World (Oxford University Press, 2020).

175 Stoica, note 118, 148.

177 Katharine Gelber, Speaking Back. The Free Speech Versus Hate Speech Debate (John Benjamins Publishing Company, 2002) 117, 118.

178 Gelber, note 177.

179 This article builds on decolonial and feminist sociology and psychology theories as well as empirical studies showing that the direct engagement and leadership of people targeted by hate speech in deciding the response to the harm caused empowers and contributes to a faster overcoming of the oppression perpetuated by hate speech.

180 Gelber, note 177, 119.

181 Gelber, note 177, 124.

182 E.g., Oliver L Haimson, Daniel Delmonaco, Peipei Nie and Andrea Wegner, ‘Disproportionate Removals and Differing Content Moderation Experiences for Conservative, Transgender, and Black Social Media Users: Marginalization and Moderation Gray Areas’ (2021) 5:CSCW2 Proceedings of the ACM on Human-Computer Interaction 1–35; Daniel Delmonaco, Samuel Mayworm, Hibby Thach, Josh Guberman, Aurelia Augusta and Oliver L Haimson, ‘“What Are You Doing, TikTok?”: How Marginalized Social Media Users Perceive, Theorize, and “Prove” Shadowbanning’ (2024) 8:CSCW1 Proceedings of the ACM on Human-Computer Interaction 1–39, https://doi.org/10.1145/3637431.

183 European Union, note 96.

184 Introduction and Section II.

185 UNGPs, note 5, Principle 15.

186 In this context, it is important to adequately identify and mitigate potential complications for national criminal law procedures.

187 Machado and Aguiar, note 142.

188 Amnesty International, note 3, 8, highlights that ‘content moderation alone is inherently inadequate as a solution to algorithmically amplified harms’.

190 Gelber, note 29, 401.

191 In misalignment with human rights, Facebook’s terms of service allow hate speech towards criminals. This was one of the criteria permitting hate speech towards two members of the Rohingya community who were initially wrongly accused of rape. E.g., Eva Nave and Lottie Lane, note 38.

192 This article acknowledges the growing records of infiltration of extremists in law enforcement bodies. Daniel Koehler, ‘From Superiority to Supremacy: Exploring the Vulnerability of Military and Police Special Forces to Extreme Right Radicalization’ (2025) 48:2 Studies in Conflict & Terrorism 115–38, https://doi.org/10.1080/1057610X.2022.2090047.


Figure 1. Corporate remedial responsibilities for adverse human rights impacts.