I. Introduction
Business models adopted by online platformsFootnote 1 have contributed to the proliferation of online hate speech. Frances Haugen, a whistleblower from Meta Platforms, Inc. (formerly Facebook, Inc.), revealed that the platform prioritised growth over countering online hate speech in countries such as Afghanistan, Ethiopia and India.Footnote 2 In a more extreme example, Amnesty International and the United Nations alerted the public to Meta’s significant contribution to the genocide of the Rohingya in Myanmar after its algorithms failed to take down, and instead amplified, hate speech towards this Muslim community.Footnote 3 Other online platforms have also been under increased scrutiny for adopting content moderation and recommendation algorithms amplifying hate speech.Footnote 4
The framework addressing companies’ responsibilities to comply with human rights is developed in the United Nations Guiding Principles on Business and Human Rights (UNGPs).Footnote 5 The UNGPs, though not legally binding, were endorsed by the United Nations Human Rights Council in 2011 and are the key international standard-setting instrument explaining the three essential corporate human rights responsibilities. Based on the UNGPs, companies must adopt: (i) a policy commitment to respect human rights; (ii) a human rights due diligence process to identify, prevent and mitigate adverse impacts on human rights; and (iii) remediation mechanisms for any adverse impacts on human rights that the company caused or contributed to.Footnote 6
At the European Union (EU) level, online platforms have the corporate human rights responsibility to counter illegal content, including hate speech. The Corporate Sustainability Due Diligence Directive (CSDDD),Footnote 7 the Artificial Intelligence Act (AI Act),Footnote 8 the Digital Services Act (DSA)Footnote 9 and the Audiovisual Media Services Directive (AVMSD)Footnote 10 all contribute to establishing the human rights due diligence of online platforms to counter online hate speech. Nevertheless, this European legal framework fails to clarify the third responsibility stemming from the UNGPs, i.e., the remedial responsibilities of online platforms to redress people harmedFootnote 11 by online hate speech caused or contributed to by the platforms.
This article’s central research question is two-fold: In compliance with the right to an effective remedy, how can European legislators better align the framework on corporate remedial responsibilities of online platforms which caused or contributed to criminal hate speech with the general framework on corporate remedial responsibilities? Additionally, are there heightened remedial responsibilities for very large online platforms (VLOPs)Footnote 12 or for cases of criminal hate speech amounting to gross violations of human rights?
This article covers legal and policy instruments from both the European Union and the Council of Europe given the alignment between the two human rights systems.Footnote 13 Occasional references to international human rights instruments contextualise their influence on European instruments. Doctrinal research identifies legal loopholes in legislation and suggests normative approaches that are compliant with human rights. This article focuses on hate speech on online platforms for two reasons. First, online platforms, and especially VLOPs, constitute the most problematic digital environments for the rapid dissemination of hate speech. Second, online platforms are increasingly regulated at the European level and thus allow for a more consolidated normative analysis.
To answer the research question, Section II analyses the European standards on criminal hate speech. Given that there is no definition of criminal hate speech at the EU level,Footnote 14 the central instrument investigated is CM/Rec(2022)16 adopted by the Council of Europe Committee of Ministers.Footnote 15 This section also initiates the academic debate about the elements of criminal hate speech that may classify as gross violations of human rights. In these cases, the international standards on the right to remedy for gross violations of human rights should apply.
Facebook’s contribution to the genocide of the Rohingya in Myanmar is used as an example mainly in Section II, but also occasionally referred to in other sections. This case is relevant because it is one of the most thoroughly documented cases, showing both the societal significance of the corporate human rights responsibilities of VLOPs contributing to hate speech and the impact of the lack of compliance with corporate remedial responsibilities.
Section III investigates the application of the right to effective remedy prescribed in Art. 13 of the European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR),Footnote 16 Art. 47 of the Charter of Fundamental Rights of the European Union (CFREU),Footnote 17 and the Victims’ Rights DirectiveFootnote 18 to online hate speech. This section also examines the international standards on the right to remedy for cases of gross violations of human rights.
Section IV clarifies the general corporate remedial responsibility by explaining the framework stemming from the UNGPs and from the Organisation for Economic Co-operation and Development (OECD) Guidelines for Multinational Enterprises on Responsible Business Conduct and OECD Due Diligence Guidance (OECD Guidelines).Footnote 19 This framework covers: modes of responsibility, remedial processes and remedial outcomes. This framework applies to online platforms that caused or contributed to criminal hate speech.
Section V highlights the need for, and proposes, legal standards for a corporate remedial responsibilities framework at the EU level, including for online platforms that caused or contributed to criminal hate speech. The legal instruments reviewed are the CSDDD, the AI Act, the DSA and the AVMSD; the policy instruments researched are the Code of Conduct on countering illegal hate speech online and Recommendations CM/Rec(2022)16 and CM/Rec(2014)6.Footnote 20 The proposed standards focus on clarifying modes of responsibility, remedial processes and remedial outcomes. In this context, the three remedial outcomes analysed are guarantees of non-repetition, restitution and compensation.
II. Criminal Hate Speech on Online Platforms
A. European Standards on Criminal Hate Speech
Although there is no binding definition of hate speech in international or European human rights law, CM/Rec(2022)16Footnote 21 distils the key elements for the regulation of hate speech both online and offline. CM/Rec(2022)16 clarifies that hate speech is always illegal as it is either (1) criminalised in its most severe forms or (2) prohibited under civil or administrative law.Footnote 22
This article explores the legal framework applicable to category (1), i.e., criminal hate speech.Footnote 23 The decision to focus on criminal hate speech is based on a growing recognition of its key elements at the European level, specifically following the adoption of CM/Rec(2022)16.Footnote 24 CM/Rec(2022)16 clarifies in Paragraph 11 that, based on existing international and regional human rights, the following hateful expressions are criminally actionable:
a. public incitement to commit genocide, crimes against humanity or war crimes;
b. public incitement to hatred, violence or discrimination;
c. racist,Footnote 25 xenophobic, sexist and LGBTI-phobic threats;
d. racist, xenophobic, sexist and LGBTI-phobic public insults under conditions such as those set out specifically for online insults in the Additional Protocol to the Convention on Cybercrime concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems (ETS No. 189);
e. public denial, trivialisation and condoning of genocide, crimes against humanity or war crimes; and
f. intentional dissemination of material that contains such expressions of hate speech (listed in a-e above), including ideas based on racial superiority or hatred.Footnote 26
CM/Rec(2022)16 takes an open-ended approach to the list of impermissible groundsFootnote 27 for hate speech, as both Paragraph 11 and Paragraph 2 introduce a list of several characteristics by using ‘such as’.Footnote 28 Nevertheless, this article argues that CM/Rec(2022)16 could have improved legal coherence had it expressly referred to two elements stemming from the critical legal conceptualisation of hate speech: first, the historical oppression perpetuated by hate speechFootnote 29 and, second, the intersectionality of systems of oppression, with a view to adequately reflecting the harm caused by hate speech.Footnote 30 Hence, the subsequent analysis in this article adopts an explicitly open-ended conceptualisation of impermissible grounds for hate speech, grounded in the acknowledgement that hate speech is used to perpetuate systems of oppression and that the intersectionality of historical systems of oppression is an aggravating factor harming people targeted by hate speech.
At the EU level, the European Commission published in 2021 a Communication encouraging the Council of the European Union (Council) to add hate speech and hate crime to the list of EU crimes under Art. 83(1) of the Treaty on the Functioning of the European Union (TFEU).Footnote 31 However, as long as the EU has not adopted such legislation on criminal hate speech, this article follows the conceptualisation of criminal hate speech in Paragraph 11 of CM/Rec(2022)16.
Finally, certain elements of criminal hate speech may classify as gross violations of human rights. The normative framework does not clarify which, if any, elements of criminal hate speech amount to gross violations of human rights. On the one hand, Paragraph 11 of CM/Rec(2022)16 lists, alongside incitement to genocide, incitement to crimes against humanity and incitement to war crimes as criminal hate speech. On the other hand, international criminal law does not clarify whether these three types of incitement would classify as the most serious crimes in international law amounting to gross violations of human rights.Footnote 32
This article does not attempt to resolve this normative debate. Rather, this article seeks to acknowledge the possibility that elements of criminal hate speech may amount to gross violations of human rights and thus result in the application of the right to remedy and reparation for victims of gross human rights violations. This analysis is key to adequately frame the corporate remedial responsibilities of online platforms responsible for such criminal hate speech potentially amounting to gross violations of human rights.
B. The Role of Online Platforms
This section introduces, first, the services provided by online platforms and, second, how they facilitate the spread of hate speech on their platforms. After that, this section expands on Meta’s contribution to the genocide of the Rohingya in Myanmar as an example illustrating the problematic role of online platforms in contributing to the rise of online and offline hate speech.
Online platforms facilitate the dissemination of user-generated content.Footnote 33 Given the large user base and high amounts of content, online platforms typically employ two types of algorithms to manage content:Footnote 34 (1) content moderation algorithms, and (2) content ranking and recommendation algorithms.Footnote 35
Content moderation algorithms are used to enforce policies of prohibited content. Users are informed about the content that is prohibited on the platform in the terms of service.Footnote 36 Examples of outcomes of content moderation include disabling, labelling, suspension and removal of content.Footnote 37 The terms of service (ToS) often do not clarify the standards used to decide on content moderation outcomes. The current regulatory framework applicable to the ToS provides insufficient guidance regarding the content that should be prohibitedFootnote 38 or the way that the ToS should address the outcomes to be attained from content moderation.Footnote 39
Ranking and recommendation algorithms assist with deciding which content to display first on users’ newsfeeds or to auto-play after a given video ends. The suggestion of subsequent content that is ranked high is called chaining.Footnote 40 The reverse operation, when a piece of content is deliberately not suggested, is called demotion or down-ranking. These algorithms typically aim to link users to other users, to groups or to specific posts that match their interests and thus maximise engagement on the platform.Footnote 41 Online platforms have disclosed little to no information on the internal processes guiding these ranking and recommendation algorithms or their possible outcomes.Footnote 42
The Committee of Ministers of the Council of Europe and the European Commission have warned that the algorithms employed by online platforms can facilitate the dissemination of online hate speech.Footnote 43 To analyse the extent to which online platforms enhance the severity of hate speech, it is relevant to review the context in which the expression was manifested. When assessing the severity of hate speech, the ECtHR evaluates ‘contextual variables’Footnote 44 such as: the political and social context at the time of the speech;Footnote 45 the speaker’s status or role in society;Footnote 46 the reach and form of dissemination of the speech;Footnote 47 the likelihood and imminence that the speech results, directly or indirectly, in harmful consequences;Footnote 48 the nature and size of the audience;Footnote 49 and the perspective of the people targeted by the speech (including their historical oppression).Footnote 50
This article explores how online platforms affect the severity of hate speech by reviewing three contextual variables: (1) reach, as well as the size of the audience; (2) the polarised and susceptible nature of the audience; and (3) the likelihood of harm. These three variables were selected based on the algorithms currently discussed in the context of online platforms.
First, online platforms typically enable faster dissemination of content to larger audiences than traditional offline media, thereby amplifying the reach of speech. Users can instantaneously publish content to a wider network than in offline settings. Nevertheless, studies show that reach is increased only for certain types of content, with hate speech, for example, spreading faster than innocuous content.Footnote 51 Depending on the algorithms deployed, content can be amplified, deamplified, blocked or removed, among other outcomes. Typically, algorithms are not trained to process either the context or the languages of already marginalised communities, resulting in the illegal removal of content produced by these communities.Footnote 52 Additionally, it is widely reported that platforms have prioritised user engagement, often at the expense of human rights such as the prohibition of discrimination.Footnote 53 For example, the Facebook PapersFootnote 54 revealed that ranking and recommendation algorithms prioritised the virality of content, often disregarding whether that content was harmful or incited violence.Footnote 55 Consequently, online platforms have increased the reach of hate speech.
Second, online platforms can polarise large audiences of users due to their content recommendation algorithms.Footnote 56 Designed to connect like-minded people, online platforms have facilitated the organisation of ‘hate mongers’Footnote 57 and enabled offline violence.Footnote 58 In fact, the Wall Street Journal found that, in 2016, 64 per cent of new members who joined extremist groups on Facebook in Germany did so as a result of the platform’s algorithmic recommendations.Footnote 59
Third, by amplifying online hate speech and by polarising users, the current algorithms increase the likelihood of harm. Amnesty International has explained how Meta’s content moderation algorithms failed to take down content advocating hatred, discrimination and genocide against the Rohingya Muslim community in Myanmar.Footnote 60 This hateful content was then amplified by Meta’s ranking algorithm, designed to maximise users’ engagement by showing such content at the top of newsfeeds. Moreover, hateful videos were also amplified by Facebook when its recommendation algorithm automatically played them in its ‘Up Next’ feature. The United Nations Independent International Fact-Finding Mission on Myanmar concluded that ‘[t]he role of social media [was] significant’ in the atrocities.Footnote 61
Members of the Rohingya community are seeking remediation from Meta in three judicial actions, including a request for US$1 million for an educational project in Bangladesh refugee camps. Despite admitting that it had not done enough to prevent the platform from being used to incite offline violence,Footnote 62 Meta refuses to remediate through the educational project, communicating that it has instead improved its content moderation algorithms.Footnote 63 Meta does not detail in which way it has improved its algorithms, and Amnesty International emphasises that compliance with remediation responsibilities must address the victims’ harms.Footnote 64
III. Right to Remedy for Criminal Hate Speech Online
Having clarified the conceptualisation of criminal hate speech employed in this article,Footnote 65 this section explains the operationalisation of the human right to an effective remedy of people targeted by criminal hate speech. This section identifies, first, the harm caused by criminal hate speech, including on online platforms, and then sets out the European standards on the State’s duty to ensure access to an effective remedy for people targeted by criminal hate speech.
A. Harm Caused by Hate Speech
Critical race theory was the first school of legal scholarship to advance the conceptualisation of harms caused by hate speech.Footnote 66 According to this scholarship, hate speech can cause psychological, physical and economic or material harms.Footnote 67 Critical race scholars also stressed the cumulative effect of continued exposure to hate speech.Footnote 68
The psychological harms experienced by people targeted by hate speech range from fear, anger, low self-esteem, reduced capacity for attention and withdrawal from society to depression, nightmares, post-traumatic stress and psychosis.Footnote 69 Studies show that these harms have an aggravated impact on younger people and children.Footnote 70 These layers of harm, passed down through generations, make it increasingly difficult to deal with the psychological harms caused by hate speech.Footnote 71 Furthermore, access to psychological support is limited not only because it is expensive, but also because practitioners often come from privileged backgrounds and thus lack the lived experience of people historically targeted by hate speech.Footnote 72
The physical harms faced by people targeted by hate speech can be divided into short-term and long-term harms. Short-term physical harms include accelerated breathing and heart rate, dizziness, headaches and raised blood pressure.Footnote 73 In the most serious cases, hate speech inciting violence can lead to hate crimes, war crimes, genocide or crimes against humanity.
Hate speech may also cause economic or material harms to the people it targets. Hate speech may jeopardise access to, e.g., education, health or employment if, through continued exposure to hate speech, people are forced to leave their studies, jobs, neighbourhoods, cities or countries, or to avoid public spaces altogether. In some of the most extreme cases, people targeted by hate speech may become refugees seeking asylum, often facing dire situations ranging from insecurity to lack of access to water and other basic human rights.
In the specific context of harms experienced by people targeted by criminal hate speech on online platforms, all of the harms mentioned above apply, i.e., psychological, physical and economic harms. Additional impacts include, for example, disengagement from online platforms to avoid exposure to hate speech, which may limit the exercise of access to information and freedom of assembly or association.Footnote 74
B. The State’s Duty to Ensure Access to Remedy
European Standards on Remedies
People harmed by hate speech (whether online or offline), and especially by criminal hate speech, have the right to an effective remedy. The right to an effective remedy is a fundamental human right under international and European human rights law.Footnote 75 This right derives from a general legal principle that every breach of international law results in an obligation to provide remedy.Footnote 76 This article focuses primarily on the European standards.
At the Council of Europe level, Art. 13 of the ECHR establishes the right to an effective remedy before a national authority. This provision lays down the State’s positive obligation to investigate allegations of human rights violations, including those committed by private companies, in a ‘diligent, thorough, and effective’ manner.Footnote 77 The national authority may be a judicial or non-judicial body, provided that the latter fulfils the independence and impartiality prerequisites.Footnote 78 It is essential that remedies are ‘available, known, accessible, affordable, and capable of providing adequate redress’.Footnote 79 Importantly, the national authorities have the primary responsibility to investigate violations of human rights, and a person may only appeal to the ECtHR after exhausting all available domestic procedures.
The right to remedy exists when there is an ‘arguable’ grievance under the ECHR.Footnote 80 This means that Art. 13 of the ECHR is complementary to other rightsFootnote 81 and may be invoked in two circumstances. First, if there is an allegation of a violation of another right in the ECHR. Second, if the person cannot effectively exercise the right to remedy at the national level.Footnote 82 Finally, according to Art. 13 of the ECHR, the remedy must directly remediate the violation.Footnote 83 Nonetheless, in light of the margin of appreciation afforded to Contracting States,Footnote 84 there is no specific prescription of the adequate form of remedy.Footnote 85 Instead, the effectiveness of the remedy should be evaluated on a case-by-case basis.Footnote 86
At the European Union level, Art. 47 of the CFREU prescribes that ‘Everyone whose rights and freedoms guaranteed by the law of the Union are violated has the right to an effective remedy before a tribunal in compliance with the conditions laid down in this Article (…)’.Footnote 87 While the provisions in the CFREU with corresponding rights in the ECHR must be interpreted with similar meaning and scope to the provisions in the ECHR, there is a key difference between Art. 13 of the ECHR and Art. 47 of the CFREU. Art. 47 of the CFREU stipulates that the competent national authority must be a judicial institution. This may be interpreted as strengthening the right, since judicial bodies will in principle be independent and impartial, while non-judicial bodies may not be. Nevertheless, this requirement may also place an added burden on the judicial system and may result in more constraints on the exercise of the right to an effective remedy.
Additionally, crime survivors in the EU are covered by the Victims’ Rights Directive, which establishes minimum requirements for the rights, assistance and protection of crime survivors.Footnote 88 Key rights include the right to legal aid and to a fair remedy,Footnote 89 the right to the return of property and the right to compensation.Footnote 90 Since the EU does not include hate speech in the EU list of crimes, the Victims’ Rights Directive applies only to elements of hate speech criminalised in the EU.Footnote 91
Applying the European framework on the right to effective remedy established by the CoE and by the EU to cases of online hate speech, two observations are relevant. First, it is clear that national authorities have the duty to protect, investigate and ensure access to remedies. This framework applies to acts committed in digital settings by either users or internet intermediaries, e.g., criminal hate speech.Footnote 92 Importantly, remedial avenues must be available, known, accessible and affordable.
Second, there are different legal thresholds at both the CoE and the EU level regarding the competent authority with which to lodge a remedy claim. Given the extensive work on the right to remedy developed by the Council of Europe for cases of criminal acts online and also recognizing that effective processes may at times be found outside judicial settings, this article follows the approach that remedies can be sought with both judicial and non-judicial institutions, as long as these are independent and impartial.
Remedies for Gross Human Rights Violations
Some elements of criminal hate speech may amount to gross human rights violations. In these cases, the international and the European frameworks on the right to remedy and reparation for victims of gross violations of human rights law are complementary and should apply.
At the international level, States are obliged to: (a) prevent violations; (b) effectively, promptly, thoroughly, and impartially investigate violations and, when necessary, take action against those responsible; (c) provide alleged victims with equal and effective access to justice; and, (d) provide effective remedies.Footnote 93 This framework calls for States to adopt provisions for universal jurisdiction.Footnote 94 Importantly, the conceptualisation of victims includes persons individually or collectively harmed physically, psychologically, emotionally, economically or who suffered substantial impairment of their fundamental rights.Footnote 95
At the European level, the Council Decision enabling targeted restrictive measures to address serious human rights abuses worldwide applies.Footnote 96 The understanding of human rights abuses in this framework covers genocide and crimes against humanity, and extends to other human rights abuses if widespread and systematic.Footnote 97 The sanctions apply to both natural and legal persons, such as companies.Footnote 98 For these natural or legal persons, sanctions include inter alia asset freezes and a prohibition on making funds or economic resources available. Notably, this Council Decision establishes a global human rights sanctions regime providing the EU with a framework to target inter alia companies responsible for serious human rights violations, regardless of where these took place.
Applying these regimes to survivors of criminal hate speech amounting to gross human rights violations, it becomes clear that States are obliged to ensure access to an effective remedy, including when harm was caused by businesses. Moreover, the conceptualisation of survivor should include people directly and indirectly affected by the crime. Finally, businesses may be considered the perpetrators and thus may have to comply with restrictive sanctions, e.g., asset freeze measures. For example, the EU sanctions regime enables the EU to impose sanctions on Meta for its significant contribution to the genocide of the Rohingya in Myanmar.
The UN and EU standards on the right to an effective remedy for survivors of gross human rights violations offer clearer and more inclusive definitions of survivors, perpetrators and remedial processes than the general European standards on the right to an effective remedy. First, while the general standards consider as survivors only those directly impacted by the crime, the specific standards clarify that, for cases of gross violations of human rights, survivors are those affected both directly and indirectly. Second, the specific standards for victims of gross violations of human rights expressly foresee that non-state actors can be responsible. Third, the specific standards go beyond the general standards by explicitly calling on States to implement universal jurisdiction and restrictive measures to address gross violations of human rights, including when committed by companies outside their territory. Applying these standards to criminal hate speech, it follows that EU legislators have a heightened duty to align corporate remedial responsibilities with the right to an effective remedy for criminal hate speech cases amounting to gross human rights violations.
IV. General Framework: Corporate Remedial Responsibilities for Online Platforms
This section investigates the general remedial responsibilities when the harm is attributable to businesses, including online platforms, and clarifies the modes of corporate responsibility, the remedial processes and the remedial outcomes.
A. Modes of Corporate Responsibility
The UNGPs articulate corporate remedial responsibilities for businesses which caused or contributed to adverse impacts on human rights.Footnote 99 An adverse impact on human rights occurs when the exercise of a human right is excluded or reduced, and such impacts can be either actual or potential.Footnote 100 Actual impact refers to an adverse impact that has already occurred or is occurring, and potential impact refers to an impact that has not yet occurred. A potential adverse impact can be either avoidable or unavoidable, the latter ultimately materialising as an actual adverse impact.
The general framework on corporate human rights remedial responsibility prescribes two modes of remedial responsibilities: the responsibility to remediate and the responsibility to use leverage.Footnote 101 The corporate responsibility to remediate is encapsulated in Guiding Principle 22 of the UNGPs as follows:
‘Where business enterprises identify that they have caused or contributed to adverse impacts, they should provide for or cooperate in their remediation through legitimate processes’.Footnote 102
The OECD Guidance clarifies that Principle 22 establishes the corporate responsibility to remediate actual adverse impacts that the company caused or contributed to, as well as potential but unavoidable adverse human rights impacts that the company will cause or contribute to. A business caused an actual adverse human rights impact when its operations alone resulted in the adverse impact.Footnote 103
Conversely, a business is said to have contributed to an actual adverse impact on human rights when (i) its operations, together with the operations of other businesses, caused the adverse impact; or (ii) its operations alone caused, facilitated or incentivised another business to cause an adverse impact on human rights. Notably, the contribution must be substantial.Footnote 104
The second mode of corporate remedial responsibility encompasses the use of leverage to prevent or mitigate actual adverse impacts that the company was directly linked to, and for potential adverse impacts that are avoidable. A company is directly linked to an actual adverse human rights impact if the connection is not sufficiently substantial to amount to contribution. In these cases, the company is not required to remediate, but rather to use its leverage to influence the other actor causing the adverse effects to prevent or reduce said negative effects.Footnote 105 Figure 1 summarises the general framework on corporate remedial responsibilities.

Figure 1. Corporate remedial responsibilities for adverse human rights impacts.
This general framework articulates remedial responsibilities for all businesses, including online platforms. The following sections investigate the remedial processes and outcomes of the corporate responsibility to remediate actual or unavoidable adverse impacts on human rights, including criminal hate speech caused or contributed to by online platforms.
B. Remedial Processes
Remedial processes are the processes through which a remedial responsibility is assessed, and may either be ad hoc or pre-established for specific adverse human rights impacts.Footnote 106 For businesses whose operations pose a high risk to human rights, a pre-established investigative and remedial mechanism is advisable.Footnote 107 In these cases, businesses should adopt an operational-level grievance mechanismFootnote 108 to enable individuals directly affected by the business’ operations to formally lodge concerns and complaints and to seek remedies. Non-judicial remedial processes, such as the operational-level grievance mechanism, should be designed and operated to be effective.Footnote 109 This means that these non-judicial grievance mechanisms should be legitimate, accessible, predictable, equitable, transparent, rights-compatible and a source for continuous learning.Footnote 110
Businesses may provide for remediation directly or in cooperation with another legitimate process.Footnote 111 Consequently, there is no need for a prior judicial decision,Footnote 112 and businesses that acknowledge having caused or contributed to actual or unavoidable adverse human rights impacts have the responsibility to remediate. Nevertheless, when businesses do not provide remediation proactively, State-based remedial processes should be initiated and businesses must collaborate.Footnote 113
Applying these standards to online platforms, the functionality allowing users to report content arguably qualifies as an operational-level grievance mechanism. Nevertheless, this functionality alone does not fulfil the legitimacy criteria of remedial processes if not overseen by impartial bodies.Footnote 114 Additionally, the reporting process normally assesses whether content complies with terms of service and not with human rights standards.Footnote 115 For cases where the online platforms caused or contributed to criminal hate speech, if platforms do not comply with remedial processes, these should be initiated by States.Footnote 116 The standards on the individual right to remedy apply and, equally, the special regime on remedies for gross human rights violations applies to cases of criminal hate speech amounting to gross human rights violations.
C. Remedial Outcomes
To determine the most appropriate remedial outcomes, businesses should seek to clarify what remedy the victims find most effective.Footnote 117 The general framework for remedial outcomes includes: restitution, satisfaction, rehabilitation, compensation and guarantees of non-repetition of harm.Footnote 118 These remedial outcomes were endorsed by the United Nations framework for cases of gross violations of human rights.Footnote 119
These remedial outcomes apply to any business, such as online platforms, which caused or contributed to criminal hate speech, including hate speech amounting to gross human rights violations. Explaining in more detail what these outcomes entail, restitution aims to restore the original exercise of human rights before the violation and involves: restoration of liberty, identity, family life and citizenship; return to the place of residence; restoration of employment; and return of property.Footnote 120
Satisfaction aims to recognise the illegal acts that resulted in human rights violations and can be both pecuniary and non-pecuniary.Footnote 121 Some examples of satisfaction encompass: ceasing violations; verifying and publicly disclosing the facts (if not contributing to double victimisation); searching for the disappeared or killed (in alignment with the victims’ wishes); an official declaration or judicial decision restoring the victim’s dignity, reputation and rights; judicial and administrative sanctions against those liable; tributes to the victims; and inclusion of violations in training and educational material.
Rehabilitation aims to ensure access to legal, medical and social services, including psychological support.Footnote 122 Compensation, similarly to satisfaction, can also be pecuniary and non-pecuniary and aims to repair any economically quantifiable harm. Such harm encompasses: physical or mental harm; lost opportunities, including employment, education and social benefits; material damages and loss of earnings, including potential earnings; moral damages; and costs deriving from legal, medical and social services, including psychological services.Footnote 123
Finally, guarantees of non-repetition of harm should include: protecting human rights defenders; providing, on a priority and continued basis, human rights education; ensuring the observance of internal codes of conduct; promoting mechanisms for preventing and monitoring social conflicts; and reviewing and reforming terms of service contributing to or allowing gross human rights violations.Footnote 124
V. European Framework: Online Platforms Remedial Responsibilities for Criminal Hate Speech
This section examines the challenges with the current European framework on remedial responsibilities of online platforms which caused or contributed to criminal hate speech, including gross human rights violations. After that, this section proposes standards to clarify and strengthen this framework by exploring the modes of responsibility, remedial processes and three remedial outcomes.
A. Challenges with Current Framework
This section studies the general framework on corporate remedial responsibilities in the EU CSDDD and AI Act, the remedial responsibilities in the DSA, and the remedial responsibilities of online platforms in European sector-specific instruments on hate speech.
Corporate Remedial Responsibilities in the EU
The general legal framework on corporate remedial responsibilities in the EU stems from two instruments, i.e., the Corporate Sustainability Due Diligence Directive (CSDDD) and the Artificial Intelligence Act (AI Act). This framework applies to online platforms as these employ AI algorithms for content moderation.
The CSDDD seeks to ensure that businesses respect human rights within their operations and supply chains.Footnote 125 To achieve this goal, the CSDDD builds on the corporate human rights responsibilities framework established in the UNGPs and operationalised in the OECD Guidelines, restating the corporate responsibilities to inter alia provide remedial mechanisms for negative human rights and environmental impacts caused by their operations, their subsidiaries and their value chains.Footnote 126 The preamble of the CSDDD expands on businesses’ responsibilities to prioritise the prevention, mitigation, ceasing, minimising and remediation of actual or potential adverse human rights impacts.Footnote 127 Furthermore, the CSDDD recognises the need to ‘ensure that those affected by a failure to respect this duty have access to justice and legal remedies’.Footnote 128
Nevertheless, the CSDDD fails to reflect the UNGPs’ specific standards on remedial processes (i.e., the importance of creating operational-level grievance mechanisms and of adequate, legitimate and impartial remedial processes) and on remedial outcomes (i.e., restitution, satisfaction, compensation, rehabilitation and guarantees of non-repetition). The CSDDD allows EU Member States the discretion to decide on the means to reach the binding goals that it prescribes. As a result, in transposing this directive domestically, some Member States may decide to fully develop the corporate remedial responsibilities in alignment with the UNGPs. Be that as it may, it should be acknowledged that the CSDDD advances the framework on proactive measures that corporations need to take in order to prevent adverse human rights impacts. Nevertheless, noting that the official text of the CSDDD will likely be subject to cutbacks as part of the Omnibus Simplification Package, it is crucial to revisit this analysis following the final official changes implemented as a result of this process.Footnote 129
The AI Act prescribes legally binding means to ensure that AI systems respect EU fundamental rights, while fostering investment and innovation.Footnote 130 The AI Act reflects the UNGPs and CSDDD overall standard on corporate human rights remedial responsibilities in two ways. First, it explains which AI systems do not comply with fundamental rights and are, therefore, prohibited. Art. 5 of the AI Act prohibits AI systems that deploy subliminal techniques capable of distorting a person’s behaviour in a manner that causes or is likely to cause physical or psychological harm.Footnote 131
Applied to online platforms, this provision undoubtedly prohibits them from employing algorithms that amplify hate speech. Second, the AI Act prescribes a fundamental rights risk assessment framework to evaluate potential risks caused by AI systems.Footnote 132 This risk assessment aligns with the UNGPs’ corporate human rights due diligence and remedial processes, which require businesses to adopt processes to identify potential adverse human rights impacts.Footnote 133 Nevertheless, although the AI Act expands on the risk assessment more than the CSDDD does, it similarly does not prescribe a comprehensive corporate remedial framework encompassing standards on remedial processes and on the outcomes to be achieved.
Remedial Responsibilities in the Digital Services Act
The Digital Services Act (DSA) seeks to prevent illegal and harmful content online by regulating the human rights responsibilitiesFootnote 134 and liabilityFootnote 135 regimes of internet intermediary services operating within the EU. The conceptualisation of internet intermediaries includes online platforms,Footnote 136 i.e., hosting services which store and disseminate to the public information produced by their users.Footnote 137
The DSA prescribes different human rights responsibilities depending on the business’ role, size and impact.Footnote 138 Within the category of online platforms, the DSA attributes heightened human rights responsibilities to very large online platforms (VLOPs), i.e., those with 45 million or more EU users per month.Footnote 139 In this context, VLOPs should identify, assess and mitigate systemic risks, and negative effects for the exercise of fundamental rights.Footnote 140 Notably, hate speech is explicitly referred to as a systemic risk classified as illegal content in the EU.Footnote 141
Reviewing the DSA framework on corporate remedial responsibilities, it is possible to conclude that the DSA does not provide a comprehensive approach to modes of corporate responsibility, remedial processes or remedial outcomes.
Firstly, the DSA does not clearly reflect the general UNGPs standards on the modes of corporate remedial responsibilities. Although Chapter II of the DSA regulates the liability regimes of internet intermediaries, it does not clarify that online platforms causing or contributing to adverse human rights impacts bear remedial responsibilities in line with the corporate responsibility framework articulated in the UNGPs.Footnote 142 In another example, Art. 36 of the DSA prescribes that VLOPs must comply with specific crisis response measures in times of extraordinary serious threats to public security or public health in the EU, with the purpose of preventing, eliminating or limiting said serious threats.Footnote 143 While this wording could be interpreted to reflect Principle 22 of the UNGPs, this link is not expressly mentioned. Moreover, Art. 36 of the DSA seems to apply only to VLOPs and in times of crisis, disregarding the ongoing nature of remedial responsibilities of all businesses regardless of size or crisis context.
Secondly, the DSA does not clearly expand on the general UNGPs standards on remedial processes. To clarify, the DSA refers to remedy as: (i) the right to seek judicial remediesFootnote 144; (ii) an interim non-judicial measure to ensure effective investigation of infringements and enforcement or to prevent future infringementsFootnote 145; and (iii) an out-of-court dispute settlement for human rights infringements.Footnote 146 These elements seem to broadly reflect, respectively: (i) the State’s obligation to ensure the right to an effective remedy; (ii) an operational-level grievance mechanism; and (iii) the legitimacy requirement for a non-judicial remedial process. However, these mechanisms require effective, impartial and legitimate implementation and oversight. For example, concerns arise as to whether an out-of-court mechanism not empowered to impose binding decisions will provide access to an effective remedy.Footnote 147 This discussion is further elaborated on in Section V.B. ‘Regulatory and Administrative Oversight’.
Thirdly, the DSA does not address the general UNGPs standards on remedial outcomes. In this context, the DSA missed an opportunity to provide harmonised guidance and steer the discussion on the remedial outcomes best suited to online harms caused or contributed to by online platforms, including, but not limited to, criminal hate speech.
As a result, the DSA alone does not articulate a solid or comprehensive framework on the corporate human rights remedial responsibilities of internet intermediaries, including online platforms. This article argues that, similarly to the UNGPs, remedial responsibilities, processes and outcomes ought to have been addressed in the DSA together, either under Chapter II after the liability provisions or independently in a separate chapter on remedial responsibilities. Furthermore, in the context of hate speech, this article argues that the DSA should have clarified that online platforms, with a particular emphasis on VLOPs due to their systemic risks, which caused or substantially contributed to criminal hate speech have to comply with corporate remedial responsibilities. These corporate remedial responsibilities are heightened in the case of criminal hate speech amounting to gross human rights violations. A way to promote legal coherence between the DSA and the corporate human rights responsibilities framework is to read the DSA in conjunction with the CSDDD; this approach is advanced in Section V.B.Footnote 148
Supplementary Corporate Remedial Frameworks for Online Hate Speech
At the European level, one legal instrument and three policy instruments complement the corporate remedial framework in the DSA applicable to hate speech on online platforms, i.e., respectively, the 2018-revised AVMSD, the Code of Conduct on countering illegal hate speech online, and Recommendations CM/Rec(2022)16 and CM/Rec(2014)6.
The 2018-revised AVMSD prescribes the State’s obligation to regulate inter alia video-sharing platforms with the goals of protecting children and consumers, combating racial and religious hatred and safeguarding media pluralism.Footnote 149 In the AVMSD, video-sharing platforms include online platforms disseminating user-generated videos with the purpose of informing, entertaining or educating, and where the organisation of content is decided by the video-sharing platform.Footnote 150 Art. 28b of the AVMSD addresses businesses directly and establishes the corporate human rights responsibilities of video-sharing platforms to moderate content.Footnote 151 Analysing the remedial responsibilities framework in the AVMSD, Art. 28b(3)(i) clarifies that video-sharing platforms should establish ‘easy-to-use’ complaints mechanisms.Footnote 152 Regarding remedial outcomes, the 2010 version of the AVMSD had included a specific remedial outcome for audiovisual media services, i.e., the right of reply.Footnote 153 Nevertheless, the 2018-revised AVMSD did not clarify whether this provision applies to video-sharing platforms.Footnote 154
The Code of Conduct on countering illegal hate speech online was agreed upon in 2016 between the European Commission and internet intermediaries, some of which qualify as VLOPs under the DSA.Footnote 155 This co-regulatory instrument establishes minimum transparency requirements for content moderation aiming to counter online hate speech, which include clear communication to users regarding the processes to notify, review and request removal of hate speech. Nevertheless, similarly to the DSA, the Code of Conduct does not provide a comprehensive framework on the corporate remedial responsibilities, processes or outcomes required of online platforms which have caused or contributed to hate speech.
CM/Rec(2022)16 reiterates the right to an effective remedy,Footnote 156 and clarifies that remedial processes should be accessible through civil, administrative, and out-of-court mechanisms.Footnote 157 Additionally, CM/Rec(2022)16 explains that some of the most adequate remedial outcomes for online hate speech include: compensation, deletion, blocking, injunctive relief, publication of an acknowledgment that a post constituted hate speech, fines and loss of licence.Footnote 158 Reviewing the CM/Rec(2022)16 corporate remedial standards against the UNGPs, it becomes clear that, though it expands on remedial processes and outcomes, CM/Rec(2022)16 missed an opportunity to distinguish between the State’s duty to ensure access to the right to remedy and the corporate remedial responsibilities of online platforms.
CM/Rec(2014)6 elaborates on human rights for internet users and advances that, for criminal acts committed online, the most effective remedies include inter alia an inquiry, an explanation by the service provider, the possibility to reply, reinstatement of user-created content, reconnection to the Internet and compensation.Footnote 159 Similarly to CM/Rec(2022)16, this is an important analysis of the suitability of remedial outcomes for online criminal acts, which sheds light on the application of the UNGPs remedial framework to online platforms.
Overall, despite occasional references in the European regulatory framework to the corporate remedial responsibilities of online platforms, these instruments lack a comprehensive approach to the framework on corporate remedial responsibilities, processes and required outcomes for online platforms that caused or contributed to criminal hate speech.
B. Proposed Standards for a Comprehensive European Framework
This section proposes standards to address existing loopholes and outlines a comprehensive framework on corporate human rights remedial responsibilities in Europe. Firstly, it expands on the complementarity between the DSA and the CSDDD. Secondly, it develops concepts around the types of regulatory or administrative oversight. Thirdly, this section advances avenues to clarify the application of the DSA to remedial responsibilities by exploring the modes of responsibility, remedial processes and remedial outcomes applicable to online platforms which caused or contributed to criminal hate speech. Finally, this section delves deeper into three types of remedial outcomes and how these could apply to cases of hate speech disseminated by platforms, i.e., restitution and satisfaction; compensation and rehabilitation; and guarantees of non-repetition.
Complementarity between the DSA and the CSDDD
The DSA should be read in tandem with the CSDDD to ensure an adequate framing of corporate remedial responsibilities in Europe. Since both instruments embody concepts of human rights due diligence, a joint analysis of the DSA and the CSDDD strengthens the applicable European corporate remedial responsibilities framework. Though the DSA does not clearly identify the modes of responsibility, the legal requirements for remedial processes or the remedial outcomes applicable to online hate speech, this instrument does effectively prescribe corporate human rights responsibilities for internet intermediaries, including online platforms, to counter potential or actual adverse impacts on human rights.
Additionally, the CSDDD expands in its Preamble on crucial concepts pertaining to the corporate remedial framework. This instrument starts by clarifying the corporate responsibility to remediate when a business caused or jointly caused adverse impact on human rights as well as the corporate responsibility to use its leverage to influence business partners to prevent or mitigate adverse human rights impacts.Footnote 160
In this context, the CSDDD could have clarified that, following the UNGPs, remediation is due when a business caused or contributed to an actual adverse impact on human rights, and that the responsibility to use leverage applies in cases of potential avoidable harm or when a company is directly linked (without meeting the legal threshold of having caused or contributed)Footnote 161 to adverse human rights impacts.Footnote 162 Nevertheless, this instrument clarifies that companies must receive complaints about both actual and potential human rights harms and that the complaint mechanisms must align with UN Guiding Principle 31, in that these mechanisms must be fair, accessible, publicly available, predictable and transparent.Footnote 163
The CSDDD also expands on remedial outcomes clarifying that remediation may be financial or non-financial and that its goal must be to restore affected individuals to a state as close as possible to that before the harm took place.Footnote 164 Hence, a joint analysis of both the DSA and the originally adopted text of the CSDDD results in a more solid framing of the corporate due diligence remedial responsibilities.
Nevertheless, this article recognises that significant revisions to the CSDDD are being discussed as part of the Omnibus Simplification Package and that any curtailment of the due diligence responsibilities resulting from the ongoing negotiations will increase the need to clarify the EU corporate remedial responsibilities framework.Footnote 165 In the context of online platforms, this means that ensuring an adequate alignment of the DSA with the international corporate remedial responsibilities framework stemming from the UNGPs will remain an effective pathway to strengthen the applicable European corporate remedial framework.
Regulatory and Administrative Oversight
Another aspect that could strengthen the European corporate remedial framework is linked to the regulatory and administrative oversight of the provisions in the DSA and in the CSDDD. Remedy can be pursued directly with the company, through a non-judicial out-of-court mechanism, and through judicial proceedings.
In the context of the DSA, the National Digital Services CoordinatorsFootnote 166 will assume the role of independent administrative authorities central to monitoring, supervising and enforcing the provisions of the DSA. Hence, the National Digital Services Coordinators will also take on the role of ensuring adequate access to an effective remedy, for example, by assessing whether the ‘out-of-court mechanisms’ provide access to an effective remedy.Footnote 167
National courts are another actor central to ensuring access to an effective remedy. Domestically, the judiciary will have the task of determining whether the implementation of the extra-judicial ‘out-of-court mechanism’ has adequately enforced the right to remedy.
It should be noted that the out-of-court mechanism represents only one type of available remedy, as affected individuals can always file judicial claims with domestic courts. The CSDDD confirms that affected stakeholders should not be required to file complaints with the company before seeking remediation through judicial or non-judicial mechanisms.Footnote 168 Similarly, affected stakeholders are not required to exhaust non-judicial mechanisms before lodging their remediation claims with the national courts.Footnote 169
The CSDDD also provides detailed guidance regarding situations where a company fails to remediate. In such cases, national authorities have an obligation to order remediation based on either the national authority’s own initiative or substantiated concerns raised under the CSDDD.Footnote 170 Thus, remedies can be imposed nationally or EU-wide in complement to those already existing in domestic jurisdictions. Finally, the CSDDD also clarifies that remedial measures do not prevent penalties or civil liability under national law.
Clarifications in the DSA
Despite the clarifications resulting from a joint analysis of the DSA and the CSDDD, this article suggests that the DSA could further elaborate on three remedial aspects: modes of responsibilities, remedial processes and remedial outcomes. These standards build on the general framework stemming from the UNGPs on corporate remedial responsibilities, incorporated with enforceable nature in Europe through the CSDDD.
Regarding the modes of responsibility, the European regulatory framework should clarify, in a consistent manner, that online platforms, with an emphasis on VLOPs as per the DSA, which caused or contributed to adverse human rights impacts are responsible for providing remediation. Hence, this remedial responsibility applies to online platforms which caused or contributed to criminal hate speech. This can be achieved, for example, through the development of an additional chapter in the DSA. The clarification of the modes of responsibility is all the more important in cases where the online platform caused or contributed to criminal hate speech amounting to gross violations of human rights. For cases where the online platform was directly linked to the actual or potential but avoidable dissemination of criminal hate speech, it should use its leverage to prevent or mitigate said criminal hate speech.
Regarding the remedial processes, the European regulatory framework should clarify that remedial processes ought to be legitimate, prompt and impartial in addressing adverse human rights impacts, including the dissemination of criminal hate speech on online platforms. Though the DSA standardises operational grievance mechanisms such as internal appeals and transparency standards, European legislators should ensure that remedial processes apply human rights standards rather than terms and conditions privately decided by online platforms, which are often misaligned with human rights.
The European regulatory framework fails to establish a clear and comprehensive approach to the corporate remedial outcomes required of online platforms which caused or contributed to adverse human rights impacts, including cases of criminal hate speech and of criminal hate speech amounting to gross human rights violations. The following subsections explore the suitability of remedial outcomesFootnote 171 by building on the framework of remedial outcomes for criminal acts online. The proposed remedial outcomes include: restitution and satisfaction as amplification of survivors’ speech; compensation and rehabilitation beyond the area of services; and guarantees of non-repetition as changes to business models. These remedial outcomes could be imposed by the European Commission as interim non-judicial measures applicable to online platforms which caused or contributed to criminal hate speech.Footnote 172
For the overall operationalisation of these standards, this article recommends that the European Commission issue detailed guidance on Art. 21 of the DSA, which should expand on the corporate remedial responsibilities elaborated in the Preamble of the CSDDDFootnote 173 and further developed in the UNGPs corporate human rights remedial responsibilities framework. Such guidance should explicitly clarify the modes of responsibility, remedial processes and remedial outcomes suitable to effectively and promptly remediate people harmed by criminal hate speech disseminated by online platforms. These standards are all the more urgent to clarify for VLOPs as per the DSA and for cases of criminal hate speech amounting to gross violations of human rights. Furthermore, given the broadly discussed ‘Brussels effect’,Footnote 174 whereby European Union policy and regulation can have wide-reaching impact beyond the Union’s borders, it is particularly relevant for the European Commission to issue clarifying guidance on how to strengthen the DSA’s connection with the corporate human rights due diligence remedial responsibilities.
Restitution and Satisfaction as Amplification of Survivors’ Speech
Online platforms which caused or contributed to criminal hate speech must provide restitution as a means to restore, to the extent possible, the exercise of the human rights that were adversely impacted. In compliance with the standards on satisfaction, businesses must acknowledge the acts that violated international law and restore the survivors’ dignity.Footnote 175
Though a vast array of harms can result from human rights violations in these cases,Footnote 176 this section proposes a remedy for the specific harm of constrained online participation. Some of the most commonly reported harms resulting from online hate speech (and even more so from criminal hate speech) are disempowerment, silencing and, ultimately, the disengagement of targeted communities from online platforms.Footnote 177
One remedy for the constrained participation of communities targeted by hate speech is the speaking back capabilities framework advanced by Gelber.Footnote 178 In this framework, Gelber contends that policy and legal approaches should support people targeted by hate speech who wish to respond to it.Footnote 179 This direct engagement in the response process is conceptualised as the empowering act which enables communities targeted by hate speech to overcome the oppression and harm of constrained participation.Footnote 180 Gelber explains that this framework can result in policies of affirmative speech, in which actors that enabled and hosted hate speech should likewise facilitate the response and counter-narratives.Footnote 181
This article expands on Gelber’s speaking back framework by applying it to the context of online platforms. Importantly, it is widely discussed how the harm caused by hate speech is aggravated by online platforms when their algorithms demote counter-narratives.Footnote 182 In this context, this article suggests that online platforms which caused or contributed to criminal hate speech should, as an effective restitution remedy, introduce affirmative speech policies in their content ranking, moderation and recommendation algorithms.
As a result, for a given period, online platforms should amplify survivors’ speech through their content ranking algorithms. Similarly, online platforms should deploy content moderation algorithms that specifically detect and apply heightened scrutiny to hate speech posts targeting marginalised communities, with the goal of avoiding double victimisation. Finally, to help reconnect marginalised people as groups, online platforms should adopt affirmative speech policies in their link recommendation algorithms by purposefully, for a given period, connecting people marginalised and targeted by such criminal hate speech. A minimal illustrative sketch of how such time-limited affirmative speech policies could be expressed follows below.
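To make the proposal more concrete, the following sketch illustrates, under stated assumptions, one possible way of parameterising a time-limited ranking boost for survivors’ speech and a reconnection step in link recommendations. All identifiers, weights and time windows (for example, SURVIVOR_BOOST and AFFIRMATIVE_POLICY_DURATION) are hypothetical values introduced solely for illustration; they are not drawn from any existing platform system or legal instrument.

```python
# Illustrative sketch only: a minimal, hypothetical expression of an
# "affirmative speech" restitution policy in a ranking pipeline.
# All names, weights and time windows are assumptions for illustration.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    post_id: str
    author_id: str
    base_rank_score: float   # score produced by the platform's usual ranking model

# Hypothetical registry of accounts of the community targeted by the
# criminal hate speech, valid only during the remedial period.
AFFIRMATIVE_POLICY_START = datetime(2025, 1, 1)
AFFIRMATIVE_POLICY_DURATION = timedelta(days=90)   # "for a given period"
SURVIVOR_ACCOUNTS = {"user_123", "user_456"}       # placeholder identifiers
SURVIVOR_BOOST = 1.5                               # illustrative boost factor

def policy_active(now: datetime) -> bool:
    """The boost applies only within the remedial window."""
    return AFFIRMATIVE_POLICY_START <= now < AFFIRMATIVE_POLICY_START + AFFIRMATIVE_POLICY_DURATION

def adjusted_rank_score(post: Post, now: datetime) -> float:
    """Amplify survivors' speech while the remedial policy is active."""
    if policy_active(now) and post.author_id in SURVIVOR_ACCOUNTS:
        return post.base_rank_score * SURVIVOR_BOOST
    return post.base_rank_score

def recommend_connections(user_id: str, now: datetime) -> list[str]:
    """Suggest reconnection among members of the targeted community."""
    if policy_active(now) and user_id in SURVIVOR_ACCOUNTS:
        return sorted(SURVIVOR_ACCOUNTS - {user_id})
    return []
```

The time-bound design choice reflects the restitutive character of the remedy: the amplification is intended to restore constrained participation for a defined remedial period rather than to alter the ranking system permanently.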
Compensation and Rehabilitation Beyond the Area of Services
Online platforms which caused or contributed to criminal hate speech should remediate psychological, physical and material harms through rehabilitation and compensation. This overarching remedial responsibility clarifies that online platforms are responsible for remediating survivors beyond their area of services.
For cases of criminal hate speech amounting to gross human rights violations, online platforms are explicitly required to ensure access to, and importantly to pay for, the medical and psychological services required for rehabilitation and compensation. Moreover, in these cases, the conceptualisation of victims expressly includes not only the directly affected persons but also others who are closely related to them. Finally, the European Commission may impose asset freezes on online platforms which caused or contributed to criminal hate speech amounting to gross human rights violations.Footnote 183
Applying these remedies to the example of Meta’s significant contribution to the genocide of the Rohingya in Myanmar, it becomes clear that Meta has the corporate remedial responsibility to compensate and rehabilitate beyond its area of services and should allocate funds accordingly. This responsibility should address material harms, including lost opportunities such as limited access to employment or education. This could be achieved through a class action remedy in which the company makes funds available to the affected community as part of a settlement agreement.
Guarantees of Non-Repetition Requiring Business Models’ Changes
Many online platforms have adopted business models built on the design and deployment of content moderation, ranking and recommendation algorithms that maximise profit and user engagement, often at the expense of human rights.Footnote 184 All online platforms have the corporate human rights responsibility to identify, prevent, mitigate and remediate adverse human rights impacts.Footnote 185 Online platforms which caused or contributed to adverse human rights impacts, such as criminal hate speech, have a heightened responsibility to remediate, including by adopting guarantees of non-repetition.
This article proposes the operationalisation of guarantees of non-repetition premised on a change of business models and grounded in two main elements: (1) enforcing content moderation, ranking and recommendation algorithms based on human rights standards; (2) enforcing an alignment of the terms of service with the international human rights standards on the conceptualisation of criminal hate speech and with the corporate human rights responsibilities framework in the UNGPs.
First, online platforms should ensure that their content moderation algorithms remove criminal hate speech. Notably, as per Art. 5(1)(a) of the AI Act, online platforms are prohibited from deploying algorithms that are likely to lead to violence, as is the case with criminal hate speech. A key provision for verifying compliance with these responsibilities is Art. 40 of the DSA, which enables researchers to access data from VLOPs to investigate the impact of algorithms on systemic risks, including hate speech. In this context, this article suggests that, when assessing compliance with Art. 5 of the AI Act (in non-judicial or judicial actions), the burden of proof should be inverted to require online platforms to prove that they did not cause or contribute to criminal hate speech.Footnote 186 Though this inversion of the burden of proof is not clarified within the CSDDD, this article proposes it as a key means of ensuring that online platforms comply with their duty of care.Footnote 187
Regarding ranking and recommendation algorithms,Footnote 188 this article builds on two contextual variables used by the ECtHR to assess the severity of hate speech,Footnote 189 namely the political and social background and the speaker’s status or role in society, to suggest a tighter framework for monitoring criminal hate speech. This article suggests that, as a minimum legal standard, especially during times of conflict or elections, online platforms should proactively monitor users and posts whose levels of engagement exceed a certain risk threshold. The notion of engagement level expands on Gelber’s authority framework, whereby measuring the authority of a certain speech-act is relevant to analysing its capability to harm.Footnote 190 In this context, the engagement level corresponds to the notion of authority and could track two parameters: the number of followers of a given user and the number of reactions (e.g., reposts, comments) to a given post. A simple illustrative sketch of such an engagement-based threshold is set out below.
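As an illustration only, the following sketch shows one possible way of computing an engagement-based proxy for authority and tightening the review threshold during heightened-risk periods such as conflicts or elections. The additive weighting, the threshold values and the risk-period flag are assumptions introduced for this example; they are not prescribed by the DSA, the AI Act or the ECtHR case law.

```python
# Illustrative sketch only: a hypothetical operationalisation of the
# "engagement level" proxy for authority described above. Threshold and
# risk-period values are assumptions introduced for illustration.
from dataclasses import dataclass

@dataclass
class PostMetrics:
    post_id: str
    author_followers: int      # parameter 1: reach of the speaker
    reactions: int             # parameter 2: reposts, comments, likes

# Hypothetical contextual flag reflecting the political and social
# background (e.g., an ongoing conflict or election campaign).
HEIGHTENED_RISK_PERIOD = True
BASE_REVIEW_THRESHOLD = 10_000     # illustrative engagement threshold
RISK_PERIOD_MULTIPLIER = 0.5       # tighter threshold during risk periods

def engagement_level(metrics: PostMetrics) -> float:
    """A simple additive proxy for the 'authority' of a speech-act."""
    return metrics.author_followers + metrics.reactions

def requires_proactive_review(metrics: PostMetrics) -> bool:
    """Flag high-engagement posts for priority human-rights-based review."""
    threshold = BASE_REVIEW_THRESHOLD
    if HEIGHTENED_RISK_PERIOD:
        threshold *= RISK_PERIOD_MULTIPLIER
    return engagement_level(metrics) >= threshold

# Example: a post by an account with 20,000 followers and 1,200 reactions
# exceeds the illustrative risk-period threshold and is queued for review.
print(requires_proactive_review(PostMetrics("p1", 20_000, 1_200)))
```

Any deployed version of such a mechanism would, of course, need to be calibrated against human rights standards rather than the illustrative values used here, and to operate alongside the data access and scrutiny enabled by Art. 40 of the DSA.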
Second, online platforms should reflect the corporate human rights responsibilities framework in their terms of service, as instructed in the UNGPs and the CSDDD, including by adopting a conceptualisation of criminal hate speech that aligns with international human rights standards.Footnote 191 Furthermore, online platforms should transparently inform users about the proposed content moderation, ranking and recommendation standards, as well as their tighter contextual application during conflicts or elections. Finally, as a minimum legal standard, once criminal hate speech is detected, online platforms should be required to archive such content for future criminal investigations.Footnote 192
VI. Conclusion
This article addresses the key challenge of the lack of legal clarity about the corporate remedial responsibilities of online platforms that caused or contributed to criminal hate speech. The research question is two-fold: To ensure the right to an effective remedy, how can European legislators better align the legal framework on the corporate remedial responsibilities of online platforms which caused or contributed to criminal hate speech with the general framework on corporate remedial responsibilities? Additionally, are there heightened remedial responsibilities for very large online platforms or for cases of criminal hate speech amounting to gross violations of human rights?
By building upon the European conceptualisation of criminal hate speech, the European standards on the right to an effective remedy and the general framework of corporate human rights responsibilities, this article proposes three legal avenues for European legislators to clarify the framework on corporate remedial responsibilities.
First, it is important to clarify that the individual right to an effective remedy results not only in a State obligation to ensure the exercise of that right but also in direct corporate remedial responsibilities. Second, the corporate remedial responsibilities framework must address the modes of remedial responsibility, the remedial processes and the remedial outcomes. Third, the corporate remedial outcomes must be tailored to address the specific harms caused by criminal hate speech online through content moderation, ranking and recommendation algorithms.
Delving deeper into the most effective remedial outcomes for criminal hate speech, this article suggests the amplification of survivors’ speech as a means to redress the harm of constrained participation. For the remaining harms, online platforms should compensate and rehabilitate beyond their area of services. Finally, this article suggests that the only way in which online platforms can remediate through guarantees of non-repetition is by ensuring that their business models prioritise human rights over profit.
The standards proposed in this article on corporate remedial responsibilities apply to online platforms, with increased corporate human rights responsibilities for VLOPs and for platforms which caused or contributed to criminal hate speech amounting to gross violations of human rights. These suggested legal avenues apply first and foremost to the European context, given the existing regulatory framework clarifying the conceptualisation of criminal hate speech, particularly since the adoption of CM/Rec(2022)16.
Importantly, interventions to counter criminal hate speech on online platforms should neither be solely legalistic nor rely only on remedy after the adverse impact on human rights has occurred. There should be structural changes to address power imbalances and systems of privilege, namely through education, representation and the overall regulation of the private sector requiring the prioritisation of human rights.
Acknowledgements
The author would like to thank Simone van der Hof and Tarlach McGonagle for their insightful comments, as well as Katharine Gelber for the thought-provoking discussions during the research visit at the School of Political Science and International Studies of the University of Queensland, Brisbane, Australia. This research is the fourth article of the PhD thesis of the author.
Financial support
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 861047 for the NETHATE project.
Competing interests
The author declares none.