Governments and companies are under increasing pressure to address illegal or undesirable content and expression online, but hasty or poorly crafted solutions can threaten human rights.
Laws, policies, and content moderation practices that hurt free expression have a disparate impact on those most at risk of human rights violations, including journalists, activists, human rights defenders, and members of oppressed or marginalized groups, such as women, religious or ethnic minority groups, people of color, and the LGBTQ community.
Companies decide whether content is removed or amplified on their platforms, following their own rules, often in ways that are not transparent. Their actions — or lack of action — regarding content can cause or contribute to societal harm. Their profit-driven role in spreading hate speech, disinformation, and illegal content, and in facilitating discrimination, is concerning.
At the same time, governments seeking to control the flow of information online have cited these very issues as a rationale for inherently blunt and disproportionate measures such as internet shutdowns. In some cases, state actors have also sought to deputize private companies to police expression using methods that are automated, opaque, lack remedy, and otherwise fail to align with international human rights principles. These “solutions” can be leveraged to silence entire communities, and can lead to or hide atrocities.
Notably, Access Now’s Digital Security Helpline, which works with individuals and organizations globally to keep them safe online, has seen an increase in cases related to content moderation decisions that affect users at risk. In 2019, approximately 20% of cases (~311) were related to content moderation.
When actors in this space make decisions about content moderation — that is, when they engage in “content governance” — they have the duty to consider human rights. Governments are obligated to protect these rights, while companies are responsible for respecting them.
To assist in this process, we have published 26 recommendations on content governance: a guide for lawmakers, regulators, and company policy makers. These human rights- and user-centric recommendations are summarized below and elaborated in full in the paper. Since the context is different for each country and region, our recommendations are not one-size-fits-all prescriptions. Instead, they are meant to serve as a baseline foundation for content governance that safeguards human rights.
We divide content governance into three main categories: state regulation, enforced by governments; self-regulation, exercised by platforms via content moderation or curation; and co-regulation, undertaken by governments and platforms together through mandatory or voluntary agreements.
Note that the following recommendations have been summarized; for detailed context and guidance, please refer to the full paper.
State regulation: 13 content governance recommendations
1. Abide by strict democratic principles
Any formal legal instrument must contain protective safeguards established through a democratic process that respects the principles of multistakeholderism and transparency, and its measures must be proportionate to their legitimate aim.
2. Enact safe harbors and liability exemptions
Intermediaries should be protected from liability for third-party content by a safe harbor regime; however, we oppose full immunity. Rules that protect intermediaries must enable ways to address the spread of illegal content.
3. Do not impose a general monitoring obligation
A general monitoring obligation is a mandate that state actors directly or indirectly impose on intermediaries to undertake active monitoring of the content and information that users share. This violates the right to freedom of expression and access to information.
4. Define adequate response mechanisms
For responses to illegal content to be adequate and rights-protective, response mechanisms should be defined in national legislation, tailored to specific categories of content, and include clear procedures and notification provisions, including notification of the users acting as content providers.
5. Establish clear rules for when liability exemptions drop
A legal framework should establish when and how online platforms are understood to have “actual knowledge” of illegal content on their platform (such as upon receipt of a court order).
6. Evaluate manifestly illegal content carefully and in a limited manner
Content is manifestly illegal when it is easily recognizable as such without further analysis, such as child sexual abuse material. Only a small percentage of content falls into this category; all other illegal content requires independent adjudication to be considered as such, and governments must take care not to broaden the definition, which would widen the scope for censorship. A private notification by a third party regarding manifestly illegal content is the only situation in which platforms should be held liable for failing to remove content absent an order from an independent adjudicator.
7. Build rights-respecting notice-and-action procedures
Notice-and-action procedures are the mechanisms online platforms follow to address illegal content upon receipt of a notification. To avoid broad censorship of context-dependent user-generated content, we suggest different types of notice-and-action mechanisms depending on the type of content being evaluated.
8. Limit temporary measures and include safeguards
The temporary blocking of content must be used only in time-sensitive matters, and blocking must be strictly limited in duration and constrained to specific types of illegal content. These requirements should be clearly defined by law, to prevent states from abusing this tool to restrict access to information without an appropriate procedure for determining its illegality.
9. Make sanctions for non-compliance proportionate
If sanctions become disproportionate – such as the blocking of services or imprisonment of platform representatives – it is very likely that they will lead to over-compliance, which could harm free expression and access to information shared on online platforms.
10. Use automated measures only in limited cases
Given the enormous volume of content shared on platforms, the use of automated measures to detect illegal content is often necessary. However, this technology cannot interpret context before flagging content for blocking or takedown. Consequently, the use of automated measures should be strictly limited in scope, based on clear and transparent regulation, and accompanied by appropriate safeguards to mitigate the possible negative impact on users' human rights.
11. Legislate safeguards for due process
To provide legal certainty, predictability, and proportionality in content takedown measures, it is essential to ensure a process for well-founded decision-making, notifications, and counter-notifications before any action is taken.
12. Create meaningful transparency and accountability obligations
Regulators cannot properly monitor the implementation and impact of content governance laws if states and intermediaries do not issue transparency reports. These reports should focus on the quality of adopted measures, instead of the quantity of content removed from platforms.
13. Guarantee users’ rights to appeal and effective remedy
Errors are inevitable in content governance decisions; therefore, appeal mechanisms, including the option of counter-notices for content providers, are the principal guarantee of procedural fairness.
Javier Pallero @javierpallero