Posts Tagged ‘content moderation’

How social media companies can identify and respond to threats against human rights defenders

October 15, 2019

Image from Shutterstock.

Ginna Anderson writes in the ABA Journal's ABA Abroad column:

…Unfortunately, social media platforms are now a primary tool for coordinated, state-aligned actors to harass, threaten and undermine advocates. Although public shaming, death threats, defamation and disinformation are not unique to the online sphere, the nature of the internet has given them unprecedented potency. Bad actors are able to rapidly deploy their poisoned content on a vast scale. Social media companies have only just begun to recognize, let alone respond to, the problem. Meanwhile, individuals targeted through such coordinated campaigns must painstakingly flag individual pieces of content, navigate opaque corporate structures and attempt to survive the fallout. To address this crisis, companies such as Facebook, Twitter and YouTube must dramatically increase their capacity and will to engage in transparent, context-driven content moderation.

For human rights defenders, the need is urgent. … Since 2011, the ABA Center for Human Rights (CHR) has … noted with concern the coordination of “traditional” judicial harassment of defenders by governments, such as frivolous criminal charges or arbitrary detention, with online campaigns of intimidation. State-aligned online disinformation campaigns against individual defenders often precede or coincide with official investigations and criminal charges.

…

While social media companies generally prohibit incitement of violence and hate speech on their platforms, CHR has had to engage in additional advocacy with social media companies requesting the removal of specific pieces of content or accounts that target defenders. This extra advocacy has been required even where the content clearly violates a social media company’s terms of service and despite initial flagging by a defender. The situation is even more difficult where the threatening content is only recognizable with sufficient local and political context. The various platforms all rely on artificial intelligence, to varying degrees, to identify speech that violates their respective community standards. Yet current iterations of artificial intelligence are often unable to adequately evaluate context and intent.

Online intimidation and smear campaigns against defenders often rely on existing societal fault lines to demean and discredit advocates. In Guatemala, CHR recently documented a coordinated social media campaign to defame, harass, intimidate and incite violence against human rights defenders. Several of these attacks were linked to so-called “net centers,” where users were reportedly paid to amplify hateful content across platforms. Often, the campaigns relied on “coded” language that harks back to Guatemala’s civil war and the genocide of Mayan communities by calling indigenous leaders communists, terrorists and guerrillas.

These terms appear to have largely escaped social media company scrutiny, perhaps because none is a racist slur per se. And yet, by alluding to terms once used to justify violence against indigenous communities, the proliferation of these online attacks, as well as the status of those putting out the content, is contributing to a worsening climate of violence and impunity for violence against defenders. NPR reports that in 2018 alone, 26 indigenous defenders were murdered in Guatemala. In such a climate, the fear and intimidation felt by those targeted in such campaigns are not hyperbolic but based on an understanding of how violence can be sparked in Guatemala.

In order to address such attacks, social media companies must adopt policies that allow them to designate defenders as temporarily protected groups in countries that are characterized by state-coordinated or state-condoned persecution of activists. This is in line with international law that prohibits states from targeting individuals for serious harm based on their political opinion. To increase their ability to recognize and respond to persecution and online violence against human rights defenders, companies must continue to invest in their context-driven content moderation capacity, including complementing algorithmic monitoring with human content moderators well-versed in local dialects and historical and political context.

Context-driven content moderation should also take into account factors that increase the risk that online behavior will contribute to offline violence by identifying high-risk countries. These factors include a history of intergroup conflict and an overall increase in the number of instances of intergroup violence in the past 12 months; a major national political election in the next 12 months; and significant polarization of political parties along religious, ethnic or racial lines. Countries where these and other risk factors are present call for proactive approaches to identify problematic accounts and coded threats against defenders and marginalized communities, such as those shown in Equality Labs’ “Facebook India” report.

Companies should identify, monitor and be prepared to deplatform key accounts that are consistently putting out denigrating language and targeting human rights defenders. This must go hand in hand with the greater efforts that companies are finally beginning to take to identify coordinated, state-aligned misinformation campaigns. Focusing on the networks of users who abuse the platform, instead of looking solely at how the online abuse affects defenders’ rights online, will also enable companies to more quickly evaluate whether the status of the speaker increases the likelihood that others will take up any implicit call to violence or will be unduly influenced by disinformation.

This abuser-focused approach will also help to decrease the burden on defenders to find and flag individual pieces of content and accounts as problematic. Many of the human rights defenders with whom CHR works are giving up on flagging, a phenomenon we refer to as flagging fatigue. Many have become fatalistic about the level of online harassment they face. This is particularly alarming as advocates targeted online may develop such a thick skin that they are no longer able to assess when their actual risk of physical violence has increased.

Finally, it is vital that social media companies pursue, and civil society demand, transparency in content moderation policy and decision-making, in line with the Santa Clara Principles. Put forward in 2018 by a group of academic experts, organizations and advocates committed to freedom of expression online, the principles are meant to guide companies engaged in content moderation and ensure that the enforcement of their policies is “fair, unbiased, proportional and respectful of users’ rights.” In particular, the principles call upon companies to publicly report on the number of posts and accounts taken down or suspended on a regular basis, as well as to provide adequate notice and meaningful appeal to affected users.

CHR routinely supports human rights defenders facing frivolous criminal charges related to their human rights advocacy online or whose accounts and documentation have been taken down absent any clear justification. This contributes to a growing distrust of the companies among the human rights community as apparently arbitrary decisions about content moderation are leaving advocates both over- and under-protected online.

As the U.N. special rapporteur on freedom of expression explained in his 2018 report, content moderation processes must include the ability to appeal the removal, or refusal to remove, content or accounts. Lack of transparency heightens the risk that calls to address the persecution of human rights defenders online will be subverted into justifications for censorship and restrictions on speech that is protected under international human rights law.

A common response when discussing the feasibility of context-driven content moderation is to compare it to reviewing all the grains of sand on a beach. But human rights defenders are not asking for the impossible. We are merely pointing out that some of that sand is radioactive—it glows in the dark, it is lethal, and there is a moral and legal obligation upon those that profit from the beach to deal with it.

Ginna Anderson, senior counsel, joined ABA CHR in 2012. She is responsible for supporting the center’s work to advance the rights of human rights defenders and marginalized communities, including lawyers and journalists at risk. She is an expert in health and human rights, media freedom, freedom of expression and fair trial rights. As deputy director of the Justice Defenders Program since 2013, she has managed strategic litigation, fact-finding missions and advocacy campaigns on behalf of human rights defenders facing retaliation for their work in every region of the world.

http://www.abajournal.com/news/article/how-can-social-media-companies-identify-and-respond-to-threats-against-human-rights-defenders

Social media councils – an answer to problems of content moderation and distribution?

June 17, 2019

In the running debate on the pros and cons of information technology, and its complex relation to freedom of information, the NGO ARTICLE 19 came forward on 11 June 2019 with an interesting proposal: “Social Media Councils”.

Social Media Councils: Consultation

In today’s world, dominant tech companies hold a considerable degree of control over what their users see or hear on a daily basis. Current practices of content moderation on social media offer very little in terms of transparency and virtually no remedy to individual users. The impact that content moderation and distribution (in other words, the composition of users’ feeds and the accessibility and visibility of content on social media) has on the public sphere is not yet fully understood, but legitimate concerns have been expressed, especially in relation to platforms that operate at such a level of market dominance that they can exert decisive influence on public debates.

This raises questions in relation to international laws on freedom of expression and has become a major issue for democratic societies. There are legitimate concerns motivating various efforts to address this issue, particularly regarding the capacity of giant social media platforms to influence the public sphere. However, as with many modern communication technologies, the benefits that individuals and societies derive from the existence of these platforms should not be ignored. The responsibilities of the largest social media companies are currently being debated in legislative, policy and academic circles across the globe, but many of the initiatives being put forward do not sufficiently account for the protection of freedom of expression.

In this consultation paper, ARTICLE 19 outlines a roadmap for the creation of what we have called Social Media Councils (SMCs), a model for a multi-stakeholder accountability mechanism for content moderation on social media. SMCs aim to provide an open, transparent, accountable and participatory forum to address content moderation issues on social media platforms on the basis of international standards on human rights. The Social Media Council model puts forward a voluntary approach to the oversight of content moderation: participants (social media platforms and all stakeholders) sign up to a mechanism that does not create legal obligations. Its strength and efficiency rely on voluntary compliance by platforms, whose commitment, when signing up, will be to respect and execute the SMC’s decisions (or recommendations) in good faith.

With this document, we present these different options and submit them to a public consultation. The key issues we seek to address through this consultation are:

  • Substantive standards: could SMCs apply international standards directly or should they apply a ‘Code of Human Rights Principles for Content Moderation’?
  • Functions of SMCs: should SMCs have a purely advisory role or should they be able to review individual cases?
  • Global or national: should SMCs be created at the national level or should there be one global SMC?
  • Subject-matter jurisdiction: should SMCs deal with all content moderation decisions of social media companies, or should they have a more specialised area of focus, for example a particular type of content?

The consultation also seeks input on a number of technical issues that will be present in any configuration of the SMC, such as:

  1. Constitution process
  2. Structure
  3. Geographic jurisdiction (for a national SMC)
  4. Rules of procedure (if the SMC is an appeals mechanism)
  5. Funding

An important dimension of the Social Media Council concept is that the proposed structure has no exact precedent: online content moderation is a new and challenging area, and the complexity of the issues raised by the creation of this new mechanism can only be addressed with a certain degree of creativity.

ARTICLE 19’s objective is to ensure that decisions on these core questions and the solutions to practical problems sought by this initiative are compatible with the requirements of international human rights standards, and are shaped by a diverse range of expertise and perspectives.

Read the consultation paper

Complete the consultation survey

https://www.article19.org/resources/social-media-councils-consultation/