Posts Tagged ‘content moderation’

Arab Spring: information technology platforms no longer support human rights defenders in the Middle East and North Africa

December 18, 2020

Jason Kelley of the Electronic Frontier Foundation (EFF), writing on 17 December 2020, summarizes a joint statement by over 30 NGOs which says that the platform policies and content moderation procedures of the tech giants now too often lead to the silencing and erasure of critical voices from across the region. Arbitrary and non-transparent account suspensions and removals of political and dissenting speech have become so frequent and systematic in the region that they cannot be dismissed as isolated incidents or the result of transitory errors in automated decision-making.

Young people protest in Morocco, 2011, photo by Magharebia

This year marks the tenth anniversary of what became known as the “Arab Spring”, in which activists and citizens across the Middle East and North Africa (MENA) used social media to document the conditions in which they lived, to push for political change and social justice, and to draw the world’s attention to their movement. For many, it was the first time they had seen how the Internet could play a role in pushing for human rights across the world. Emerging social media platforms like Facebook, Twitter and YouTube all basked in the reflected glory of press coverage that centered their part in the protests, often to the exclusion of those who were actually on the streets. The years after the uprisings failed to live up to the optimism of the time. Offline, the authoritarian backlash against the democratic protests has meant that many of those who fought for justice a decade ago are still fighting now.

The letter asks for several concrete measures to ensure that users across the region are treated fairly and are able to express themselves freely:

  • Do not engage in arbitrary or unfair discrimination.
  • Invest in the regional expertise to develop and implement context-based content moderation decisions aligned with human rights frameworks.
  • Pay special attention to cases arising from war and conflict zones.
  • Preserve restricted content related to cases arising from war and conflict zones.
  • Go beyond public apologies for technical failures, and provide greater transparency, notice, and offer meaningful and timely appeals for users by implementing the Santa Clara Principles on Transparency and Accountability in Content Moderation.

Content moderation policies are not only critical to ensuring robust political debate. They are key to expanding and protecting human rights.  Ten years out from those powerful protests, it’s clear that authoritarian and repressive regimes will do everything in their power to stop free and open expression. Platforms have an obligation to note and act on the effects content moderation has on oppressed communities, in MENA and elsewhere. [see also: https://humanrightsdefenders.blog/2020/06/03/more-on-facebook-and-twitter-and-content-moderation/]

In 2012, Mark Zuckerberg, CEO and Founder of Facebook, wrote:

By giving people the power to share, we are starting to see people make their voices heard on a different scale from what has historically been possible. These voices will increase in number and volume. They cannot be ignored. Over time, we expect governments will become more responsive to issues and concerns raised directly by all their people rather than through intermediaries controlled by a select few.

Instead, governments around the world have chosen authoritarianism, and platforms have contributed to the repression. It’s time for that to end.

Read the full letter demanding that Facebook, Twitter, and YouTube stop silencing critical voices from the Middle East and North Africa, reproduced below:

17 December 2020

Open Letter to Facebook, Twitter, and YouTube: Stop silencing critical voices from the Middle East and North Africa

Ten years ago today, 26-year old Tunisian street vendor Mohamed Bouazizi set himself on fire in protest over injustice and state marginalization, igniting mass uprisings in Tunisia, Egypt, and other countries across the Middle East and North Africa. 

As we mark the 10th anniversary of the Arab Spring, we, the undersigned activists, journalists, and human rights organizations, have come together to voice our frustration and dismay at how platform policies and content moderation procedures all too often lead to the silencing and erasure of critical voices from marginalized and oppressed communities across the Middle East and North Africa.

The Arab Spring is historic for many reasons, and one of its outstanding legacies is how activists and citizens have used social media to push for political change and social justice, cementing the internet as an essential enabler of human rights in the digital age.   

Social media companies boast of the role they play in connecting people. As Mark Zuckerberg famously wrote in his 2012 Founder’s Letter:

“By giving people the power to share, we are starting to see people make their voices heard on a different scale from what has historically been possible. These voices will increase in number and volume. They cannot be ignored. Over time, we expect governments will become more responsive to issues and concerns raised directly by all their people rather than through intermediaries controlled by a select few.”

Zuckerberg’s prediction was wrong. Instead, more governments around the world have chosen authoritarianism, and platforms have contributed to their repression by making deals with oppressive heads of state; opening doors to dictators; and censoring key activists, journalists, and other changemakers throughout the Middle East and North Africa, sometimes at the behest of other governments:

  • Tunisia: In June 2020, Facebook permanently disabled more than 60 accounts of Tunisian activists, journalists, and musicians on scant evidence. While many were reinstated, thanks to the quick reaction from civil society groups, accounts of Tunisian artists and musicians still have not been restored. We sent a coalition letter to Facebook on the matter but we didn’t receive a public response.
  • Syria: In early 2020, Syrian activists launched a campaign to denounce Facebook’s decision to take down/disable thousands of anti-Assad accounts and pages that documented war crimes since 2011, under the pretext of removing terrorist content. Despite the appeal, a number of those accounts remain suspended. Similarly, Syrians have documented how YouTube is literally erasing their history.
  • Palestine: Palestinian activists and social media users have been campaigning since 2016 to raise awareness around social media companies’ censorial practices. In May 2020, at least 52 Facebook accounts of Palestinian activists and journalists were suspended, and more have since been restricted. Twitter suspended the account of a verified media agency, Quds News Network, reportedly on suspicion that the agency was linked to terrorist groups. Requests to Twitter to look into the matter have gone unanswered. Palestinian social media users have also expressed concern numerous times about discriminatory platform policies.
  • Egypt: In early October 2019, Twitter suspended en masse the accounts of Egyptian dissidents living in Egypt and across the diaspora, directly following the eruption of anti-Sisi protests in Egypt. Twitter suspended the account of one activist with over 350,000 followers in December 2017, and the account still has yet to be restored. The same activist’s Facebook account was also suspended in November 2017 and restored only after international intervention. YouTube removed his account earlier in 2007.

Examples such as these are far too numerous, and they contribute to the widely shared perception among activists and users in MENA and the Global South that these platforms do not care about them, and often fail to protect human rights defenders when concerns are raised.  

Arbitrary and non-transparent account suspension and removal of political and dissenting speech has become so frequent and systematic that they cannot be dismissed as isolated incidents or the result of transitory errors in automated decision-making. 

While Facebook and Twitter can be swift in responding to public outcry from activists or private advocacy by human rights organizations (particularly in the United States and Europe), in most cases responses to advocates in the MENA region leave much to be desired. End-users are frequently not informed of which rule they violated, and are not provided a means to appeal to a human moderator. 

Remedy and redress should not be a privilege reserved for those who have access to power or can make their voices heard. The status quo cannot continue. 

The MENA region has one of the world’s worst records on freedom of expression, and social media remains critical for helping people connect, organize, and document human rights violations and abuses. 

We urge you to not be complicit in censorship and erasure of oppressed communities’ narratives and histories, and we ask you to implement the following measures to ensure that users across the region are treated fairly and are able to express themselves freely:

  • Do not engage in arbitrary or unfair discrimination. Actively engage with local users, activists, human rights experts, academics, and civil society from the MENA region to review grievances. Regional political, social, cultural context(s) and nuances must be factored in when implementing, developing, and revising policies, products and services. 
  • Invest in the necessary local and regional expertise to develop and implement context-based content moderation decisions aligned with human rights frameworks in the MENA region.  A bare minimum would be to hire content moderators who understand the various and diverse dialects and spoken Arabic in the twenty-two Arab states. Those moderators should be provided with the support they need to do their job safely, healthily, and in consultation with their peers, including senior management.
  • Pay special attention to cases arising from war and conflict zones to ensure content moderation decisions do not unfairly target marginalized communities. For example, documentation of human rights abuses and violations is a legitimate activity distinct from disseminating or glorifying terrorist or extremist content. As noted in a recent letter to the Global Internet Forum to Counter Terrorism, more transparency is needed regarding definitions and moderation of terrorist and violent extremist content (TVEC).
  • Preserve restricted content related to cases arising from war and conflict zones that Facebook makes unavailable, as it could serve as evidence for victims and organizations seeking to hold perpetrators accountable. Ensure that such content is made available to international and national judicial authorities without undue delay.
  • Public apologies for technical errors are not sufficient when erroneous content moderation decisions are not changed. Companies must provide greater transparency, notice, and offer meaningful and timely appeals for users. The Santa Clara Principles on Transparency and Accountability in Content Moderation, which Facebook, Twitter, and YouTube endorsed in 2019, offer a baseline set of guidelines that must be immediately implemented. 

Signed,

Access Now
Arabic Network for Human Rights Information — ANHRI
Article 19
Association for Progressive Communications — APC
Association Tunisienne de Prévention Positive
Avaaz
Cairo Institute for Human Rights Studies (CIHRS)
The Computational Propaganda Project
Daaarb — News — website
Egyptian Initiative for Personal Rights
Electronic Frontier Foundation
Euro-Mediterranean Human Rights Monitor
Global Voices
Gulf Centre for Human Rights (GCHR)
Hossam el-Hamalawy, journalist and member of the Egyptian Revolutionary Socialists Organization
Humena for Human Rights and Civic Engagement
IFEX
Ilam- Media Center For Arab Palestinians In Israel
ImpACT International for Human Rights Policies
Initiative Mawjoudin pour l’égalité
Iraqi Network for Social Media – INSMnetwork
I WATCH Organisation (Transparency International — Tunisia)
Khaled Elbalshy – Daaarb website – Editor in Chief
Mahmoud Ghazayel, Independent
Marlena Wisniak, European Center for Not-for-Profit Law
Masaar — Technology and Law Community
Michael Karanicolas, Wikimedia/Yale Law School Initiative on Intermediaries and Information
Mohamed Suliman, Internet activist
My.Kali magazine — Middle East and North Africa
Palestine Digital Rights Coalition (PDRC)
The Palestine Institute for Public Diplomacy
Pen Iraq
Quds News Network
Ranking Digital Rights
Rima Sghaier, Independent
Sada Social Center
Skyline International for Human Rights
SMEX
Syrian Center for Media and Freedom of Expression (SCM)
The Tahrir Institute for Middle East Policy (TIMEP)
Taraaz
Temi Lasade-Anderson, Digital Action
WITNESS
Vigilance Association for Democracy and the Civic State — Tunisia
7amleh – The Arab Center for the Advancement of Social Media

https://www.eff.org/deeplinks/2020/12/decade-after-arab-spring-platforms-have-turned-their-backs-critical-voices-middle

Facebook and YouTube are allowing themselves to become tools of the Vietnamese authorities’ censorship and harassment

December 1, 2020

On 1 December 2020, Amnesty International published a new report on how Facebook and YouTube are allowing themselves to become tools of the Vietnamese authorities’ censorship and harassment of its population, in an alarming sign of how these companies could increasingly operate in repressive countries. [see also: https://humanrightsdefenders.blog/2020/06/03/more-on-facebook-and-twitter-and-content-moderation/].

The 78-page report, “‘Let us Breathe!’: Censorship and criminalization of online expression in Viet Nam”, documents the systematic repression of peaceful online expression in Viet Nam, including the widespread “geo-blocking” of content deemed critical of the authorities, all while groups affiliated with the government deploy sophisticated campaigns on these platforms to harass everyday users into silence and fear.

The report is based on dozens of interviews with human rights defenders and activists, including former prisoners of conscience, lawyers, journalists and writers, in addition to information provided by Facebook and Google. It also reveals that Viet Nam is currently holding 170 prisoners of conscience, of whom 69 are behind bars solely for their social media activity. This represents a significant increase in the number of prisoners of conscience estimated by Amnesty International in 2018.

“In the last decade, the right to freedom of expression flourished on Facebook and YouTube in Viet Nam. More recently, however, authorities began focusing on peaceful online expression as an existential threat to the regime,” said Ming Yu Hah, Amnesty International’s Deputy Regional Director for Campaigns.

“Today these platforms have become hunting grounds for censors, military cyber-troops and state-sponsored trolls. The platforms themselves are not merely letting it happen – they’re increasingly complicit.

“In 2018, Facebook’s income from Viet Nam neared US$1 billion – almost one third of all revenue from Southeast Asia. Google, which owns YouTube, earned US$475 million in Viet Nam during the same period, mainly from YouTube advertising. The size of these profits underlines the importance for Facebook and Google of maintaining market access in Viet Nam.”

In April 2020, Facebook announced it had agreed to “significantly increase” its compliance with requests from the Vietnamese government to censor “anti-state” posts. It justified this policy shift by claiming the Vietnamese authorities were deliberately slowing traffic to the platform as a warning to the company.

Last month, in Facebook’s latest Transparency Report – its first since it revealed its policy of increased compliance with the Vietnamese authorities’ censorship demands – the company revealed a 983% increase in content restrictions based on local law as compared with the previous reporting period, from 77 to 834. Meanwhile, YouTube has consistently won praise from Vietnamese censors for its relatively high rate of compliance with censorship demands.

State-owned media reported Information Minister Nguyen Manh Hung as saying in October that compliance with the removal of “bad information, propaganda against the Party and the State” was higher than ever, with Facebook and Google complying with 95% and 90% of censorship requests, respectively.

Based on dozens of testimonies and evidence, Amnesty International’s report shows how Facebook and YouTube’s increasing censorship of content in Vietnam operates in practice.

In some cases, users see their content censored under vaguely worded local laws, including offences such as “abusing democratic freedoms” under the country’s Criminal Code. Amnesty International views these laws as inconsistent with Viet Nam’s obligations under international human rights law. Facebook then “geo-blocks” content, meaning it becomes invisible to anyone accessing the platform in Viet Nam.
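To make the mechanism concrete, here is a minimal, purely illustrative sketch of what “geo-blocking” amounts to: a post carries a list of jurisdictions where it has been restricted on “local law” grounds, and the service withholds it only from viewers whose requests come from those jurisdictions, while leaving it visible everywhere else. The data structure and function names below are hypothetical, not Facebook’s actual implementation.

```python
# Illustrative sketch only: content restricted under "local law" is hidden from
# viewers in the restricting country while staying visible elsewhere.
# Class and function names are hypothetical, not Facebook's implementation.
from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: str
    text: str
    restricted_in: set = field(default_factory=set)  # country codes with local-law restrictions


def visible_to(post: Post, viewer_country: str) -> bool:
    """Return True if a viewer located in `viewer_country` should see the post."""
    return viewer_country not in post.restricted_in


post = Post("p1", "Commentary naming senior party officials", restricted_in={"VN"})
print(visible_to(post, "VN"))  # False: geo-blocked inside Viet Nam
print(visible_to(post, "DE"))  # True: still visible to the rest of the world
```

The point of the sketch is simply that the post is not deleted globally; it is made invisible to one country, which is why affected users elsewhere may not even notice the restriction.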

Nguyen Van Trang, a pro-democracy activist now seeking asylum in Thailand, told Amnesty International that in May 2020, Facebook notified him that one of his posts had been restricted due to “local legal restrictions”. Since then, Facebook has blocked every piece of content he has tried to post containing the names of senior members of the Communist Party. 

Nguyen Van Trang has experienced similar restrictions on YouTube, which, unlike Facebook, gave him the option to appeal such restrictions. Some appeals have succeeded and others not, without YouTube providing any explanation.

Truong Chau Huu Danh is a well-known freelance journalist with 150,000 followers and a verified Facebook account. He told Amnesty International that between 26 March and 8 May 2020, he posted hundreds of pieces of content about a ban on rice exports and the high-profile death penalty case of Ho Duy Hai. In June, he realized these posts had all vanished without any notification from Facebook whatsoever.

Amnesty International heard similar accounts from other Facebook users, particularly when they tried to post about a high-profile land dispute in the village of Dong Tam, which pitted local villagers against the military-run telecommunications company Viettel. The dispute culminated in a confrontation between villagers and security forces in January 2020 that saw the village leader and three police officers killed.

After Facebook announced its new policy in April 2020, land rights activists Trinh Ba Phuong and Trinh Ba Tu reported that all the content they had shared about the Dong Tam incident had been removed from their timelines without their knowledge and without notification.

On 24 June 2020, the pair were arrested and charged with “making, storing, distributing or disseminating information, documents and items against the Socialist Republic of Vietnam” under Article 117 of the Criminal Code after they reported extensively on the Dong Tam incident. They are currently in detention. Their Facebook accounts have disappeared since their arrests under unknown circumstances. Amnesty International considers both Trinh Ba Phuong and Trinh Ba Tu to be prisoners of conscience.

The Vietnamese authorities’ campaign of repression often results in the harassment, intimidation, prosecution and imprisonment of people for their social media use. There are currently 170 prisoners of conscience imprisoned in Viet Nam, the highest number ever recorded in the country by Amnesty International. Nearly two in five (40%) have been imprisoned because of their peaceful social media activity.

Twenty-one of the 27 prisoners of conscience jailed in 2020, or 78%, were prosecuted because of their peaceful online activity under Articles 117 or 331 of the Criminal Code – the same repressive provisions that often form the basis of ‘local legal restrictions’ implemented by Facebook and YouTube.

These individuals’ supposed “crimes” include peacefully criticizing the authorities’ COVID-19 response on Facebook and sharing independent information about human rights online.

For every prisoner of conscience behind bars, there are countless people in Viet Nam who see this pattern of repression and intimidation and are understandably terrified about speaking their minds,” said Ming Yu Hah.

Amnesty International has documented dozens of incidents in recent years in which human rights defenders have received messages meant to harass and intimidate, including death threats. The systematic and organized nature of these harassment campaigns consistently bears the hallmarks of state-sponsored cyber-troops such as Du Luan Vien or “public opinion shapers” – people recruited and managed by the Communist Party of Viet Nam (CPV)’s Department of Propaganda to engage in psychological warfare online.

The activities of Du Luan Vien are complemented by those of “Force 47”, a cyberspace military battalion made up of some 10,000 state security forces whose function is to “fight against wrong views and distorted information on the internet”.

While “Force 47” and groups such as Du Luan Vien operate opaquely, they are known to engage in mass reporting campaigns targeting human rights-related content, often leading to content removals and account suspensions by Facebook and YouTube.

Additionally, Amnesty International’s investigation documented multiple cases of bloggers and social media users being physically attacked because of their posts by the police or plainclothes assailants, who operate with the apparent acquiescence of state authorities and with virtually no accountability for such crimes.


Putting an end to complicity

The Vietnamese authorities must stop stifling freedom of expression online. Amnesty International is calling for all prisoners of conscience in Viet Nam to be released immediately and unconditionally and for the amendment of repressive laws that muzzle freedom of expression.

Companies – including Facebook and Google – have a responsibility to respect all human rights wherever they operate. They should respect the right to freedom of expression in their content moderation decisions globally, regardless of local laws that muzzle freedom of expression. Tech giants should also overhaul their content moderation policies to ensure their decisions align with international human rights standards.

In October 2020, Facebook launched a global Oversight Board – presented as the company’s independent “Supreme Court” and its solution to the human rights challenges presented by content moderation. Amnesty International’s report reveals, however, that the Board’s bylaws will prevent it from reviewing the company’s censorship actions pursuant to local law in countries like Viet Nam.

“It’s increasingly obvious that the Oversight Board is incapable of solving Facebook’s human rights problems. Facebook should expand the scope of the Oversight Board to include content moderation decisions pursuant to local law; if not, the Board – and Facebook – will have again failed Facebook users,” said Ming Yu Hah.

[see also: https://humanrightsdefenders.blog/2020/04/11/algorithms-designed-to-suppress-isis-content-may-also-suppress-evidence-of-human-rights-violations/]

“Far from the public relations fanfare, countless people who dare to speak their minds in Viet Nam are being silenced. The precedent set by this complicity is a grave blow to freedom of expression around the world.”

https://www.amnesty.org/en/latest/news/2020/12/viet-nam-tech-giants-complicit/

https://www.theguardian.com/world/2020/dec/01/facebook-youtube-google-accused-complicity-vietnam-repression

https://thediplomat.com/2020/07/facebook-vietnams-fickle-partner-in-crime/

Facebook engineers resign due to Zuckerberg’s political stance

June 6, 2020

Image courtesy of Yang Jing/Unsplash

Yen Palec writes on 6 June 2020 that a group of Facebook employees has resigned because they do not agree with Mark Zuckerberg’s political stance. Some engineers are condemning the executive for his refusal to act on issues of politics and police brutality.

See: https://humanrightsdefenders.blog/2020/06/03/more-on-facebook-and-twitter-and-content-moderation/

In a blog post, the engineers claim that Facebook has become a “platform that enables politicians to radicalize individuals and glorify violence.” Several employees, many of whom are working from home due to the pandemic, are criticizing the company. While some claim that the First Amendment protects these hateful posts, many argue that it has gone too far.

These criticisms come from some of Facebook’s early and long-tenured employees. Among those who have voiced criticism are Dave Willner and Brandee Barker. They claim that the company’s policy may result in a double standard when it comes to political speech.

In terms of political speech, Twitter’s action set the standard for tech companies to follow. Human rights activists and free speech defenders rallied to Twitter’s side, but one major platform did not follow: Facebook. As the biggest social media platform in the world, the company has the power to influence almost any political debate.

https://micky.com.au/facebook-engineers-resign-due-to-zuckerbergs-political-stance/

More on Facebook and Twitter and content moderation

June 3, 2020

On 2 June 2020 many media outlets (here Natasha Kuma) wrote about the ‘hot potato’ in the social media debate: which posts are harmful and should be deleted or given a warning. It is interesting to note that the European Commission supported Twitter’s unprecedented decision to mark President Trump’s message about the situation in Minneapolis as violating the company’s rules on the glorification of violence.

EU Commissioner Thierry Breton said: “We welcome the contribution of Twitter, directing the social network towards the respected European approach.” Breton also wrote: “Recent events in the United States show that we need to find the right answers to difficult questions. What should be the role of digital platforms in terms of preventing the flow of misinformation during the election, or the crisis in health care? How to prevent the spread of hate speech on the Internet?” Vice-President of the European Commission Věra Jourová, in turn, said that politicians should respond to criticism with facts, not by resorting to threats and attacks.

Some employees of Facebook staged a virtual protest against Mark Zuckerberg’s decision not to take any action on Trump’s statements. The leaders of three American civil rights groups, after a conversation with Zuckerberg and COO Sheryl Sandberg, released a joint statement in which they say that human rights defenders were not satisfied with Mark Zuckerberg’s explanation of his position: “He (Zuckerberg) refuses to acknowledge that Facebook is promoting Trump’s call for violence against the protesters. Mark sets a very dangerous precedent.”

————-

Earlier – on 14 May 2020 – David Cohen wrote about Facebook having outlined learnings and steps it has taken as a result of its Human Rights Impact Assessments in Cambodia, Indonesia and Sri Lanka.

Facebook shared results from human rights impact assessments it commissioned in 2018 to evaluate the role of its services in Cambodia, Indonesia and Sri Lanka.

Director of human rights Miranda Sissons and product policy manager for human rights Alex Warofka said in a Newsroom post, “Freedom of expression is a foundational human right that allows for the free flow of information. We’re reminded how vital this is, in particular, as the world grapples with Covid-19, and accurate and authoritative information is more important than ever. Human rights defenders know this and fight for these freedoms every day. For Facebook, which stands for giving people voice, these rights are core to why we exist.”

Sissons and Warofka said that since this research was conducted, Facebook took steps to formalize an approach to determine which countries require more investment, including increased staffing, product changes and further research.

Facebook worked with BSR on the assessment of its role in Cambodia, and with Article One for Indonesia and Sri Lanka.

Recommendations that were similar across all three reports:

  • Improving corporate accountability around human rights.
  • Updating community standards and improving enforcement.
  • Investing in changes to platform architecture to promote authoritative information and reduce the spread of abusive content.
  • Improving reporting mechanisms and response times.
  • Engaging more regularly and substantively with civil society organizations.
  • Increasing transparency so that people better understand Facebook’s approach to content, misinformation and News Feed ranking.
  • Continuing human rights due diligence.

…Key updates to the social network’s community standards included a policy to remove verified misinformation that contributes to the risk of imminent physical harm, as well as protections for vulnerable groups (veiled women, LGBTQ+ individuals, human rights activists) who would run the risk of offline harm if they were “outed.”

Engagement with civil society organizations was formalized, and local fact-checking partnerships were bolstered in Indonesia and Sri Lanka.

Sissons and Warofka concluded, “As we work to protect human rights and mitigate the adverse impacts of our platform, we have sought to communicate more transparently and build trust with rights holders. We also aim to use our presence in places like Sri Lanka, Indonesia and Cambodia to advance human rights, as outlined in the United Nations Guiding Principles on Business and Human Rights and in Article One and BSR’s assessments. In particular, we are deeply troubled by the arrests of people who have used Facebook to engage in peaceful political expression, and we will continue to advocate for freedom of expression and stronger protections of user data.”

https://www.adweek.com/digital/facebook-details-human-rights-impact-assessments-in-cambodia-indonesia-sri-lanka/

————

But it is not all roses for Twitter either: On 11 May 2020 Frances Eve (deputy director of research at Chinese Human Rights Defenders) wrote about Twitter becoming the “Chinese Government’s Double Weapon: Punishing Dissent and Propagating Disinformation”.

She relates the story of former journalist Zhang Jialong whose “criminal activity,” according to the prosecutor’s charge sheet, is that “from 2016 onwards, the defendant Zhang Jialong used his phone and computer…. many times to log onto the overseas platform ‘Twitter,’ and through the account ‘张贾龙@zhangjialong’ repeatedly used the platform to post and retweet a great amount of false information that defamed the image of the [Chinese Communist] Party, the state, and the government.”…..

Human rights defenders like Zhang are increasingly being accused of using Twitter, alongside Chinese social media platforms like Weibo, WeChat, and QQ, to commit the “crime” of “slandering” the Chinese Communist Party or the government by expressing their opinions. As many Chinese human rights activists have increasingly tried to express themselves uncensored on Twitter, the police have stepped up their monitoring of the platform. Thirty minutes after activist Deng Chuanbin sent a tweet on May 16, 2019 that referenced the 30th anniversary of the Tiananmen Massacre, Sichuan police were outside his apartment building. He has been in pre-trial detention ever since, accused of “picking quarrels and provoking trouble.”

…..While the Chinese government systematically denies Chinese people their right to express themselves freely on the Internet, … the government has aggressively used blocked western social media platforms like Twitter to promote its propaganda and launch disinformation campaigns overseas…

Zhang Jialong’s last tweet was an announcement of the birth of his daughter on June 8, 2019. He should be free and be able to watch her grow up. She deserves to grow up in a country where her father isn’t jailed for his speech.

https://www.vice.com/en_us/article/v7ggvy/chinas-unleashing-a-propaganda-wolfpack-on-twitter-even-though-citizens-go-to-jail-for-tweeting

To see some other posts on content moderation: https://humanrightsdefenders.blog/tag/content-moderation/

Emi Palmor’s selection to Facebook oversight board criticised by Palestinian NGOs

May 16, 2020

After reporting on the Saudi criticism regarding the composition of Facebook’s new oversight board [https://humanrightsdefenders.blog/2020/05/13/tawakkol-karman-on-facebooks-oversight-board-doesnt-please-saudis/], here the position of Palestinian civil society organizations who are very unhappy with the selection of the former General Director of the Israeli Ministry of Justice.

On 15 May 2020, MENAFN – Palestine News Network – reports that Palestinian civil society organizations condemn the selection of Emi Palmor, the former General Director of the Israeli Ministry of Justice, to Facebook’s Oversight Board and raise the alarm about the impact that her role will play in further shrinking the space for freedom of expression online and the protection of human rights. While it is important that the Members of the Oversight Board should be diverse, it is equally essential that they are known to be leaders in upholding the rule of law and protecting human rights worldwide.

Under Emi Palmor’s direction, the Israeli Ministry of Justice petitioned Facebook to censor legitimate speech of human rights defenders and journalists because it was deemed politically undesirable. This is contrary to international human rights law standards and to recommendations issued by the United Nations (UN) Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, as well as by digital rights experts and activists, who argue that censorship must be rare and well justified to protect freedom of speech, and that companies should develop tools that ‘prevent or mitigate the human rights risks caused by national laws or demands inconsistent with international standards.’

During Palmor’s time at the Israeli Ministry of Justice (2014-2019), the Ministry established the Israeli Cyber Unit, ……….

Additionally, as documented in Facebook’s Transparency Report, since 2016 there has been an increase in the number of Israeli government requests for data, which now total over 700, 50 percent of which were submitted under ’emergency requests’ and were not related to legal processes. These are not isolated attempts to restrict Palestinian digital rights and freedom of expression online. Instead, they fall within the context of a widespread and systematic attempt by the Israeli government, particularly through the Cyber Unit formerly headed by Emi Palmor, to silence Palestinians, to remove social media content critical of Israeli policies and practices and to smear and delegitimize human rights defenders, activists and organizations seeking to challenge Israeli rights abuses against the Palestinian people.

 

Tawakkol Karman on Facebook’s Oversight Board doesn’t please Saudis

May 13, 2020

Nobel Peace Prize laureate Yemeni Tawakkol Karman (AFP)


On 10 May 2020 AlBawaba reported that Facebook had appointed Yemeni Nobel Peace Prize laureate Tawakkol Karman as a member of its newly-launched Oversight Board, an independent committee which will have the final say in whether Facebook and Instagram should allow or remove specific content. [ see also: https://humanrightsdefenders.blog/2020/04/11/algorithms-designed-to-suppress-isis-content-may-also-suppress-evidence-of-human-rights-violations/]

Karman, a human rights activist, journalist and politician, won the Nobel Peace Prize in 2011 for her role in Yemen’s Arab Spring uprising. Her appointment to the Facebook body has led to sharp reaction in the Saudi social media. She said that she has been subjected to a campaign of online harassment by Saudi media ever since she was appointed to Facebook’s Oversight Board. In a Twitter post on Monday she said, “I am subjected to widespread bullying & a smear campaign by #Saudi media & its allies.” Karman referred to the 2018 killing of Jamal Khashoggi indicating fears that she could be the target of physical violence.

Tawakkol Karman @TawakkolKarman

“I am subjected to widespread bullying & a smear campaign by [Saudi] media & its allies. What is more important now is to be safe from the saw used to cut [Khashoggi]’s body into pieces. I am in my way to […] & I consider this as a report to the international public opinion.”

However, previous Saudi Twitter campaigns have been proven by social media analysts to be manufactured and unrepresentative of public opinion, with thousands of suspicious Twitter accounts churning out near-identical tweets in support of the Saudi government line. The Yemeni human rights organization SAM for Rights and Liberties condemned the campaign against Karman, saying in a statement that “personalities close to the rulers of Saudi Arabia and the Emirates, as well as newspapers and satellite channels financed by these two regimes had joined a campaign of hate, and this was not a normal manifestation of responsible expression of opinion“.

Tengku Emma – spokesperson for Rohingyas – attacked on line in Malaysia

April 28, 2020

In an open letter in the Malay Mail of 28 April 2020, over 50 civil society organisations (CSOs) and human rights activists expressed their shock at, and condemnation of, the mounting racist and xenophobic attacks in Malaysia against the Rohingya people, and especially the targeted cyber attacks against Tengku Emma Zuriana Tengku Azmi, the European Rohingya Council’s representative in Malaysia (https://www.theerc.eu/about/), and other concerned individuals, for expressing their opinion and support for the rights of the Rohingya people seeking refuge in Malaysia.

[On 21 April 2020, Tengku Emma had her letter regarding her concern over the pushback of the Rohingya boat to sea published in the media. Since then she has received mobbed attacks and intimidation online, especially on Facebook. The attacks particularly targeted her gender, with some including calls for rape. They were also intensely racist, directed both specifically at her and at the Rohingya. The following forms of violence have been documented thus far: 

● Doxxing – a gross violation by targeted research into her personal information and publishing it online, including her NRIC, phone number, car number plate, personal photographs, etc.; 

● Malicious distribution of a photograph of her son, a minor, and other personal information, often accompanied by aggressive, racist or sexist comments; 

● Threat of rape and other physical harm, and; 

● Distribution of fake and sexually explicit images. 

….One Facebook post that attacked her was shared more than 18,000 times since 23 April 2020. 

….We are deeply concerned and raise the question of whether there is indeed a concerted effort to spread the inhumane, xenophobic and widespread hate that seems to be proliferating in social media spaces on the issue of Rohingya seeking refuge in Malaysia, as a tool to divert attention from the current COVID-19 crisis response and mitigation.
When the attacks were reported to Facebook by Tengku Emma, no action was taken. Facebook responded by stating that the attacks did not amount to a breach of their Community Standards. With her information being circulated, accompanied by calls of aggression and violence, Tengku Emma was forced to deactivate her Facebook account. She subsequently lodged a police report in fear for her own safety and that of her family. 

There is, to date, no clear protection measures from either the police or Facebook regarding her reports. 

It is clear that despite direct threats to her safety and the cumulative nature of the attacks, current reporting mechanisms on Facebook are inadequate to respond, whether in timely or decisive ways, to limit harm. It is also unclear to what extent the police or the Malaysian Communications and Multimedia Commission (MCMC) are willing and able to respond to attacks such as this. 

It has been seven (7) days since Tengku Emma received her first attack, which has since ballooned outwards to tens of thousands. The only recourse she seems to have is deactivating her Facebook account, while the proponents of hatred and xenophobia continue to act unchallenged. This points to the systemic gaps in policy and laws in addressing xenophobia, online gender-based violence and hate speech, and even where legislation exists, implementation is far from sufficient. ]

Our demands: 

It must be stressed that the recent emergence and reiteration of xenophobic rhetoric and pushback against the Rohingya, including those already in Malaysia as well as those adrift at sea seeking asylum from Malaysia, is inhumane and against international norms and standards. The current COVID-19 pandemic is not an excuse for Malaysia to abrogate its duty as part of the international community. 

1.         The Malaysian government must, with immediate effect, engage with the United Nations, specifically the United Nations High Commissioner for Refugee (UNHCR), and civil society organisations to find a durable solution in support of the Rohingya seeking asylum in Malaysia on humanitarian grounds. 

2.         We also call on Malaysia to implement the Rabat Plan of Action on the prohibition of advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence, through a multistakeholder framework that promotes freedom of expression based on the principles of gender equality, non-discrimination and diversity.

3. Social media platforms, meanwhile, have the obligation to review and improve their existing standards and guidelines based on the lived realities of women and marginalised communities, who are often the target of online hate speech and violence, including understanding the cumulative impact of mobbed attacks and how attacks manifest in local contexts.

4. We must end all xenophobic and racist attacks and discrimination against Rohingya who seek asylum in Malaysia; and stop online harassment, bullying and intimidation against human rights defenders working on the Rohingya crisis.

For more posts on content moderation: https://humanrightsdefenders.blog/tag/content-moderation/

https://www.malaymail.com/news/what-you-think/2020/04/28/civil-society-orgs-stand-in-solidarity-with-women-human-rights-defender-ten/1861015

Algorithms designed to suppress ISIS content may also suppress evidence of human rights violations

April 11, 2020

Facebook and YouTube designed algorithms to suppress ISIS content. They're having unexpected side effects.

Illustration by Leo Acadia for TIME
TIME of 11 April 2020 carries a long article by Billy Perrigo entitled “These Tech Companies Managed to Eradicate ISIS Content. But They’re Also Erasing Crucial Evidence of War Crimes”. It is a very interesting piece that clearly spells out the dilemma of suppressing too much or too little on Facebook, YouTube, etc. Algorithms designed to suppress ISIS content are having unexpected side effects, such as suppressing evidence of human rights violations.
…..Images posted by citizen journalist Abo Liath Aljazarawy to his Facebook page (Eye on Alhasakah) showed the ground reality of the Syrian civil war. His page was banned. Facebook confirmed to TIME that Eye on Alhasakah was flagged in late 2019 by its algorithms, as well as by users, for sharing “extremist content”. It was then funneled to a human moderator, who decided to remove it. After being notified by TIME, Facebook restored the page in early February, some 12 weeks later, saying the moderator had made a mistake. (Facebook declined to say which specific videos were wrongly flagged, except that there were several.)

The algorithms were developed largely in reaction to ISIS, who shocked the world in 2014 when they began to share slickly-produced online videos of executions and battles as propaganda. Because of the very real way these videos radicalized viewers, the U.S.-led coalition in Iraq and Syria worked overtime to suppress them, and enlisted social networks to help. Quickly, the companies discovered that there was too much content for even a huge team of humans to deal with. (More than 500 hours of video are uploaded to YouTube every minute.) So, since 2017, the companies have been using algorithms to automatically detect extremist content. Early on, those algorithms were crude, and only supplemented the human moderators’ work. But now, following three years of training, they are responsible for an overwhelming proportion of detections. Facebook now says more than 98% of content removed for violating its rules on extremism is flagged automatically. On YouTube, across the board, more than 20 million videos were taken down before receiving a single view in 2019. And as the coronavirus spread across the globe in early 2020, Facebook, YouTube and Twitter announced their algorithms would take on an even larger share of content moderation, with human moderators barred from taking sensitive material home with them.

But algorithms are notoriously worse than humans at understanding one crucial thing: context. Now, as Facebook and YouTube have come to rely on them more and more, even innocent photos and videos, especially from war zones, are being swept up and removed. Such content can serve a vital purpose for both civilians on the ground — for whom it provides vital real-time information — and human rights monitors far away. In 2017, for the first time ever, the International Criminal Court in the Netherlands issued a war-crimes indictment based on videos from Libya posted on social media. And as violence-detection algorithms have developed, conflict monitors are noticing an unexpected side effect, too: these algorithms could be removing evidence of war crimes from the Internet before anyone even knows it exists.

…..
It was an example of how even one mistaken takedown can make the work of human rights defenders more difficult. Yet this is happening on a wider scale: of the 1.7 million YouTube videos preserved by Syrian Archive, a Berlin-based non-profit that downloads evidence of human rights violations, 16% have been removed. A huge chunk were taken down in 2017, just as YouTube began using algorithms to flag violent and extremist content. And useful content is still being removed on a regular basis. “We’re still seeing that this is a problem,” says Jeff Deutsch, the lead researcher at Syrian Archive. “We’re not saying that all this content has to remain public forever. But it’s important that this content is archived, so it’s accessible to researchers, to human rights groups, to academics, to lawyers, for use in some kind of legal accountability.” (YouTube says it is working with Syrian Archive to improve how they identify and preserve footage that could be useful for human rights groups.)

…..

Facebook and YouTube’s detection systems work by using a technology called machine learning, by which colossal amounts of data (in this case, extremist images, videos, and their metadata) are fed to an artificial intelligence adept at spotting patterns. Early types of machine learning could be trained to identify images containing a house, or a car, or a human face. But since 2017, Facebook and YouTube have been feeding these algorithms content that moderators have flagged as extremist — training them to automatically identify beheadings, propaganda videos and other unsavory content.
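As a rough illustration of the supervised learning described above, the sketch below fits a toy text classifier on a handful of moderator-labelled examples and then scores a new post. It is a hypothetical example using scikit-learn, not the companies’ actual systems, and the tiny hand-written dataset exists only to show the mechanics.

```python
# Toy sketch of supervised content classification: moderator-labelled examples
# are converted to word features and used to fit a model that scores new posts.
# Hypothetical example; real systems use far richer signals and vastly more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "propaganda video glorifying an execution",          # 1 = flagged extremist
    "recruitment message praising a terrorist attack",   # 1
    "family picnic photos from the weekend",              # 0 = benign
    "local football highlights and commentary",           # 0
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The model only sees word patterns, not intent or context.
new_post = ["video showing an execution carried out by soldiers"]
print(model.predict_proba(new_post)[0][1])  # estimated probability of "extremist"
```

The last line hints at the context problem discussed below: word-level patterns alone cannot tell a citizen journalist’s documentation of an atrocity apart from the propaganda that celebrates it.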

Both Facebook and YouTube are notoriously secretive about what kind of content they’re using to train the algorithms responsible for much of this deletion. That means there’s no way for outside observers to know whether innocent content — like Eye on Alhasakah’s — has already been fed in as training data, which would compromise the algorithm’s decision-making. In the case of Eye on Alhasakah’s takedown, “Facebook said, ‘oops, we made a mistake,’” says Dia Kayyali, the Tech and Advocacy coordinator at Witness, a human rights group focused on helping people record digital evidence of abuses. “But what if they had used the page as training data? Then that mistake has been exponentially spread throughout their system, because it’s going to train the algorithm more, and then more of that similar content that was mistakenly taken down is going to get taken down. I think that is exactly what’s happening now.” Facebook and YouTube, however, both deny this is possible. Facebook says it regularly retrains its algorithms to avoid this happening. In a statement, YouTube said: “decisions made by human reviewers help to improve the accuracy of our automated flagging systems.”
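Kayyali’s worry can be restated as a simple feedback loop: if yesterday’s takedowns, including mistaken ones, are appended to the training data as “extremist” examples, the next model is reinforced in the same error. The sketch below is only a conceptual illustration of that loop under those stated assumptions; the data and stand-in functions are invented, and it does not describe how Facebook or YouTube actually retrain their systems.

```python
# Conceptual sketch of the feedback-loop concern: each round, whatever the
# current model removes is fed back as an "extremist" training example, so a
# single early false positive keeps teaching the next model the same mistake.
# Everything here (data, functions) is hypothetical.

def retrain(training_set):
    """Stand-in for model training: remember which texts were labelled extremist."""
    return {text for text, label in training_set if label == 1}

def moderate(model, posts):
    """Stand-in for automated flagging: remove posts the model considers extremist."""
    return [p for p in posts if p in model]

training_set = [
    ("beheading propaganda clip", 1),
    ("citizen journalist documents shelling of a hospital", 1),  # mistaken label
    ("weekend football highlights", 0),
]
stream = ["citizen journalist documents shelling of a hospital",
          "weekend football highlights"]

for round_number in range(3):
    model = retrain(training_set)
    removed = moderate(model, stream)
    training_set += [(p, 1) for p in removed]  # takedowns become new training data
    print(round_number, removed)  # the mistaken takedown recurs every round
```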

…….
That’s because Facebook’s policies allow some types of violence and extremism but not others — meaning decisions on whether to take content down are often based on cultural context. Has a video of an execution been shared by its perpetrators to spread fear? Or by a citizen journalist to ensure the wider world sees a grave human rights violation? A moderator’s answer to those questions could mean that of two identical videos, one remains online and the other is taken down. “This technology can’t yet effectively handle everything that is against our rules,” Saltman said. “Many of the decisions we have to make are complex and involve decisions around intent and cultural nuance which still require human eye and judgement.”

In this balancing act, it’s Facebook’s army of human moderators — many of them outsourced contractors — who carry the pole. And sometimes, they lose their footing. After several of Eye on Alhasakah’s posts were flagged by algorithms and humans alike, a Facebook moderator wrongly decided the page should be banned entirely for sharing violent videos in order to praise them — a violation of Facebook’s rules on violence and extremism, which state that some content can remain online if it is newsworthy, but not if it encourages violence or valorizes terrorism. The nuance, Facebook representatives told TIME, is important for balancing freedom of speech with a safe environment for its users — and keeping Facebook on the right side of government regulations.

Facebook’s set of rules on the topic reads like a gory textbook on ethics: beheadings, decomposed bodies, throat-slitting and cannibalism are all classed as too graphic, and thus never allowed; neither is dismemberment — unless it’s being performed in a medical setting; nor burning people, unless they are practicing self-immolation as an act of political speech, which is protected. Moderators are given discretion, however, if violent content is clearly being shared to spread awareness of human rights abuses. “In these cases, depending on how graphic the content is, we may allow it, but we place a warning screen in front of the content and limit the visibility to people aged 18 or over,” said Saltman. “We know not everyone will agree with these policies and we respect that.”

But civilian journalists operating in the heat of a civil war don’t always have time to read the fine print. And conflict monitors say it’s not enough for Facebook and YouTube to make all the decisions themselves. “Like it or not, people are using these social media platforms as a place of permanent record,” says Woods. “The social media sites don’t get to choose what’s of value and importance.”

See also: https://humanrightsdefenders.blog/2019/06/17/social-media-councils-an-answer-to-problems-of-content-moderation-and-distribution/

https://time.com/5798001/facebook-youtube-algorithms-extremism/

Policy response from Human Rights NGOs to COVID-19: Witness

April 5, 2020

In the midst of the COVID-19 crisis, many human rights organisations have been formulating a policy response. While I cannot be complete or undertake comparisons, I will try and give some examples in the course of the coming weeks. Here is the one by Sam Gregory of WITNESS.

…..The immediate implications of coronavirus – quarantine, enhanced emergency powers, restrictions on sharing information –  make it harder for individuals all around the world to document and share the realities of government repression and private actors’ violations.  In states of emergency, authoritarian governments in particular can operate with further impunity, cracking down on free speech and turning to increasingly repressive measures. The threat of coronavirus and its justifying power provides cover for rights-violating laws and measures that history tells us may long outlive the actual pandemic. And the attention on coronavirus distracts focus from rights issues that are both compounded by the impact of the virus and cannot claim the spotlight now.

At WITNESS we are adapting and responding, led by what we learn and hear from the communities of activism, human rights and civic journalism with which we collaborate closely across the world. We will continue to ensure that our guidance on direct documentation helps people document the truth even under trying circumstances and widespread misinformation. We will draw on our experience curating voices and information from closed situations to make sense in confusion. We will provide secure online training while options for physical meeting are curtailed. We will provide meaningful localized guidance on how to document and verify amid an information pandemic; and we will ensure that long-standing struggles are not neglected now when they need it most.

In this crisis moment, it is critical that we enhance the abilities and defend the rights of people who document and share critical realities from the ground. Across the three core thematic issues we currently work on, the need is critical. For issues such as video as evidence from conflict zones, these wars continue on and reach their apex even as coronavirus takes all the attention away. We need only look to the current situation in Idlib, Yemen or in other states of conflict in the Middle East.

For other issues, like state violence against minorities, many people already live in a state of emergency.

Coronavirus response in Complexo do Alemão favela, Rio de Janeiro (credit: Raull Santiago)

Favela residents in Brazil have lived with vastly elevated levels of police killings of civilians for years, and now face a parallel health emergency. Meanwhile immigrant communities in the US have lived in fear of ICE for years and must now weigh their physical health against their physical safety and family integrity. Many communities – in Kashmir and in Rakhine State, Burma – live without access to the internet on an ongoing basis and must still try and share what is happening. And for those who fight for their land rights and environmental justice, coronavirus is both a threat to vulnerable indigenous and poor communities lacking health care, sanitation and state support as well as a powerful distraction from their battle against structural injustice.

A critical part of WITNESS’ strategy is our work to ensure that technology companies’ actions and government regulation of technology are accountable to the most vulnerable members of our global society – marginalized populations globally, particularly those outside the US and Europe, as well as human rights defenders and civic journalists. As responses to coronavirus kick in, there are critical implications in how both civic technology and commercial technology are now being deployed and will be deployed.

Already, coronavirus has acted as an accelerant – like fuel on the fire – to existing trends in technology. Some of these have potentially profound negative impacts for human rights values, human rights documentation and human rights defenders; others may hold a silver lining.

My colleague Dia Kayyali has already written about the sudden shift to much broader algorithmic content moderation that took place last week as Facebook, Twitter, Google and YouTube sent home their human moderators. Over the past years, we’ve seen the implications of both a move to algorithmic moderation and a lack of will and resourcing: from hate speech staying on platforms in vulnerable societies, to the removal of critical war crimes evidence at scale from YouTube, to a lack of accountability for decisions made under the guise of countering terrorist and violent extremist content. But in civil society we did not anticipate that such a shift to broader algorithmic control would happen so rapidly in such a short period of time. We must closely monitor and push for this change not to adversely affect societies and critical struggles worldwide in a moment when they are already threatened by isolation and increased government repression. As Dia suggests, now is the moment for these companies to finally make their algorithms and content moderation processes more transparent to critical civil society experts, as well as to reset how they support and treat the human beings who do the dirty work of moderation.

WITNESS’s work on misinformation and disinformation spans a decade of supporting the production of truthful, trustworthy content in war zones, crises and long-standing struggles for rights. Most recently we have focused on the emerging threats from deepfakes and other forms of synthetic media that enable increasingly realistic fakery of what looks like a real person saying or doing something they never did.

We've led the first global expert meetings in Brazil, Southern Africa and Southeast Asia on what rights-respecting, global responses should look like in terms of understanding threats and solutions. Feedback from these sessions has stressed the need for attention to a continuum of audiovisual misinformation, including 'shallowfakes' – the simpler forms of miscontextualized and lightly edited videos that dominate attempts to confuse and deceive. Right now, social media platforms are unleashing a series of responses to misinformation around coronavirus: highlighting authoritative health information from national and international sources, curating resources, offering help centers, and taking down a wider range of content that misinforms, deceives or price-gouges – including content from leading politicians such as President Bolsonaro in Brazil. The question we must ask is what we want internet companies to continue doing after the crisis: what should they do about the wider range of misinformation and disinformation beyond health – and what do we not want them to do? We'll be sharing more about this in the coming weeks.

And where can we find a technological silver lining? One area may be the potential to discover and explore new ways to act in solidarity and agency with each other online. A long-standing area of work at WITNESS is how to use 'co-presence' and livestreaming to bridge social distances and help people witness and support one another when physical proximity is not possible.

Our Mobil-Eyes Us project supported favela-based activists to use live video to better engage their audiences to be with them and provide meaningful support. In parts of the world that benefit from broadband internet access, the absence of arbitrary shutdowns, and the ability to physically isolate, we are seeing an explosion of experimentation in how to operate better in a world that is physically distanced yet still socially proximate. We should learn from this and drive experimentation and action to ensure that, even as our freedom of assembly in physical space is curtailed for legitimate (and illegitimate) reasons, our ability to assemble online in meaningful action is not curtailed but enhanced.

In moments of crisis, good and bad actors alike will try to push the agendas they want. In this moment of acceleration and crisis, WITNESS is committed to an agenda firmly grounded in, and led by, a human rights vision and the wants and needs of vulnerable communities and human rights defenders worldwide.

Coronavirus and human rights: Preparing WITNESS’s response

 

How social media companies can identify and respond to threats against human rights defenders

October 15, 2019

Global computer threats (image from Shutterstock).

Ginna Anderson writes in the ABA Abroad of 3

..Unfortunately, social media platforms are now a primary tool for coordinated, state-aligned actors to harass, threaten and undermine advocates. Although public shaming, death threats, defamation and disinformation are not unique to the online sphere, the nature of the internet has given them unprecedented potency. Bad actors are able to rapidly deploy their poisoned content on a vast scale. Social media companies have only just begun to recognize, let alone respond to, the problem. Meanwhile, individuals targeted through such coordinated campaigns must painstakingly flag individual pieces of content, navigate opaque corporate structures and attempt to survive the fallout. To address this crisis, companies such as Facebook, Twitter and YouTube must dramatically increase their capacity and will to engage in transparent, context-driven content moderation.

For human rights defenders, the need is urgent. .. Since 2011, the ABA Center for Human Rights (CHR) has ..noted with concern the coordination of “traditional” judicial harassment of defenders by governments, such as frivolous criminal charges or arbitrary detention, with online campaigns of intimidation. State-aligned online disinformation campaigns against individual defenders often precede or coincide with official investigations and criminal charges.

……

While social media companies generally prohibit incitement of violence and hate speech on their platforms, CHR has had to engage in additional advocacy with social media companies requesting the removal of specific pieces of content or accounts that target defenders. This extra advocacy has been required even where the content clearly violates a social media company’s terms of service and despite initial flagging by a defender. The situation is even more difficult where the threatening content is only recognizable with sufficient local and political context. The various platforms all rely on artificial intelligence, to varying degrees, to identify speech that violates their respective community standards. Yet current iterations of artificial intelligence are often unable to adequately evaluate context and intent.

Online intimidation and smear campaigns against defenders often rely on existing societal fault lines to demean and discredit advocates. In Guatemala, CHR recently documented a coordinated social media campaign to defame, harass, intimidate and incite violence against human rights defenders. Several of these campaigns were linked with so-called "net centers," where users were reportedly paid to amplify hateful content across platforms. Often, the campaigns relied on "coded" language that harks back to Guatemala's civil war and the genocide of Mayan communities, calling indigenous leaders communists, terrorists and guerrillas.

These terms appear to have largely escaped social media company scrutiny, perhaps because none is a racist slur per se. And yet the proliferation of these online attacks, and the status of those putting out the content, are contributing to a worsening climate of violence against defenders, and of impunity for that violence, by specifically alluding to terms once used to justify violence against indigenous communities. NPR reports that in 2018 alone, 26 indigenous defenders were murdered in Guatemala. In such a climate, the fear and intimidation felt by those targeted in such campaigns is not hyperbolic but grounded in an understanding of how violence can be sparked in Guatemala.

In order to address such attacks, social media companies must adopt policies that allow them to designate defenders as temporarily protected groups in countries that are characterized by state-coordinated or state-condoned persecution of activists. This is in line with international law that prohibits states from targeting individuals for serious harm based on their political opinion. To increase their ability to recognize and respond to persecution and online violence against human rights defenders, companies must continue to invest in their context-driven content moderation capacity, including complementing algorithmic monitoring with human content moderators well-versed in local dialects and historical and political context.

Context-driven content moderation should also take into account factors that increase the risk that online behavior will contribute to offline violence by identifying high-risk countries. These factors include a history of intergroup conflict and an overall increase in the number of instances of intergroup violence in the past 12 months; a major national political election in the next 12 months; and significant polarization of political parties along religious, ethnic or racial lines. Countries where these and other risk factors are present call for proactive approaches to identify problematic accounts and coded threats against defenders and marginalized communities, such as those shown in Equality Labs’ “Facebook India” report.
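As a purely illustrative sketch – not a methodology published by the ABA or any platform – the risk factors listed above could be operationalized as a simple checklist that flags countries for proactive, context-driven review. All class names, field names and the threshold below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CountryRiskProfile:
    # Hypothetical encoding of the risk factors named in the article.
    history_of_intergroup_conflict: bool
    intergroup_violence_rising_past_12_months: bool
    national_election_within_12_months: bool
    polarization_along_religious_ethnic_or_racial_lines: bool

def needs_proactive_review(profile: CountryRiskProfile, threshold: int = 2) -> bool:
    """Flag a country for proactive moderation review when enough of the
    listed risk factors are present (the threshold here is arbitrary)."""
    score = sum([
        profile.history_of_intergroup_conflict,
        profile.intergroup_violence_rising_past_12_months,
        profile.national_election_within_12_months,
        profile.polarization_along_religious_ethnic_or_racial_lines,
    ])
    return score >= threshold

# Example: a country with a history of conflict, an upcoming election
# and sharp identity-based polarization would be flagged for review.
example = CountryRiskProfile(
    history_of_intergroup_conflict=True,
    intergroup_violence_rising_past_12_months=False,
    national_election_within_12_months=True,
    polarization_along_religious_ethnic_or_racial_lines=True,
)
print(needs_proactive_review(example))  # True
```

In practice, as the article argues, such automated flagging would only be a trigger for human, locally informed review, not a substitute for it.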

Companies should identify, monitor and be prepared to deplatform key accounts that are consistently putting out denigrating language and targeting human rights defenders. This must go hand in hand with the greater efforts that companies are finally beginning to take to identify coordinated, state-aligned misinformation campaigns. Focusing on the networks of users who abuse the platform, instead of looking solely at how the online abuse affects defenders’ rights online, will also enable companies to more quickly evaluate whether the status of the speaker increases the likelihood that others will take up any implicit call to violence or will be unduly influenced by disinformation.

This abuser-focused approach will also help decrease the burden on defenders to find and flag individual pieces of content and accounts as problematic. Many of the human rights defenders with whom CHR works are giving up on flagging, a phenomenon we refer to as flagging fatigue. Many have become fatalistic about the level of online harassment they face. This is particularly alarming, as advocates targeted online may develop such thick skins that they are no longer able to assess when their actual risk of physical violence has increased.

Finally, it is vital that social media companies pursue, and civil society demand, transparency in content moderation policy and decision-making, in line with the Santa Clara Principles. Put forward in 2018 by a group of academic experts, organizations and advocates committed to freedom of expression online, the principles are meant to guide companies engaged in content moderation and ensure that the enforcement of their policies is “fair, unbiased, proportional and respectful of users’ rights.” In particular, the principles call upon companies to publicly report on the number of posts and accounts taken down or suspended on a regular basis, as well as to provide adequate notice and meaningful appeal to affected users.

CHR routinely supports human rights defenders facing frivolous criminal charges related to their human rights advocacy online or whose accounts and documentation have been taken down absent any clear justification. This contributes to a growing distrust of the companies among the human rights community as apparently arbitrary decisions about content moderation are leaving advocates both over- and under-protected online.

As the U.N. special rapporteur on freedom of expression explained in his 2018 report, content moderation processes must include the ability to appeal the removal, or refusal to remove, content or accounts. Lack of transparency heightens the risk that calls to address the persecution of human rights defenders online will be subverted into justifications for censorship and restrictions on speech that is protected under international human rights law.

A common response when discussing the feasibility of context-driven content moderation is to compare it to reviewing all the grains of sand on a beach. But human rights defenders are not asking for the impossible. We are merely pointing out that some of that sand is radioactive—it glows in the dark, it is lethal, and there is a moral and legal obligation upon those that profit from the beach to deal with it.

Ginna Anderson, senior counsel, joined ABA CHR in 2012. She is responsible for supporting the center's work to advance the rights of human rights defenders and marginalized communities, including lawyers and journalists at risk. She is an expert in health and human rights, media freedom, freedom of expression and fair trial rights. As deputy director of the Justice Defenders Program since 2013, she has managed strategic litigation, fact-finding missions and advocacy campaigns on behalf of human rights defenders facing retaliation for their work in every region of the world.

http://www.abajournal.com/news/article/how-can-social-media-companies-identify-and-respond-to-threats-against-human-rights-defenders