Posts Tagged ‘on-line harassment’

New study shows that on-line attacks against women human rights defenders doubled

December 16, 2025

On 15 December 2025 Emma Woollacott, in Forbes, referred to a new study that shows that 7 in 10 women human rights defenders, activists and journalists have experienced online violence in the course of their work. Produced through UN Women’s ACT to End Violence against Women programme and supported by the European Commission, “Tipping point: The chilling escalation of violence against women in the public sphere” draws on a global survey of women from 119 countries.

Along with online threats and harassment, more than 4 in 10 have experienced offline harm linked to online abuse — more than twice as many as in 2020, the researchers found. This can range from verbal harassment right up to physical assault, stalking and swatting.

“These figures confirm that digital violence is not virtual — it’s real violence with real-world consequences,” said Sarah Hendricks, director of policy, programme and intergovernmental division at UN Women.

“Women who speak up for our human rights, report the news or lead social movements are being targeted with abuse designed to shame, silence and push them out of public debate. Increasingly, those attacks do not stop at the screen — they end at women’s front doors. We cannot allow online spaces to become platforms for intimidation that silence women and undermine democracy.”

And AI is only making things worse, with almost 1 in 4 women human rights defenders, activists and journalists having experienced AI-assisted online violence, such as deepfake imagery and manipulated content. This is most often the case for writers and public communicators who focus on human rights issues, such as social media content creators and influencers, for whom the figure reaches 30%.

“Gender-based online violence is not a new phenomenon, but its scale certainly is,” said report co-author Lea Hellmueller, associate professor in journalism at City St George’s and associate dean for research in the School of Communication and Creativity.

“AI tools enable the production of cheaper and faster abusive content, which is detrimental to women in public life — and beyond,” Hellmueller added.

Tech firms are partly responsible, the researchers said, with the report calling for better tools to identify, monitor, report and fend off AI-assisted online violence. The researchers also want to see more legal and regulatory mechanisms to force tech firms to prevent their technologies being deployed against women in the public sphere.

“Our next steps include publishing data from the survey about the opportunities for, and barriers to, law enforcement and legal redress for survivors of online violence,” said Julie Posetti, chair of the Centre for Journalism and Democracy at City St George’s, University of London, one of the authors of the report. “We will also focus on creative efforts to counter gender-based online violence and policy recommendations to help hold the Big Tech facilitators of this dangerous phenomenon accountable.”

https://www.forbes.com/sites/emmawoollacott/2025/12/15/online-attacks-against-women-human-rights-workers-double-in-five-years/

https://www.globalissues.org/news/2025/12/15/41907

https://theconversation.com/ai-tools-are-being-used-to-subject-women-in-public-life-to-online-violence-271703

META Oversight Board overturns decision re Human Rights Defender in Peru

June 24, 2025

On 27 May 2025, the Oversight Board overturned Meta’s decision to leave up content targeting one of Peru’s leading human rights defenders:

Summary

The Oversight Board overturns Meta’s decision to leave up content targeting one of Peru’s leading human rights defenders. Restrictions on fundamental freedoms, such as the rights to assembly and association, are increasing in Peru, with non-governmental organizations (NGOs) among those impacted. The post, shared by a member of La Resistencia, contains an image of the defender that has been altered, likely with AI, to show blood dripping down her face. This group targets journalists, NGOs, human rights activists and institutions in Peru with disinformation, intimidation and violence. Taken in its full context, the post qualifies as a “veiled threat” under the Violence and Incitement policy. As this case reveals potential underenforcement of veiled or coded threats on Meta’s platforms, the Board makes two related recommendations.

……

The Oversight Board’s Decision

The Oversight Board overturns Meta’s decision to leave up the content. The Board also recommends that Meta:

  • Clarify that “coded statements where the method of violence is not clearly articulated” are prohibited in written, visual and verbal form, under the Violence and Incitement Community Standard.
  • Produce an annual accuracy assessment on potential veiled threats, including a specific focus on content containing threats against human rights defenders that incorrectly remains up on the platform and instances of political speech incorrectly being taken down.


Amnesty finds that young human rights defenders face online harassment for posting on human rights

July 3, 2024
Amnesty International
an illustration with a young person speaking into a megaphone. Around them are images of fists coming out of phone screens.

On 1 July 2024 Amnesty International published the findings of a survey showing that three out of five child and young human rights defenders face online harassment in connection with their activism, according to a new analysis of 400 responses to an Amnesty International questionnaire distributed to young activists across 59 countries. More than 1,400 young activists participated in the survey, conducted as part of Amnesty International’s global campaign to “Protect the Protest.”

Of those, 400 youth activists aged between 13 and 24 agreed to the publication of their data.

They faced harassment in the form of hateful comments, threats, hacking and doxing, often linked to offline abuse and political persecution and frequently perpetrated by state actors. With little or no response from Big Tech platforms, the result is the silencing of young people.

The highest rates of online harassment were reported by young activists in Nigeria and Argentina.

“I have been harassed […] by a stranger because of my pronouns. The stranger told me it is not possible to be a ‘they/them’ and kept sending messages about how I am crazy for identifying the way I identify. I had to ignore the person’s messages,” said a 17-year-old Nigerian queer LGBTI activist who asked not to be identified.

Another young activist, a 21-year-old male Nigerian LGBTI rights activist, said: “People disagree with my liberal progressive views, and immediately check my profile to see that I am a queer Nigerian living in Nigeria, and they come at me with so much vitriol. I am usually scared to share my opinion on apps like TikTok because I can go viral. The internet can be a very scary place,” adding that, “Someone catfishing as a gay man lured me into coming out to see him after befriending me for a while, and then he attacked me with his friends. This is Nigeria, I couldn’t go to the police for fear of secondary victimization.”

Twenty-one percent of respondents say they are trolled or threatened on a weekly basis and close to a third of the young activists say that they have censored themselves in response to tech-facilitated violence, with a further 14 percent saying they have stopped posting about human rights and their activism altogether.

“I always think twice before making a comment. When I express my political position, I start to get many comments that not only have to do with my position, but also with my body, my gender identity or my sexuality,” said Sofía*, a 23-year-old human rights defender from Argentina, sharing her experience on X, formerly known as Twitter.

The survey respondents said they faced the most abuse on Facebook, with 87 percent of the platform’s users reporting experiences of harassment, compared to 52 percent on X and 51 percent on Instagram.

The most common forms of online harassment are upsetting and disrespectful “troll” comments (60 percent) and upsetting or threatening direct messages (52 percent).

Five percent of the young activists say they have faced online sexual harassment, too, reporting that users posted intimate images (including real and AI-generated images) of them without consent.

For many of the survey participants harassment in relation to their online activism is not limited to the digital world either. Almost a third of respondents reported facing offline forms of harassment, from family members and people in their personal lives to negative repercussions in school, police questioning and political persecution.

Twenty-year-old non-binary activist Aree* from Thailand shared their experience of facing politically motivated prosecution in five different cases whilst they were still a child.

Abdul* a 23-year-old Afghan activist reported being denied work at a hospital after authorities found out about his social media activism.

The Israel-Gaza war currently stands out as an issue attracting high levels of abusive online behaviour, but the threat of online harassment appears to be omnipresent across all leading human rights issues. Peace and security, the rule of law, economic and gender equality, social and racial justice, and environmental protection all served as “trigger topics” for the attacks.

However, the way young activists are targeted varies and appears to be closely linked to intersectional experiences of discrimination, likely harming survivors of identity-based abuse in longer-lasting ways than issue-based harassment.

Twenty-one percent of respondents say they have been harassed in connection with their gender and twenty percent in connection with their race or ethnicity. Smaller percentages said they face abuse in connection with their socio-economic background, age, sexual orientation and/or disabilities.

“At first it was simply hateful comments since the posts I published were daring and spoke openly about LGBT rights, which later made me receive threats in private messages and it went further when my account was hacked,” said Paul a 24-year-old activist from Cameroon, on being targeted for his LGBTI related activism adding that, “For 2 years, I have been living in total insecurity because of the work I do as an advocate for the rights of my community online.”

For Paul and many other young activists, online harassment is having deep effects on their mental health. Forty percent of the respondents say they have felt a sense of powerlessness and nervousness or are afraid of using social media. Some respondents have even felt unable to perform everyday tasks and felt physically unsafe. Accordingly, psychological support is the most popular form of support young activists call for, ahead of easier-to-use reporting mechanisms and legal support.

Many of the young activists voiced frustrations over leading social media platforms’ failure to adequately respond to their reports of harassment, saying the abusive comments are left on the platforms long after being flagged.

Some respondents also felt that social media platforms are playing an active part in silencing them; multiple activists reported that they found posts about the war in Gaza removed, echoing previous reports of content advocating for Palestinian rights being subject to potentially discriminatory moderation by various platforms.

Others highlighted platforms’ role in enabling state-led intimidation and censorship campaigns, undermining activists’ hope for government regulation to provide answers to the challenge of tech-facilitated violence.

Amnesty International has previously documented the repression of peaceful online speech by states including India, the Philippines and Vietnam and is currently calling for global solidarity actions in support of women and LGBTI activists facing state-backed online violence in Thailand.

*The young activists’ names have been changed to protect their identities.

Q&A: Transnational Repression

June 14, 2024

On 12 June 2024, Human Rights Watch published a useful, short “questions-and-answers” document which outlines key questions on the global trend of transnational repression. 

Illustration of a map being used to bind someone's mouth
© 2024 Brian Stauffer for Human Rights Watch
  1. What is transnational repression?
  2. What tactics are used?
  3. Is transnational repression a new phenomenon?  
  4. Where is transnational repression happening? 
  5. Do only “repressive” states commit transnational repression?
  6. Are steps being taken to recognize and address transnational repression? 
  7. What should be done? 

What is transnational repression?

The term “transnational repression” is increasingly used to refer to state actors reaching beyond their borders to suppress or stifle dissent by targeting human rights defenders, journalists, government critics and opposition activists, academics and others, in violation of their human rights. Particularly vulnerable are nationals or former nationals, members of diaspora communities and those living in exile. Many are asylum seekers or refugees in their place of exile, while others may be at risk of extradition or forced return. Back home, a person’s family members and friends may also be targeted, by way of retribution and with the aim of silencing a relative in exile or forcing their return.

Transnational repression can have far-reaching consequences, including a chilling effect on the rights to freedom of expression and association. While there is no formal legal definition, the framing of transnational repression, which encompasses a wide range of rights abuses, allows us to better understand it and propose victim-centered responses.

What tactics are used?

Documented tactics of transnational repression include killings, abductions, enforced disappearances, unlawful removals, online harassment, the use of digital surveillance including spyware, targeting of relatives, and the abuse of consular services.  Interpol’s Red Notice system has also been used as a tool of transnational repression, to facilitate unlawful extraditions. Interpol has made advances in improving its vetting systems, yet governments continue to abuse the Red Notice system by publishing unlawful notices seeking citizens who have fled abroad on spurious charges. This leaves targets vulnerable to arrest and return to their country of origin to be mistreated, even after they have fled to seek safety abroad.

Is transnational repression a new phenomenon?

No, the practice of governments violating human rights beyond their borders is not new. Civil society organizations have been documenting such abuses for decades. What is new, however, is the growing recognition of transnational repression as more than a collection of grave incidents, but also as an increasing phenomenon of global concern, requiring global responses. What is also new is the increasing access to and use of sophisticated technology to harass, threaten, surveil and track people no matter where they are. This makes the reach of transnational repression even more pervasive. 

Where is transnational repression happening? 

Transnational repression is a global phenomenon. Cases have been documented in countries and regions around the world. The use of technology such as spyware increases the reach of transnational repression, essentially turning an infected device, such as a mobile phone, into a portable surveillance tool, allowing targeted individuals to be spied on and tracked around the world. 

Do only “repressive” states commit transnational repression?

While many authoritarian states resort to repressive tactics beyond their own borders, any government that seeks to silence dissent by targeting critics abroad is committing transnational repression. Democratic governments have also contributed to cases of transnational repression, for example through the provision of spyware, collaborating with repressive governments to deny visas or facilitate returns, or relying upon flawed Interpol Red Notices that expose targeted individuals to risk.

Are steps being taken to recognize and address transnational repression? 

Increasingly, human rights organizations, UN experts and states are documenting and taking steps to address transnational repression.

For example, Freedom House has published several reports on transnational repression and maintains an online resource documenting incidents globally. Human Rights Watch has published reports, including one outlining cases of transnational repression globally and another focusing on Southeast Asia. Amnesty International has published a report on transnational repression in Europe. Many other nongovernmental organizations are increasingly producing research and reports on the issue. In her report on journalists in exile, the UN Special Rapporteur on freedom of expression dedicated a chapter to transnational repression. The UN High Commissioner for Human Rights used the term in a June 2024 statement.

Certain governments are increasingly aware of the harms posed by transnational repression. Some are passing legislation to address the problem, while others are signing joint statements or raising transnational repression in international forums. However, government responses are often piecemeal, and a more cohesive and coordinated approach is needed. 

What should be done? 

Governments should speak out and condemn all cases of transnational repression, including by their friends and allies. They should take tangible steps to address transnational repression, including by adopting rights-respecting legal frameworks and policies. Governments should put victims at the forefront of their response to these forms of repression. They should be particularly mindful of the risks and fears experienced by refugee and asylum communities. They should investigate and appropriately prosecute those responsible. Interpol should continue to improve its vetting processes by subjecting governments with poor human rights records to more scrutiny when they submit Red Notices. Interpol should also be transparent about which governments continually abuse the Red Notice system, and limit their access to the database.

At the international level, more can be done to integrate transnational repression within existing human rights reporting, and to mandate dedicated reporting on cases of transnational repression, trends, and steps needed to address it.

see also: https://humanrightsdefenders.blog/2024/03/19/transnational-repression-human-rights-watch-and-other-reports/

https://www.hrw.org/news/2024/06/12/qa-transnational-repression

U.S. State Department and the EU release an approach for protecting human rights defenders from online attacks

March 13, 2024

On 12 March 2024 the U.S. and the European Union issued new joint guidance for online platforms to help mitigate virtual attacks targeting human rights defenders, reports Alexandra Kelley, staff correspondent at Nextgov/FCW.

Outlined in 10 steps, the guidance was formed following stakeholder consultations from January 2023 to February 2024. Stakeholders including nongovernmental organizations, trade unionists, journalists, lawyers, and environmental and land activists advised both governments on how to protect human rights defenders on the internet.

Recommendations within the guidance include: committing to an HRD [human rights defender] protection policy; identifying risks to HRDs; sharing information with peers and select stakeholders; creating policies to monitor performance against baseline metrics; resourcing staff adequately; building capacity to address local risks; offering education on safety tools; creating an incident reporting channel; providing access to help for HRDs; and incorporating a strong, transparent infrastructure.

Digital threats HRDs face include targeted Internet shutdowns, censorship, malicious cyber activity, unlawful surveillance, and doxxing. Given the severity and reported increase of digital attacks against HRDs, the guidance calls upon online platforms to take mitigating measures.

“The United States and the European Union encourage online platforms to use these recommendations to determine and implement concrete steps to identify and mitigate risks to HRDs on or through their services or products,” the guidance reads.

The ten guiding points laid out in the document reflect existing transatlantic policy commitments, including the Declaration for the Future of the Internet. Like other digital guidance, however, these actions are voluntary. 

“These recommendations may be followed by further actions taken by the United States or the European Union to promote rights-respecting approaches by online platforms to address the needs of HRDs,” the document said.

https://www.nextgov.com/digital-government/2024/03/us-eu-recommend-protections-human-rights-defenders-online/394865

Climate defense suffers from on-line abuse of Environmental Defenders

October 30, 2020

Deutsche Welle carries a long but interesting piece on “What impact is hate speech having on climate activism around the world?”

From the Philippines to Brazil and Germany, environmental activists are reporting a rise in online abuse. What might seem like empty threats and insults can silence debate and lead to violence.

Hate speech online

Renee Karunungan, an environmental campaigner from the Philippines, says being an activist leaves you “exposed” and an easy target for online hate. And she would know. “I’ve had a lot of comments about my body and face,” she says, “things like ‘you’re so fat’ or ‘ugly’. But also, things like ‘I will rape you’.” Such threats were one reason she decided to leave the country.

There isn’t much data on online abuse against environmentalists. But Karunungan is one of many saying it’s on the rise.  

As it becomes woven into the fabric of digital life, we sometimes forget the impact a single comment can have, Karunungan says: “The trauma that an activist feels – it is not just ‘online’, it is real. It can get you into a very dark place.”  

Platforms like TikTok and Facebook have begun responding to calls for stricter regulation. [see also: https://humanrightsdefenders.blog/2020/06/03/more-on-facebook-and-twitter-and-content-moderation/]

“There is also a huge gray zone,” says Josephine Schmitt, researcher on hate speech at the Centre for Advanced Internet Studies, and definitions can be “very subjective.”

While no international legal definition exists, the UN describes hate speech as communication attacking people or a group “based on their religion, ethnicity, nationality, race, color, descent, gender or other identity factor.” 

According to several researchers and activists, environmental campaigning also serves as an identifying factor that attracts hate.  

“Environmental defenders are attacked because they serve as a projection surface for all kinds of group-based enmity,” says Lorenz Blumenthaler of the anti-racist Amadeu Antonio Foundation.

Blumenthaler says his foundation has seen an “immense increase” in hate speech against climate activists in Germany, particularly against those who are young and female. This year Luisa Neubauer, prominent organiser of Germany’s Fridays for Future movement, won a court case regarding hateful comments she received online. This came after far-right party Alternative für Deutschland’s criticisms of Greta Thunberg included likening her to a cult figure and mocking her autism.

In Bolsonaro’s Brazil, for example, Mary Menton, environmental justice research fellow at Sussex University, says there is often a fine line between hate speech and smear campaigns. She has seen an increase in the use of fake news and smear campaigns, on both social and traditional media, aiming to discredit the character of Indigenous leaders or make them look like criminals. Coming from high-level sources, as well as local lobbies and rural conglomerates, these attacks create an atmosphere of impunity for attacks against these Indigenous activists, Menton says, while for activists themselves, “it creates the sense there is a target on their backs.”

Some of it comes from international “climate trolls” calling climate change a hoax or the activists too young and uninformed. But the most frightening attacks come from closer to home. “Some people outrightly say we are terrorists and don’t deserve to live,” Mitzi says. In the Philippines, eco-activists are targets for “red-tagging”, where government and security forces brand critics as “terrorists” or “communists.”

Global Witness ranks the Philippines as the second most dangerous place in the world for environmental defenders, with 46 murders last year, and Mitzi believes there is a clear link between hate speech online and actual violence.

Online hate can delegitimize certain political views and be the first step in escalating intimidation. Mitzi says many environmental groups are frightened of having their offices raided by the police and have experienced being put under surveillance.

Ed O’Donovan, of the Irish-based human rights organization Front Line Defenders, says that in contrast to the anonymous targeting of human rights defenders by bots, attacks on climate activists “often originate with state-controlled media or government officials.”

And they can serve a very strategic purpose, dehumanizing activists so that there is less outrage when they are subject to criminal process, or even attacked and killed.  Extractive industries and businesses are also involved, he adds, highlighting how “very calculated” hate speech campaigns are used to divide local communities and gain consent for development projects.  

Indigenous people protesting against large-scale projects, like these activists against a mine in Peru, are particular targets for hate campaigns.

For those invested in suppressing climate activism, Wodtke says hate speech can be a low-cost, high-impact strategy. For environmental defenders, it diverts their “attention, resources and energy,” forcing them into a position of defence against attacks on their legitimacy. …

https://www.dw.com/en/what-impact-is-hate-speech-having-on-climate-activism-around-the-world/a-55420930

Tawakkol Karman on Facebook’s Oversight Board doesn’t please Saudis

May 13, 2020

Yemeni Nobel Peace Prize laureate Tawakkol Karman (AFP)

On 10 May 2020 AlBawaba reported that Facebook had appointed Yemeni Nobel Peace Prize laureate Tawakkol Karman as a member of its newly-launched Oversight Board, an independent committee which will have the final say in whether Facebook and Instagram should allow or remove specific content. [ see also: https://humanrightsdefenders.blog/2020/04/11/algorithms-designed-to-suppress-isis-content-may-also-suppress-evidence-of-human-rights-violations/]

Karman, a human rights activist, journalist and politician, won the Nobel Peace Prize in 2011 for her role in Yemen’s Arab Spring uprising. Her appointment to the Facebook body has led to a sharp reaction on Saudi social media. She said that she has been subjected to a campaign of online harassment by Saudi media ever since she was appointed to Facebook’s Oversight Board. In a Twitter post on Monday she said, “I am subjected to widespread bullying & a smear campaign by #Saudi media & its allies.” Karman referred to the 2018 killing of Jamal Khashoggi, indicating fears that she could be the target of physical violence.

Tawakkol Karman @TawakkolKarman

I am subjected to widespread bullying & a smear campaign by ’s media & its allies. What is more important now is to be safe from the saw used to cut ’s body into pieces. I am in my way to & I consider this as a report to the international public opinion.

However, previous Saudi Twitter campaigns have been shown by social media analysts to be manufactured and unrepresentative of public opinion, with thousands of suspicious Twitter accounts churning out near-identical tweets in support of the Saudi government line. The Yemeni human rights organization SAM for Rights and Liberties condemned the campaign against Karman, saying in a statement that “personalities close to the rulers of Saudi Arabia and the Emirates, as well as newspapers and satellite channels financed by these two regimes had joined a campaign of hate, and this was not a normal manifestation of responsible expression of opinion”.

Tengku Emma – spokesperson for Rohingyas – attacked online in Malaysia

April 28, 2020
In an open letter in the Malay Mail of 28 April 2020, over 50 civil society organisations (CSOs) and human rights activists expressed their shock and condemnation at the mounting racist and xenophobic attacks in Malaysia against the Rohingya people, and especially the targeted cyber attacks against Tengku Emma Zuriana Tengku Azmi, the representative of the European Rohingya Council (https://www.theerc.eu/about/) in Malaysia, and other concerned individuals, for expressing their opinion and support for the rights of the Rohingya people seeking refuge in Malaysia.

[On 21 April 2020, Tengku Emma had her letter regarding her concern over the pushback of the Rohingya boat to sea published in the media. Since then she has received mobbed attacks and intimidation online, especially on Facebook. The attacks particularly targeted her gender, with some including calls for rape. They were also intensely racist, both specifically targeted at her as well as at the Rohingya. The following forms of violence have been documented thus far:

● Doxxing – a gross violation involving targeted research into her personal information and publishing it online, including her NRIC, phone number, car number plate, personal photographs, etc.;

● Malicious distribution of a photograph of her son, a minor, and other personal information, often accompanied by aggressive, racist or sexist comments; 

● Threat of rape and other physical harm, and; 

● Distribution of fake and sexually explicit images. 

….One Facebook post that attacked her was shared more than 18,000 times since 23 April 2020. 

….We are deeply concerned and raise the question of whether there is indeed a concerted effort to spread the inhumane, xenophobic and widespread hate that seems to be proliferating in social media spaces on the issue of Rohingya seeking refuge in Malaysia, as a tool to divert attention from the current COVID-19 crisis response and mitigation.
When the attacks were reported to Facebook by Tengku Emma, no action was taken. Facebook responded by stating that the attacks did not amount to a breach of their Community Standards. With her information being circulated, accompanied by calls of aggression and violence, Tengku Emma was forced to deactivate her Facebook account. She subsequently lodged a police report in fear for her own safety and that of her family. 

There is, to date, no clear protection measures from either the police or Facebook regarding her reports. 

It is clear that despite direct threats to her safety and the cumulative nature of the attacks, current reporting mechanisms on Facebook are inadequate to respond, whether in timely or decisive ways, to limit harm. It is also unclear to what extent the police or the Malaysian Communications and Multimedia Commission (MCMC) are willing and able to respond to attacks such as this. 

It has been seven (7) days since Tengku Emma received her first attack, which has since ballooned outwards to tens of thousands. The only recourse she seems to have is deactivating her Facebook account, while the proponents of hatred and xenophobia continue to act unchallenged. This points to systemic gaps in policy and law in addressing xenophobia, online gender-based violence and hate speech; even where legislation exists, implementation is far from sufficient. ]

Our demands: 

It must be stressed that the recent emergence and reiteration of xenophobic rhetoric and pushback against the Rohingya, including those already in Malaysia as well as those adrift at sea seeking asylum from Malaysia, is inhumane and against international norms and standards. The current COVID-19 pandemic is not an excuse for Malaysia to abrogate its duty as part of the international community. 

1. The Malaysian government must, with immediate effect, engage with the United Nations, specifically the United Nations High Commissioner for Refugees (UNHCR), and civil society organisations to find a durable solution in support of the Rohingya seeking asylum in Malaysia on humanitarian grounds. 

2. We also call on Malaysia to implement the Rabat Plan of Action on the prohibition of advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence, through a multistakeholder framework that promotes freedom of expression based on the principles of gender equality, non-discrimination and diversity.

3. Social media platforms, meanwhile, have the obligation to review and improve their existing standards and guidelines based on the lived realities of women and marginalised communities, who are often the target of online hate speech and violence, including understanding the cumulative impact of mobbed attacks and how attacks manifest in local contexts.

4. We must end all xenophobic and racist attacks and discrimination against Rohingya who seek asylum in Malaysia; and stop online harassment, bullying and intimidation against human rights defenders working on the Rohingya crisis.

For more posts on content moderation: https://humanrightsdefenders.blog/tag/content-moderation/

https://www.malaymail.com/news/what-you-think/2020/04/28/civil-society-orgs-stand-in-solidarity-with-women-human-rights-defender-ten/1861015

How social media companies can identify and respond to threats against human rights defenders

October 15, 2019


Image from Shutterstock.

Ginna Anderson writes in ABA Abroad:

…Unfortunately, social media platforms are now a primary tool for coordinated, state-aligned actors to harass, threaten and undermine advocates. Although public shaming, death threats, defamation and disinformation are not unique to the online sphere, the nature of the internet has given them unprecedented potency. Bad actors are able to rapidly deploy their poisoned content on a vast scale. Social media companies have only just begun to recognize, let alone respond to, the problem. Meanwhile, individuals targeted through such coordinated campaigns must painstakingly flag individual pieces of content, navigate opaque corporate structures and attempt to survive the fallout. To address this crisis, companies such as Facebook, Twitter and YouTube must dramatically increase their capacity and will to engage in transparent, context-driven content moderation.

For human rights defenders, the need is urgent. … Since 2011, the ABA Center for Human Rights (CHR) has … noted with concern the coordination of “traditional” judicial harassment of defenders by governments, such as frivolous criminal charges or arbitrary detention, with online campaigns of intimidation. State-aligned online disinformation campaigns against individual defenders often precede or coincide with official investigations and criminal charges.

…

While social media companies generally prohibit incitement of violence and hate speech on their platforms, CHR has had to engage in additional advocacy with social media companies requesting the removal of specific pieces of content or accounts that target defenders. This extra advocacy has been required even where the content clearly violates a social media company’s terms of service and despite initial flagging by a defender. The situation is even more difficult where the threatening content is only recognizable with sufficient local and political context. The various platforms all rely on artificial intelligence, to varying degrees, to identify speech that violates their respective community standards. Yet current iterations of artificial intelligence are often unable to adequately evaluate context and intent.

Online intimidation and smear campaigns against defenders often rely on existing societal fault lines to demean and discredit advocates. In Guatemala, CHR recently documented a coordinated social media campaign to defame, harass, intimidate and incite violence against human rights defenders. Several were linked with so-called “net centers,” where users were reportedly paid to amplify hateful content across platforms. Often, the campaigns relied on “coded” language that harks back to Guatemala’s civil war and the genocide of Mayan communities by calling indigenous leaders communists, terrorists and guerrillas.

These terms appear to have largely escaped social media company scrutiny, perhaps because none is a racist slur per se. And yet, the proliferation of these online attacks, as well as the status of those putting out the content, is contributing to a worsening climate of violence and impunity for violence against defenders by specifically alluding to terms used to justify violence against indigenous communities. In 2018 alone, NPR reports that 26 indigenous defenders were murdered in Guatemala. In such a climate, the fear and intimidation felt by those targeted in such campaigns is not hyperbolic but based on their understanding of how violence can be sparked in Guatemala.

In order to address such attacks, social media companies must adopt policies that allow them to designate defenders as temporarily protected groups in countries that are characterized by state-coordinated or state-condoned persecution of activists. This is in line with international law that prohibits states from targeting individuals for serious harm based on their political opinion. To increase their ability to recognize and respond to persecution and online violence against human rights defenders, companies must continue to invest in their context-driven content moderation capacity, including complementing algorithmic monitoring with human content moderators well-versed in local dialects and historical and political context.

Context-driven content moderation should also take into account factors that increase the risk that online behavior will contribute to offline violence by identifying high-risk countries. These factors include a history of intergroup conflict and an overall increase in the number of instances of intergroup violence in the past 12 months; a major national political election in the next 12 months; and significant polarization of political parties along religious, ethnic or racial lines. Countries where these and other risk factors are present call for proactive approaches to identify problematic accounts and coded threats against defenders and marginalized communities, such as those shown in Equality Labs’ “Facebook India” report.

Companies should identify, monitor and be prepared to deplatform key accounts that are consistently putting out denigrating language and targeting human rights defenders. This must go hand in hand with the greater efforts that companies are finally beginning to take to identify coordinated, state-aligned misinformation campaigns. Focusing on the networks of users who abuse the platform, instead of looking solely at how the online abuse affects defenders’ rights online, will also enable companies to more quickly evaluate whether the status of the speaker increases the likelihood that others will take up any implicit call to violence or will be unduly influenced by disinformation.

This abuser-focused approach will also help to decrease the burden on defenders to find and flag individual pieces of content and accounts as problematic. Many of the human rights defenders with whom CHR works are giving up on flagging, a phenomenon we refer to as flagging fatigue. Many have become fatalistic about the level of online harassment they face. This is particularly alarming, as advocates targeted online may develop such thick skins that they are no longer able to assess when their actual risk of physical violence has increased.

Finally, it is vital that social media companies pursue, and civil society demand, transparency in content moderation policy and decision-making, in line with the Santa Clara Principles. Put forward in 2018 by a group of academic experts, organizations and advocates committed to freedom of expression online, the principles are meant to guide companies engaged in content moderation and ensure that the enforcement of their policies is “fair, unbiased, proportional and respectful of users’ rights.” In particular, the principles call upon companies to publicly report on the number of posts and accounts taken down or suspended on a regular basis, as well as to provide adequate notice and meaningful appeal to affected users.

CHR routinely supports human rights defenders facing frivolous criminal charges related to their human rights advocacy online or whose accounts and documentation have been taken down absent any clear justification. This contributes to a growing distrust of the companies among the human rights community as apparently arbitrary decisions about content moderation are leaving advocates both over- and under-protected online.

As the U.N. special rapporteur on freedom of expression explained in his 2018 report, content moderation processes must include the ability to appeal the removal, or refusal to remove, content or accounts. Lack of transparency heightens the risk that calls to address the persecution of human rights defenders online will be subverted into justifications for censorship and restrictions on speech that is protected under international human rights law.

A common response when discussing the feasibility of context-driven content moderation is to compare it to reviewing all the grains of sand on a beach. But human rights defenders are not asking for the impossible. We are merely pointing out that some of that sand is radioactive—it glows in the dark, it is lethal, and there is a moral and legal obligation upon those that profit from the beach to deal with it.

Ginna Anderson, senior counsel, joined ABA CHR in 2012. She is responsible for supporting the center’s work to advance the rights of human rights defenders and marginalized communities, including lawyers and journalists at risk. She is an expert in health and human rights, media freedom, freedom of expression and fair trial rights. As deputy director of the Justice Defenders Program since 2013, she has managed strategic litigation, fact-finding missions and advocacy campaigns on behalf of human rights defenders facing retaliation for their work in every region of the world.

http://www.abajournal.com/news/article/how-can-social-media-companies-identify-and-respond-to-threats-against-human-rights-defenders