Posts Tagged ‘content moderation’

New report: YouTube also needs scrutiny

June 22, 2022

In June 2022, Paul M. Barrett and Justin Hendrix of NYU’s Stern Center for Business and Human Rights published a very timely report: “A Platform ‘Weaponized’: How YouTube Spreads Harmful Content—and What Can Be Done About It”. YouTube, with more than 2 billion users, is the most popular social media site not just in the United States, but in India and Russia as well. Yet because long-form video is more difficult to analyze than text or still images, YouTube has received less scrutiny from researchers and policymakers than the other major platforms. This in-depth report addresses that knowledge gap.

Like other major platforms, YouTube has a dual nature: it gives two billion users access to news, entertainment, and do-it-yourself videos, but it also serves as a venue for political disinformation, public health myths, and incitement to violence.

——————————————————————-

YouTube’s role in Russia illustrates this duality. Since Russia launched its invasion of Ukraine in February 2022, YouTube has offered ordinary Russians factual information about the war, even as the Kremlin has blocked or restricted other Western-based social media platforms and pressured foreign journalists in the country to silence themselves. But for years before the brutal incursion, YouTube served as a megaphone for Vladimir Putin’s disinformation about Ukraine and its relations with the West. Despite its heft and influence, less is known about YouTube than other major social media sites.

Does YouTube send unwitting users down a ‘rabbit hole’ of extremism?

In response to reports that the platform’s own recommendations were “radicalizing” impressionable individuals, YouTube and its parent company, Google, altered the recommendation algorithm, apparently reducing the volume of recommended misinformation and conspiratorial content. But platform recommendations aren’t the only way people find potentially harmful material. Some, like the white 18-year-old accused of shooting and killing 10 Black people in a Buffalo, N.Y., grocery store, seek out videos depicting violence and bigotry. These self-motivated extremists can find affirmation and encouragement to turn their resentments into dangerous action.

A social media venue with global reach

Roughly 80% of YouTube traffic comes from outside the United States, and because of language and cultural barriers, the platform’s content moderation efforts are less successful abroad than at home. The report explores how YouTube is exploited by Hindu nationalists persecuting Muslims in India, right-wing anti-vaccine advocates in Brazil, and supporters of the military junta in Myanmar.


In Part 2, we examine YouTube’s role as the internet’s vast video library, one which has contributed to the spread of misinformation and other harmful content. In 2019, for example, YouTube reacted to complaints that its recommendations were pushing impressionable users toward extremist right-wing views. The company made a series of changes to its algorithms, resulting in a decline in recommendations of conspiratorial and false content. But recommendations are not the only way that people find videos on YouTube. A troubling amount of extremist content remains available for users who search for it. Moreover, YouTube’s extensive program for sharing advertising revenue with popular creators means that purveyors of misinformation can make a living while amplifying the grievances and resentments that foment partisan hatred, particularly on the political right.

In Part 3, we turn our attention to YouTube’s role in countries outside of the U.S., where more than 80% of the platform’s traffic originates and where a profusion of languages, ethnic tensions, and cultural variations make the company’s challenges more complicated than in its home market. Organized misogynists in South Korea, far-right ideologues in Brazil, anti-Muslim Hindu nationalists, and supporters of Myanmar’s oppressive military regime have all exploited YouTube’s extraordinary reach to spread pernicious messages and rally like-minded users. [see also: https://humanrightsdefenders.blog/2020/11/02/bbc-podcast-on-the-framing-of-video-monk-luon-sovath/]


Recommendations to the U.S. government

Allocate political capital to reduce the malign side effects of social media: President Biden’s off-the-cuff expressions of impatience with the industry aren’t sufficient. He ought to make a carefully considered statement and lend his authority to legislative efforts to extend federal oversight of the industry. Former President Obama’s recent speech at Stanford about disinformation provided a helpful foundation.
Enhance the FTC’s authority to oversee social media: Some of the issues raised in this report could be addressed by a proposal we made in a February 2022 white paper—namely, that Congress should authorize the Federal Trade Commission to use its consumer protection authority to require social media companies to disclose more data about their business models and operations, as well as provide procedurally adequate content moderation.

To YouTube:
Disclose more information about how the platform works: A place to start is explaining the criteria algorithms use to rank, recommend, and remove content—as well as how the criteria are weighted relative to one another.
Facilitate greater access to data that researchers need to study YouTube: The platform should ease its resistance to providing social scientists with information for empirical studies, including random samples of videos.
Expand and improve human review of potentially harmful content: YouTube’s parent company, Google, says that it has more than 20,000 people around the world working on content moderation, but it declines to specify how many do hands-on review of YouTube videos. Whatever that number is, it needs to grow, and outsourced moderators should be brought in-house.
Invest more in relationships with civil society and news organizations: In light of their contribution to the collapse of the advertising-based business model of many U.S. news-gathering organizations, the platforms should step up current efforts to ensure the viability of the journalism business, especially at the local level.

The NYU Center for Business and Human Rights began publishing reports on the effects of social media on democracy in the wake of Russia’s exploitation of Facebook, Twitter, and YouTube during the 2016 U.S. presidential campaign. We initially advocated for heightened industry self-regulation, in part to forestall government intervention that could lead to First Amendment complications. As the inadequacy of industry reforms has become clear, we have supplemented our calls for self-regulation with a proposal for enhancement of the Federal Trade Commission’s consumer protection authority to oversee the industry.

In Part 4, we offer a concise version of the FTC proposal, as well as a series of recommendations to YouTube itself. The report does not address the problem of YouTube hosting potentially harmful videos aimed at children and teenagers. This persistent phenomenon deserves continued scrutiny but is beyond the scope of our analysis.

VIEW FULL REPORT

https://bhr.stern.nyu.edu/blogs/2022/6/10/report-a-platform-weaponized-how-youtube-spreads-harmful-content-and-what-can-be-done-about-it

Facebook’s ‘supreme court’ is not a court

April 21, 2022

Abe Chauhan (a BCL candidate at the University of Oxford) wrote an interesting opinion piece about Facebook’s Oversight Board.

The Oversight Board is an independent institution created by Meta which reviews – in light of human rights law – the decisions of its platforms, Facebook and Instagram, on whether posts violate their policies and should be removed. The Board represents a novel, decentralised approach to protecting freedom of expression and other rights, calling into question whether private entities should perform judicial human rights functions. See also: https://humanrightsdefenders.blog/2021/03/17/facebook-launches-a-human-rights-policy-and-fund-aimed-for-human-rights-defenders/

Following calls for more robust, transparent and accountable regulation of Facebook, and after a year of consultations, interviews and research, in October 2020 the Facebook (now Meta) Oversight Board became operational. It is an independent institution, funded indirectly by Meta through a $130 million trust arrangement, which makes binding determinations on content decisions appealed by users. Board members include law professors, human rights practitioners and civil society actors from around the world. Under Article 2, Section 2 of the Board’s Charter, it applies Meta’s Community Standards “in light of human rights norms protecting free expression”. At the time of writing, the Board has released 23 decisions on posts concerning various issues ranging from unmarked graves at former residential schools in Canada to the promotion of ayahuasca. The most widely publicised of these concerned an appeal against Facebook’s decision to block then-President Donald Trump from the platform following his posts in relation to the attack on the US Capitol. The Board decided to uphold the decision, but criticised Facebook’s use of the penalty of indefinite suspension, requiring it to determine a proportionate response. In that case, the Board engaged in extensive discussion on freedom of expression under the ICCPR, deriving a requirement for proportionate limitation from international jurisprudence and developing a number of factors influencing how this assessment should operate.

What, then, is the legal character of the Board? Although it applies human rights law in representative appeals from around the world, like an international human rights court, the Board is a private body and its determinations cannot preclude individual petitions. In this light, the Board is just an additional stage in Meta’s internal review procedure. What cannot be overstated, however, is the reach of the Board’s decisions – it makes binding determinations on content decisions about the posts of Facebook’s 3 billion and Instagram’s 1.5 billion active users. Not only are its decisions wide-reaching, but they may have norm-shaping effect as a subsidiary means for interpreting human rights law. Many of the Board’s members are highly qualified human rights scholars and the unanimous decision-making system it applies increases the normative weight of decisions. While in a formal sense the Board is little more than an arbitral tribunal binding only Meta, its decisions have material effects on the rights enjoyment of many individuals around the world. This suggests that rather than merely an internal review procedure, the Board is a private human rights court. Viewed in this light, the Board fills a gap. The effectiveness of the human rights regime depends on the ability of States to protect rights. However, it is large multinational social media corporations which have factual control over many forums of expression. States can only indirectly regulate these, with potentially limited actual impact on the enjoyment of freedom of expression by users. The Board fills this ability gap by implementing rights adjudication internally within Meta.

Some might welcome this novel approach as it signals the greater horizontalisation of human rights. However, while it may be correct that these organisations ought to bear obligations to respect, protect and promote human rights, the role the Board is playing goes much further than this. In making final determinations on human rights limitations by Meta, it subsumes – de facto rather than de jure – part of the exclusively State function of human rights adjudication. This is problematic for two reasons. Firstly, it is unacceptable for private institutions to make rights determinations with wide-ranging effects, absent a delegation of this power by the State. Secondly, alongside this absence of legitimacy, the Board may slowly diminish rights protection. Even if it acts in the public interest, the Board is free to develop its own jurisprudence. As Benesch suggests, over time it will have to depart from international human rights law because this field was not designed for application by private companies and its norms and principles need to be adapted accordingly.

While the Board may well be an effective institution – its initial decisions apply close analysis to human rights issues and frequently overturn Meta’s content decisions – it is highly questionable that a private institution should perform the exclusive State function of human rights adjudication.

https://ohrh.law.ox.ac.uk/the-facebook-supreme-court-and-private-human-rights-adjudication/

25 March: IACHR Hearing on Internet Content Moderation

March 22, 2021

The Inter-American Commission on Human Rights (IACHR) is in the middle of its 179th Period of Sessions, which is being held again in an all-virtual format.  The IACHR has called a hearing on its own initiative (an ex officio hearing) on the important topic of content moderation: “Internet content moderation and freedom of expression in the Americas”, scheduled for Thursday 25 March 2021, from 2-3:30pm ET.

For some of my earlier posts on this topic, see: https://humanrightsdefenders.blog/tag/content-moderation/

To register to watch the virtual hearing on Internet Content Moderation, visit: https://cidh-org.zoom.us/j/85942567179?pwd=SWY1cTVTOUp6MmhyTjR6bFNPZTV1Zz09

Event: IACHR Hearing on Internet Content Moderation and the Freedom of Expression in the Americas

Arab Spring: information technology platforms no longer support human rights defenders in the Middle East and North Africa

December 18, 2020

Jason Kelley of the Electronic Frontier Foundation (EFF), on 17 December 2020, summarized a joint statement by over 30 NGOs saying that the platform policies and content moderation procedures of the tech giants now too often lead to the silencing and erasure of critical voices from across the region. Arbitrary and non-transparent account suspension and removal of political and dissenting speech have become so frequent and systematic in the region that they cannot be dismissed as isolated incidents or the result of transitory errors in automated decision-making.

Young people protest in Morocco, 2011, photo by Magharebia

This year marks the tenth anniversary of what became known as the “Arab Spring”, in which activists and citizens across the Middle East and North Africa (MENA) used social media to document the conditions in which they lived, to push for political change and social justice, and to draw the world’s attention to their movement. For many, it was the first time they had seen how the Internet could play a role in pushing for human rights across the world. Emerging social media platforms like Facebook, Twitter and YouTube all basked in the reflected glory of press coverage that centered their part in the protests, often to the exclusion of those who were actually on the streets. The years after the uprisings failed to live up to the optimism of the time. Offline, the authoritarian backlash against the democratic protests has meant that many of those who fought for justice a decade ago are still fighting now.

The letter asks for several concrete measures to ensure that users across the region are treated fairly and are able to express themselves freely:

  • Do not engage in arbitrary or unfair discrimination.
  • Invest in the regional expertise to develop and implement context-based content moderation decisions aligned with human rights frameworks.
  • Pay special attention to cases arising from war and conflict zones.
  • Preserve restricted content related to cases arising from war and conflict zones.
  • Go beyond public apologies for technical failures: provide greater transparency and notice, and offer meaningful and timely appeals for users by implementing the Santa Clara Principles on Transparency and Accountability in Content Moderation.

Content moderation policies are not only critical to ensuring robust political debate. They are key to expanding and protecting human rights.  Ten years out from those powerful protests, it’s clear that authoritarian and repressive regimes will do everything in their power to stop free and open expression. Platforms have an obligation to note and act on the effects content moderation has on oppressed communities, in MENA and elsewhere. [see also: https://humanrightsdefenders.blog/2020/06/03/more-on-facebook-and-twitter-and-content-moderation/]

In 2012, Mark Zuckerberg, CEO and founder of Facebook, wrote:

By giving people the power to share, we are starting to see people make their voices heard on a different scale from what has historically been possible. These voices will increase in number and volume. They cannot be ignored. Over time, we expect governments will become more responsive to issues and concerns raised directly by all their people rather than through intermediaries controlled by a select few.

Instead, governments around the world have chosen authoritarianism, and platforms have contributed to the repression. It’s time for that to end.

Read the full letter demanding that Facebook, Twitter, and YouTube stop silencing critical voices from the Middle East and North Africa, reproduced below:

17 December 2020

Open Letter to Facebook, Twitter, and YouTube: Stop silencing critical voices from the Middle East and North Africa

Ten years ago today, 26-year-old Tunisian street vendor Mohamed Bouazizi set himself on fire in protest over injustice and state marginalization, igniting mass uprisings in Tunisia, Egypt, and other countries across the Middle East and North Africa.

As we mark the 10th anniversary of the Arab Spring, we, the undersigned activists, journalists, and human rights organizations, have come together to voice our frustration and dismay at how platform policies and content moderation procedures all too often lead to the silencing and erasure of critical voices from marginalized and oppressed communities across the Middle East and North Africa.

The Arab Spring is historic for many reasons, and one of its outstanding legacies is how activists and citizens have used social media to push for political change and social justice, cementing the internet as an essential enabler of human rights in the digital age.   

Social media companies boast of the role they play in connecting people. As Mark Zuckerberg famously wrote in his 2012 Founder’s Letter:

“By giving people the power to share, we are starting to see people make their voices heard on a different scale from what has historically been possible. These voices will increase in number and volume. They cannot be ignored. Over time, we expect governments will become more responsive to issues and concerns raised directly by all their people rather than through intermediaries controlled by a select few.”

Zuckerberg’s prediction was wrong. Instead, more governments around the world have chosen authoritarianism, and platforms have contributed to their repression by making deals with oppressive heads of state; opening doors to dictators; and censoring key activists, journalists, and other changemakers throughout the Middle East and North Africa, sometimes at the behest of other governments:

  • Tunisia: In June 2020, Facebook permanently disabled more than 60 accounts of Tunisian activists, journalists, and musicians on scant evidence. While many were reinstated, thanks to the quick reaction from civil society groups, accounts of Tunisian artists and musicians still have not been restored. We sent a coalition letter to Facebook on the matter but we didn’t receive a public response.
  • Syria: In early 2020, Syrian activists launched a campaign to denounce Facebook’s decision to take down/disable thousands of anti-Assad accounts and pages that documented war crimes since 2011, under the pretext of removing terrorist content. Despite the appeal, a number of those accounts remain suspended. Similarly, Syrians have documented how YouTube is literally erasing their history.
  • Palestine: Palestinian activists and social media users have been campaigning since 2016 to raise awareness around social media companies’ censorial practices. In May 2020, at least 52 Facebook accounts of Palestinian activists and journalists were suspended, and more have since been restricted. Twitter suspended the account of a verified media agency, Quds News Network, reportedly on suspicion that the agency was linked to terrorist groups. Requests to Twitter to look into the matter have gone unanswered. Palestinian social media users have also expressed concern numerous times about discriminatory platform policies.
  • Egypt: In early October 2019, Twitter suspended en masse the accounts of Egyptian dissidents living in Egypt and across the diaspora, directly following the eruption of anti-Sisi protests in Egypt. Twitter suspended the account of one activist with over 350,000 followers in December 2017, and the account still has yet to be restored. The same activist’s Facebook account was also suspended in November 2017 and restored only after international intervention. YouTube had removed his account earlier, in 2007.

Examples such as these are far too numerous, and they contribute to the widely shared perception among activists and users in MENA and the Global South that these platforms do not care about them, and often fail to protect human rights defenders when concerns are raised.  

Arbitrary and non-transparent account suspension and removal of political and dissenting speech have become so frequent and systematic that they cannot be dismissed as isolated incidents or the result of transitory errors in automated decision-making. 

While Facebook and Twitter can be swift in responding to public outcry from activists or private advocacy by human rights organizations (particularly in the United States and Europe), in most cases responses to advocates in the MENA region leave much to be desired. End-users are frequently not informed of which rule they violated, and are not provided a means to appeal to a human moderator. 

Remedy and redress should not be a privilege reserved for those who have access to power or can make their voices heard. The status quo cannot continue. 

The MENA region has one of the world’s worst records on freedom of expression, and social media remains critical for helping people connect, organize, and document human rights violations and abuses. 

We urge you to not be complicit in censorship and erasure of oppressed communities’ narratives and histories, and we ask you to implement the following measures to ensure that users across the region are treated fairly and are able to express themselves freely:

  • Do not engage in arbitrary or unfair discrimination. Actively engage with local users, activists, human rights experts, academics, and civil society from the MENA region to review grievances. Regional political, social, cultural context(s) and nuances must be factored in when implementing, developing, and revising policies, products and services. 
  • Invest in the necessary local and regional expertise to develop and implement context-based content moderation decisions aligned with human rights frameworks in the MENA region. A bare minimum would be to hire content moderators who understand the various and diverse dialects of spoken Arabic in the twenty-two Arab states. Those moderators should be provided with the support they need to do their job safely, healthily, and in consultation with their peers, including senior management.
  • Pay special attention to cases arising from war and conflict zones to ensure content moderation decisions do not unfairly target marginalized communities. For example, documentation of human rights abuses and violations is a legitimate activity distinct from disseminating or glorifying terrorist or extremist content. As noted in a recent letter to the Global Internet Forum to Counter Terrorism, more transparency is needed regarding definitions and moderation of terrorist and violent extremist content (TVEC).
  • Preserve restricted content related to cases arising from war and conflict zones that Facebook makes unavailable, as it could serve as evidence for victims and organizations seeking to hold perpetrators accountable. Ensure that such content is made available to international and national judicial authorities without undue delay.
  • Public apologies for technical errors are not sufficient when erroneous content moderation decisions are not changed. Companies must provide greater transparency, notice, and offer meaningful and timely appeals for users. The Santa Clara Principles on Transparency and Accountability in Content Moderation, which Facebook, Twitter, and YouTube endorsed in 2019, offer a baseline set of guidelines that must be immediately implemented. 

Signed,

Access Now
Arabic Network for Human Rights Information — ANHRI
Article 19
Association for Progressive Communications — APC
Association Tunisienne de Prévention Positive
Avaaz
Cairo Institute for Human Rights Studies (CIHRS)
The Computational Propaganda Project
Daaarb — News — website
Egyptian Initiative for Personal Rights
Electronic Frontier Foundation
Euro-Mediterranean Human Rights Monitor
Global Voices
Gulf Centre for Human Rights (GCHR)
Hossam el-Hamalawy, journalist and member of the Egyptian Revolutionary Socialists Organization
Humena for Human Rights and Civic Engagement
IFEX
Ilam- Media Center For Arab Palestinians In Israel
ImpACT International for Human Rights Policies
Initiative Mawjoudin pour l’égalité
Iraqi Network for Social Media – INSMnetwork
I WATCH Organisation (Transparency International — Tunisia)
Khaled Elbalshy – Daaarb website – Editor in Chief
Mahmoud Ghazayel, Independent
Marlena Wisniak, European Center for Not-for-Profit Law
Masaar — Technology and Law Community
Michael Karanicolas, Wikimedia/Yale Law School Initiative on Intermediaries and Information
Mohamed Suliman, Internet activist
My.Kali magazine — Middle East and North Africa
Palestine Digital Rights Coalition (PDRC)
The Palestine Institute for Public Diplomacy
Pen Iraq
Quds News Network
Ranking Digital Rights
Rima Sghaier, Independent
Sada Social Center
Skyline International for Human Rights
SMEX
Syrian Center for Media and Freedom of Expression (SCM)
The Tahrir Institute for Middle East Policy (TIMEP)
Taraaz
Temi Lasade-Anderson, Digital Action
WITNESS
Vigilance Association for Democracy and the Civic State — Tunisia
7amleh – The Arab Center for the Advancement of Social Media

https://www.eff.org/deeplinks/2020/12/decade-after-arab-spring-platforms-have-turned-their-backs-critical-voices-middle

Facebook and YouTube are allowing themselves to become tools of the Vietnamese authorities’ censorship and harassment

December 1, 2020

On 1 December 2020, Amnesty International published a new report on how Facebook and YouTube are allowing themselves to become tools of the Vietnamese authorities’ censorship and harassment of the country’s population, in an alarming sign of how these companies could increasingly operate in repressive countries. [see also: https://humanrightsdefenders.blog/2020/06/03/more-on-facebook-and-twitter-and-content-moderation/]

The 78-page report, “‘Let us Breathe!’: Censorship and Criminalization of Online Expression in Viet Nam”, documents the systematic repression of peaceful online expression in Viet Nam, including the widespread “geo-blocking” of content deemed critical of the authorities, all while groups affiliated with the government deploy sophisticated campaigns on these platforms to harass everyday users into silence and fear.

The report is based on dozens of interviews with human rights defenders and activists, including former prisoners of conscience, lawyers, journalists and writers, in addition to information provided by Facebook and Google. It also reveals that Viet Nam is currently holding 170 prisoners of conscience, of whom 69 are behind bars solely for their social media activity. This represents a significant increase in the number of prisoners of conscience estimated by Amnesty International in 2018.

“In the last decade, the right to freedom of expression flourished on Facebook and YouTube in Viet Nam. More recently, however, authorities began focusing on peaceful online expression as an existential threat to the regime,” said Ming Yu Hah, Amnesty International’s Deputy Regional Director for Campaigns.

“Today these platforms have become hunting grounds for censors, military cyber-troops and state-sponsored trolls. The platforms themselves are not merely letting it happen – they’re increasingly complicit.”

In 2018, Facebook’s income from Viet Nam neared US$1 billion – almost one third of all revenue from Southeast Asia. Google, which owns YouTube, earned US$475 million in Viet Nam during the same period, mainly from YouTube advertising. The size of these profits underlines the importance for Facebook and Google of maintaining market access in Viet Nam.

In April 2020, Facebook announced it had agreed to “significantly increase” its compliance with requests from the Vietnamese government to censor “anti-state” posts. It justified this policy shift by claiming the Vietnamese authorities were deliberately slowing traffic to the platform as a warning to the company.

Last month, in Facebook’s latest Transparency Report – its first since it revealed its policy of increased compliance with the Vietnamese authorities’ censorship demands – the company reported a 983% increase in content restrictions based on local law as compared with the previous reporting period, from 77 to 834. Meanwhile, YouTube has consistently won praise from Vietnamese censors for its relatively high rate of compliance with censorship demands.
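(Worked out from the report’s own figures: restrictions rose from 77 to 834, an increase of 757, and 757 ÷ 77 ≈ 9.83 – roughly a 983% increase.)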

State-owned media reported Information Minister Nguyen Manh Hung as saying in October that compliance with the removal of “bad information, propaganda against the Party and the State” was higher than ever, with Facebook and Google complying with 95% and 90% of censorship requests, respectively.

Based on dozens of testimonies and supporting evidence, Amnesty International’s report shows how Facebook and YouTube’s increasing censorship of content in Viet Nam operates in practice.

In some cases, users see their content censored under vaguely worded local laws, including offences such as “abusing democratic freedoms” under the country’s Criminal Code. Amnesty International views these laws as inconsistent with Viet Nam’s obligations under international human rights law. Facebook then “geo-blocks” content, meaning it becomes invisible to anyone accessing the platform in Viet Nam.
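To make concrete what this country-scoped blocking means in practice – the post stays up globally but is filtered out for viewers in one jurisdiction – here is a minimal, purely illustrative sketch in Python. The names (Post, restricted_countries, visible_to) are hypothetical and do not describe Facebook’s actual systems.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    body: str
    # Countries where the post is restricted under local law,
    # e.g. {"VN"} for a post geo-blocked in Viet Nam.
    restricted_countries: set[str] = field(default_factory=set)

def visible_to(post: Post, viewer_country: str) -> bool:
    """A geo-blocked post disappears only for viewers located in a
    restricted country; viewers everywhere else still see it."""
    return viewer_country not in post.restricted_countries

post = Post("p1", "post deemed critical of the authorities",
            restricted_countries={"VN"})
print(visible_to(post, "VN"))  # False - invisible inside Viet Nam
print(visible_to(post, "TH"))  # True - still visible elsewhere
```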

Nguyen Van Trang, a pro-democracy activist now seeking asylum in Thailand, told Amnesty International that in May 2020, Facebook notified him that one of his posts had been restricted due to “local legal restrictions”. Since then, Facebook has blocked every piece of content he has tried to post containing the names of senior members of the Communist Party. 

Nguyen Van Trang has experienced similar restrictions on YouTube, which, unlike Facebook, gave him the option to appeal such restrictions. Some appeals have succeeded and others not, without YouTube providing any explanation.

Truong Chau Huu Danh is a well-known freelance journalist with 150,000 followers and a verified Facebook account. He told Amnesty International that between 26 March and 8 May 2020, he posted hundreds of pieces of content about a ban on rice exports and the high-profile death penalty case of Ho Duy Hai. In June, he realized these posts had all vanished without any notification from Facebook whatsoever.

Amnesty International heard similar accounts from other Facebook users, particularly when they tried to post about a high-profile land dispute in the village of Dong Tam, which pitted local villagers against the military-run telecommunications company Viettel. The dispute culminated in a confrontation between villagers and security forces in January 2020 that left the village leader and three police officers dead.

After Facebook announced its new policy in April 2020, land rights activists Trinh Ba Phuong and Trinh Ba Tu reported that all the content they had shared about the Dong Tam incident had been removed from their timelines without their knowledge and without notification.

On 24 June 2020, the pair were arrested and charged with “making, storing, distributing or disseminating information, documents and items against the Socialist Republic of Vietnam” under Article 117 of the Criminal Code after they reported extensively on the Dong Tam incident. They are currently in detention. Since their arrests, their Facebook accounts have disappeared under unknown circumstances. Amnesty International considers both Trinh Ba Phuong and Trinh Ba Tu to be prisoners of conscience.

The Vietnamese authorities’ campaign of repression often results in the harassment, intimidation, prosecution and imprisonment of people for their social media use. There are currently 170 prisoners of conscience imprisoned in Viet Nam, the highest number ever recorded in the country by Amnesty International. Nearly two in five (40%) have been imprisoned because of their peaceful social media activity.

Twenty-one of the 27 prisoners of conscience jailed in 2020, or 78%, were prosecuted because of their peaceful online activity under Articles 117 or 331 of the Criminal Code – the same repressive provisions that often form the basis of ‘local legal restrictions’ implemented by Facebook and YouTube.

These individuals’ supposed “crimes” include peacefully criticizing the authorities’ COVID-19 response on Facebook and sharing independent information about human rights online.

For every prisoner of conscience behind bars, there are countless people in Viet Nam who see this pattern of repression and intimidation and are understandably terrified about speaking their minds,” said Ming Yu Hah.

Amnesty International has documented dozens of incidents in recent years in which human rights defenders have received messages meant to harass and intimidate, including death threats. The systematic and organized nature of these harassment campaigns consistently bears the hallmarks of state-sponsored cyber-troops such as Du Luan Vien or “public opinion shapers” – people recruited and managed by the Communist Party of Viet Nam (CPV)’s Department of Propaganda to engage in psychological warfare online.

The activities of Du Luan Vien are complemented by those of “Force 47”, a cyberspace military battalion made up of some 10,000 state security forces whose function is to “fight against wrong views and distorted information on the internet”.

While “Force 47” and groups such as Du Luan Vien operate opaquely, they are known to engage in mass reporting campaigns targeting human rights-related content, often leading to its removal and to account suspensions by Facebook and YouTube.

Additionally, Amnesty International’s investigation documented multiple cases of bloggers and social media users being physically attacked because of their posts, by police or by plainclothes assailants who operate with the apparent acquiescence of state authorities and with virtually no accountability for such crimes.


Putting an end to complicity

The Vietnamese authorities must stop stifling freedom of expression online. Amnesty International is calling for all prisoners of conscience in Viet Nam to be released immediately and unconditionally and for the amendment of repressive laws that muzzle freedom of expression.

Companies – including Facebook and Google – have a responsibility to respect all human rights wherever they operate. They should respect the right to freedom of expression in their content moderation decisions globally, regardless of local laws that muzzle freedom of expression. Tech giants should also overhaul their content moderation policies to ensure their decisions align with international human rights standards.

In October 2020, Facebook launched a global Oversight Board – presented as the company’s independent “Supreme Court” and its solution to the human rights challenges presented by content moderation. Amnesty International’s report reveals, however, that the Board’s bylaws will prevent it from reviewing the company’s censorship actions pursuant to local law in countries like Viet Nam.

“It’s increasingly obvious that the Oversight Board is incapable of solving Facebook’s human rights problems. Facebook should expand the scope of the Oversight Board to include content moderation decisions pursuant to local law; if not, the Board – and Facebook – will have again failed Facebook users,” said Ming Yu Hah.

[see also: https://humanrightsdefenders.blog/2020/04/11/algorithms-designed-to-suppress-isis-content-may-also-suppress-evidence-of-human-rights-violations/]

“Far from the public relations fanfare, countless people who dare to speak their minds in Viet Nam are being silenced. The precedent set by this complicity is a grave blow to freedom of expression around the world.”

https://www.amnesty.org/en/latest/news/2020/12/viet-nam-tech-giants-complicit/

https://www.theguardian.com/world/2020/dec/01/facebook-youtube-google-accused-complicity-vietnam-repression

https://thediplomat.com/2020/07/facebook-vietnams-fickle-partner-in-crime/

Facebook engineers resign due to Zuckerberg’s political stance

June 6, 2020

Yen Palec wrote on 6 June 2020 that a group of Facebook employees had recently resigned because they do not agree with Mark Zuckerberg’s political stance. Some engineers are condemning the executive for his refusal to act on issues of politics and police brutality.

See: https://humanrightsdefenders.blog/2020/06/03/more-on-facebook-and-twitter-and-content-moderation/

In a blog post, the engineers claim that Facebook has become a “platform that enables politicians to radicalize individuals and glorify violence.” Several employees, many of whom are working from home due to the pandemic, are criticizing the company. While some claim that the First Amendment protects these hate posts, many argue that the company has let things go too far…

These criticisms are coming from some of Facebook’s early and long-tenured employees. Among those venting their criticism are Dave Willner and Brandee Barker. They claim that the company’s policy may result in a double standard when it comes to political speech…

In terms of political speech, Twitter’s action set the standard for tech companies to follow. While human rights activists and free speech defenders rallied to Twitter’s side, one major platform did not follow: Facebook. As the biggest social media platform in the world, the company has the power to influence almost any political debate.

https://micky.com.au/facebook-engineers-resign-due-to-zuckerbergs-political-stance/

More on Facebook and Twitter and content moderation

June 3, 2020

On 2 June 2020 many media outlets (here Natasha Kuma) wrote about the ‘hot potato’ in the social media debate: which posts are harmful and should be deleted or given a warning. It is interesting to note that the European Commission supported the unprecedented decision of Twitter to mark President Trump’s message about the situation in Minneapolis as violating the company’s rules on the glorification of violence.

EU Commissioner Thierry Breton said he welcomed Twitter’s contribution as a move towards the respected European approach. Breton also wrote: “Recent events in the United States show that we need to find the right answers to difficult questions. What should be the role of digital platforms in terms of preventing the flow of misinformation during the election, or the crisis in health care? How to prevent the spread of hate speech on the Internet?” Vice-President of the European Commission Věra Jourová, in turn, said that politicians should respond to criticism with facts, not resorting to threats and attacks.

Some employees of Facebook staged a virtual protest against Mark Zuckerberg’s decision not to take any action on Trump’s statements. The leaders of three American civil rights groups, after a conversation with Zuckerberg and COO Sheryl Sandberg, released a joint statement in which they say that human rights defenders were not satisfied with Zuckerberg’s explanation of his position: “He (Zuckerberg) refuses to acknowledge that Facebook is promoting Trump’s call for violence against the protesters. Mark sets a very dangerous precedent.”

————-

Earlier – on 14 May 2020 – David Cohen wrote about Facebook having outlined lessons learned and steps it has taken as a result of its human rights impact assessments in Cambodia, Indonesia, and Sri Lanka.

Facebook shared results from the human rights impact assessments it commissioned in 2018 to evaluate the role of its services in Cambodia, Indonesia and Sri Lanka.

Director of human rights Miranda Sissons and product policy manager, human rights, Alex Warofka said in a Newsroom post, “Freedom of expression is a foundational human right that allows for the free flow of information. We’re reminded how vital this is, in particular, as the world grapples with Covid-19, and accurate and authoritative information is more important than ever. Human rights defenders know this and fight for these freedoms every day. For Facebook, which stands for giving people voice, these rights are core to why we exist.”

Sissons and Warofka said that since this research was conducted, Facebook took steps to formalize an approach to determine which countries require more investment, including increased staffing, product changes and further research.

Facebook worked with BSR on the assessment of its role in Cambodia, and with Article One for Indonesia and Sri Lanka.

Recommendations that were similar across all three reports:

  • Improving corporate accountability around human rights.
  • Updating community standards and improving enforcement.
  • Investing in changes to platform architecture to promote authoritative information and reduce the spread of abusive content.
  • Improving reporting mechanisms and response times.
  • Engaging more regularly and substantively with civil society organizations.
  • Increasing transparency so that people better understand Facebook’s approach to content, misinformation and News Feed ranking.
  • Continuing human rights due diligence.

…Key updates to the social network’s community standards included a policy to remove verified misinformation that contributes to the risk of imminent physical harm, as well as protections for vulnerable groups (veiled women, LGBTQ+ individuals, human rights activists) who would run the risk of offline harm if they were “outed.”

Engagement with civil society organizations was formalized, and local fact-checking partnerships were bolstered in Indonesia and Sri Lanka.

Sissons and Warofka concluded, “As we work to protect human rights and mitigate the adverse impacts of our platform, we have sought to communicate more transparently and build trust with rights holders. We also aim to use our presence in places like Sri Lanka, Indonesia and Cambodia to advance human rights, as outlined in the United Nations Guiding Principles on Business and Human Rights and in Article One and BSR’s assessments. In particular, we are deeply troubled by the arrests of people who have used Facebook to engage in peaceful political expression, and we will continue to advocate for freedom of expression and stronger protections of user data.”

https://www.adweek.com/digital/facebook-details-human-rights-impact-assessments-in-cambodia-indonesia-sri-lanka/

————

But it is not all roses for Twitter either: On 11 May 2020 Frances Eve (deputy director of research at Chinese Human Rights Defenders) wrote about Twitter becoming the “Chinese Government’s Double Weapon: Punishing Dissent and Propagating Disinformation”.

She relates the story of former journalist Zhang Jialong whose “criminal activity,” according to the prosecutor’s charge sheet, is that “from 2016 onwards, the defendant Zhang Jialong used his phone and computer…. many times to log onto the overseas platform ‘Twitter,’ and through the account ‘张贾龙@zhangjialong’ repeatedly used the platform to post and retweet a great amount of false information that defamed the image of the [Chinese Communist] Party, the state, and the government.”…..

Human rights defenders like Zhang are increasingly being accused of using Twitter, alongside Chinese social media platforms like Weibo, WeChat, and QQ, to commit the “crime” of “slandering” the Chinese Communist Party or the government by expressing their opinions. As many Chinese human rights activists have increasingly tried to express themselves uncensored on Twitter, police have stepped up their monitoring of the platform. Thirty minutes after activist Deng Chuanbin sent a tweet on May 16, 2019 that referenced the 30th anniversary of the Tiananmen Massacre, Sichuan police were outside his apartment building. He has been in pre-trial detention ever since, accused of “picking quarrels and provoking trouble.”

…..While the Chinese government systematically denies Chinese people their right to express themselves freely on the Internet, … the government has aggressively used blocked western social media platforms like Twitter to promote its propaganda and launch disinformation campaigns overseas…

Zhang Jialong’s last tweet was an announcement of the birth of his daughter on June 8, 2019. He should be free and be able to watch her grow up. She deserves to grow up in a country where her father isn’t jailed for his speech.

https://www.vice.com/en_us/article/v7ggvy/chinas-unleashing-a-propaganda-wolfpack-on-twitter-even-though-citizens-go-to-jail-for-tweeting

To see some other posts on content moderation: https://humanrightsdefenders.blog/tag/content-moderation/

Emi Palmor’s selection to Facebook oversight board criticised by Palestinian NGOs

May 16, 2020

After reporting on the Saudi criticism regarding the composition of Facebook’s new Oversight Board [https://humanrightsdefenders.blog/2020/05/13/tawakkol-karman-on-facebooks-oversight-board-doesnt-please-saudis/], here is the position of Palestinian civil society organizations, who are very unhappy with the selection of the former General Director of the Israeli Ministry of Justice.

On 15 May 2020, MENAFN – Palestine News Network – reported that Palestinian civil society organizations condemned the selection of Emi Palmor, the former General Director of the Israeli Ministry of Justice, to Facebook’s Oversight Board and raised the alarm about the impact her role will have in further shrinking the space for freedom of expression online and the protection of human rights. While it is important that the Members of the Oversight Board be diverse, it is equally essential that they be known as leaders in upholding the rule of law and protecting human rights worldwide.

Under Emi Palmor’s direction, the Israeli Ministry of Justice petitioned Facebook to censor legitimate speech of human rights defenders and journalists because it was deemed politically undesirable. This is contrary to international human rights law standards and to recommendations issued by the United Nations (UN) Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, as well as by digital rights experts and activists, who argue that censorship must be rare and well justified to protect freedom of speech, and that companies should develop tools that ‘prevent or mitigate the human rights risks caused by national laws or demands inconsistent with international standards.’

During Palmor’s time at the Israeli Ministry of Justice (2014-2019), the Ministry established the Israeli Cyber Unit, ……….

Additionally, as documented in Facebook’s Transparency Report, since 2016 there has been an increase in the number of Israeli government requests for data, which now total over 700, 50 percent of which were submitted as ‘emergency requests’ and were not related to legal processes. These are not isolated attempts to restrict Palestinian digital rights and freedom of expression online. Instead, they fall within the context of a widespread and systematic attempt by the Israeli government, particularly through the Cyber Unit formerly headed by Emi Palmor, to silence Palestinians, to remove social media content critical of Israeli policies and practices, and to smear and delegitimize human rights defenders, activists and organizations seeking to challenge Israeli rights abuses against the Palestinian people.

 

Tawakkol Karman on Facebook’s Oversight Board doesn’t please Saudis

May 13, 2020

Nobel Peace Prize laureate Yemeni Tawakkol Karman (AFP)


On 10 May 2020 AlBawaba reported that Facebook had appointed Yemeni Nobel Peace Prize laureate Tawakkol Karman as a member of its newly-launched Oversight Board, an independent committee which will have the final say in whether Facebook and Instagram should allow or remove specific content. [ see also: https://humanrightsdefenders.blog/2020/04/11/algorithms-designed-to-suppress-isis-content-may-also-suppress-evidence-of-human-rights-violations/]

Karman, a human rights activist, journalist and politician, won the Nobel Peace Prize in 2011 for her role in Yemen’s Arab Spring uprising. Her appointment to the Facebook body has led to sharp reactions on Saudi social media. She said that she has been subjected to a campaign of online harassment by Saudi media ever since she was appointed to Facebook’s Oversight Board. In a Twitter post on Monday she said, “I am subjected to widespread bullying & a smear campaign by #Saudi media & its allies.” Karman referred to the 2018 killing of Jamal Khashoggi, indicating fears that she could be the target of physical violence.

Tawakkol Karman @TawakkolKarman

“I am subjected to widespread bullying & a smear campaign by [Saudi Arabia]’s media & its allies. What is more important now is to be safe from the saw used to cut [Khashoggi]’s body into pieces. I am on my way to … & I consider this as a report to the international public opinion.”

However, previous Saudi Twitter campaigns have been proven by social media analysts to be manufactured and unrepresentative of public opinion, with thousands of suspicious Twitter accounts churning out near-identical tweets in support of the Saudi government line. The Yemeni human rights organization SAM for Rights and Liberties condemned the campaign against Karman, saying in a statement that “personalities close to the rulers of Saudi Arabia and the Emirates, as well as newspapers and satellite channels financed by these two regimes had joined a campaign of hate, and this was not a normal manifestation of responsible expression of opinion“.

Tengku Emma – spokesperson for Rohingyas – attacked online in Malaysia

April 28, 2020
In an open letter in the Malay Mail of 28 April 2020, over 50 civil society organisations (CSOs) and human rights activists expressed their shock and condemnation at the mounting racist and xenophobic attacks in Malaysia against the Rohingya people, and especially the targeted cyber attacks against Tengku Emma Zuriana Tengku Azmi, the European Rohingya Council’s representative in Malaysia (https://www.theerc.eu/about/), and other concerned individuals, for expressing their opinion and support for the rights of the Rohingya people seeking refuge in Malaysia.

[On 21 April 2020, Tengku Emma had her letter regarding her concern over the pushback of a Rohingya boat to sea published in the media. Since then she has been subjected to mobbed attacks and intimidation online, especially on Facebook. The attacks particularly targeted her gender, with some including calls for rape. They were also intensely racist, directed both at her personally and at the Rohingya. The following forms of violence have been documented thus far: 

● Doxxing – a gross violation involving targeted research into her personal information, which was then published online, including her NRIC, phone number, car number plate, personal photographs, etc.; 

● Malicious distribution of a photograph of her son, a minor, and other personal information, often accompanied by aggressive, racist or sexist comments; 

● Threats of rape and other physical harm; and 

● Distribution of fake and sexually explicit images. 

….One Facebook post that attacked her has been shared more than 18,000 times since 23 April 2020. 

….We are deeply concerned, and raise the question of whether there is indeed a concerted effort to spread the inhumane and xenophobic hate that seems to be proliferating in social media spaces around the issue of Rohingya seeking refuge in Malaysia, as a tool to divert attention from the current COVID-19 crisis response and mitigation.
When the attacks were reported to Facebook by Tengku Emma, no action was taken. Facebook responded by stating that the attacks did not amount to a breach of its Community Standards. With her information being circulated, accompanied by calls for aggression and violence, Tengku Emma was forced to deactivate her Facebook account. She subsequently lodged a police report in fear for her own safety and that of her family. 

There are, to date, no clear protection measures from either the police or Facebook regarding her reports. 

It is clear that despite direct threats to her safety and the cumulative nature of the attacks, current reporting mechanisms on Facebook are inadequate to respond, whether in timely or decisive ways, to limit harm. It is also unclear to what extent the police or the Malaysian Communications and Multimedia Commission (MCMC) are willing and able to respond to attacks such as this. 

It has been seven (7) days since Tengku Emma received the first attack, and the attacks have since ballooned into the tens of thousands. The only recourse she seems to have is deactivating her Facebook account, while the proponents of hatred and xenophobia continue to act unchallenged. This points to systemic gaps in policy and law in addressing xenophobia, online gender-based violence and hate speech; even where legislation exists, implementation is far from sufficient.]

Our demands: 

It must be stressed that the recent emergence and reiteration of xenophobic rhetoric and pushback against the Rohingya, including those already in Malaysia as well as those adrift at sea seeking asylum from Malaysia, is inhumane and against international norms and standards. The current COVID-19 pandemic is not an excuse for Malaysia to abrogate its duty as part of the international community. 

1. The Malaysian government must, with immediate effect, engage with the United Nations, specifically the United Nations High Commissioner for Refugees (UNHCR), and civil society organisations to find a durable solution in support of the Rohingya seeking asylum in Malaysia on humanitarian grounds. 

2. We also call on Malaysia to implement the Rabat Plan of Action on the prohibition of advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence, through a multistakeholder framework that promotes freedom of expression based on the principles of gender equality, non-discrimination and diversity.

3. Social media platforms, meanwhile, have the obligation to review and improve their existing standards and guidelines based on the lived realities of women and marginalised communities, who are often the target of online hate speech and violence, including understanding the cumulative impact of mobbed attacks and how attacks manifest in local contexts.

4. We must end all xenophobic and racist attacks and discrimination against Rohingya who seek asylum in Malaysia; and stop online harassment, bullying and intimidation against human rights defenders working on the Rohingya crisis.

For more posts on content moderation: https://humanrightsdefenders.blog/tag/content-moderation/

https://www.malaymail.com/news/what-you-think/2020/04/28/civil-society-orgs-stand-in-solidarity-with-women-human-rights-defender-ten/1861015