Posts Tagged ‘content moderation’

Emi Palmor’s selection to Facebook oversight board criticised by Palestinian NGOs

May 16, 2020

After reporting on the Saudi criticism regarding the composition of Facebook’s new oversight board [https://humanrightsdefenders.blog/2020/05/13/tawakkol-karman-on-facebooks-oversight-board-doesnt-please-saudis/], here is the position of Palestinian civil society organizations, which are very unhappy with the selection of the former General Director of the Israeli Ministry of Justice.

On 15 May 2020, MENAFN – Palestine News Network – reported that Palestinian civil society organizations condemn the selection of Emi Palmor, the former General Director of the Israeli Ministry of Justice, to Facebook’s Oversight Board and raise the alarm about the impact her role will have in further shrinking the space for freedom of expression online and the protection of human rights. While it is important that the Members of the Oversight Board be diverse, it is equally essential that they be known as leaders in upholding the rule of law and protecting human rights worldwide.

Under Emi Palmor’s direction, the Israeli Ministry of Justice petitioned Facebook to censor legitimate speech by human rights defenders and journalists because it was deemed politically undesirable. This is contrary to international human rights law standards and to recommendations issued by the United Nations (UN) Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, as well as by digital rights experts and activists, who argue that censorship must be rare and well justified to protect freedom of speech, and that companies should develop tools that ‘prevent or mitigate the human rights risks caused by national laws or demands inconsistent with international standards.’

During Palmor’s time at the Israeli Ministry of Justice (2014-2019), the Ministry established the Israeli Cyber Unit, ……….

Additionally, as documented in Facebook’s Transparency Report, there has been an increase since 2016 in the number of Israeli government requests for data, which now total over 700; 50 percent of these were submitted as ‘emergency requests’ and were not related to legal processes. These are not isolated attempts to restrict Palestinian digital rights and freedom of expression online. Instead, they fall within the context of a widespread and systematic attempt by the Israeli government, particularly through the Cyber Unit formerly headed by Emi Palmor, to silence Palestinians, to remove social media content critical of Israeli policies and practices, and to smear and delegitimize human rights defenders, activists and organizations seeking to challenge Israeli rights abuses against the Palestinian people.

 

Tawakkol Karman on Facebook’s Oversight Board doesn’t please Saudis

May 13, 2020

Yemeni Nobel Peace Prize laureate Tawakkol Karman (AFP)

On 10 May 2020 AlBawaba reported that Facebook had appointed Yemeni Nobel Peace Prize laureate Tawakkol Karman as a member of its newly-launched Oversight Board, an independent committee which will have the final say in whether Facebook and Instagram should allow or remove specific content. [ see also: https://humanrightsdefenders.blog/2020/04/11/algorithms-designed-to-suppress-isis-content-may-also-suppress-evidence-of-human-rights-violations/]

Karman, a human rights activist, journalist and politician, won the Nobel Peace Prize in 2011 for her role in Yemen’s Arab Spring uprising. Her appointment to the Facebook body has led to a sharp reaction on Saudi social media. She said that she has been subjected to a campaign of online harassment by Saudi media ever since she was appointed to Facebook’s Oversight Board. In a Twitter post on Monday she said, “I am subjected to widespread bullying & a smear campaign by #Saudi media & its allies.” Karman also referred to the 2018 killing of Jamal Khashoggi, indicating fears that she could be the target of physical violence.

Tawakkol Karman @TawakkolKarman

I am subjected to widespread bullying & a smear campaign by [Saudi Arabia]’s media & its allies. What is more important now is to be safe from the saw used to cut [Khashoggi]’s body into pieces. I am in my way to … & I consider this as a report to the international public opinion.

However, previous Saudi Twitter campaigns have been proven by social media analysts to be manufactured and unrepresentative of public opinion, with thousands of suspicious Twitter accounts churning out near-identical tweets in support of the Saudi government line. The Yemeni human rights organization SAM for Rights and Liberties condemned the campaign against Karman, saying in a statement that “personalities close to the rulers of Saudi Arabia and the Emirates, as well as newspapers and satellite channels financed by these two regimes had joined a campaign of hate, and this was not a normal manifestation of responsible expression of opinion“.

Tengku Emma – spokesperson for Rohingyas – attacked online in Malaysia

April 28, 2020

In an open letter in the Malay Mail of 28 April 2020, over 50 civil society organisations (CSOs) and human rights activists expressed their shock and condemnation at the mounting racist and xenophobic attacks in Malaysia against the Rohingya people, and especially the targeted cyber attacks against Tengku Emma Zuriana Tengku Azmi, the representative of the European Rohingya Council (https://www.theerc.eu/about/) in Malaysia, and other concerned individuals, for expressing their opinion and support for the rights of the Rohingya people seeking refuge in Malaysia.

[On 21 April 2020, Tengku Emma had her letter regarding her concern over the pushback of the Rohingya boat to sea published in the media. Since then she has received mobbed attacks and intimidation online, especially on Facebook. The attacks targeted her gender in particular, with some including calls for rape. They were also intensely racist, directed both at her personally and at the Rohingya. The following forms of violence have been documented thus far:

● Doxxing – a gross violation involving targeted research into her personal information, which was then published online, including her NRIC, phone number, car number plate, personal photographs, etc.;

● Malicious distribution of a photograph of her son, a minor, and other personal information, often accompanied by aggressive, racist or sexist comments; 

● Threats of rape and other physical harm; and

● Distribution of fake and sexually explicit images. 

….One Facebook post that attacked her has been shared more than 18,000 times since 23 April 2020.

….We are deeply concerned and raise the question of whether there is indeed a concerted effort to spread the inhumane, xenophobic and widespread hate that seems to be proliferating in social media spaces on the issue of Rohingya seeking refuge in Malaysia, as a tool to divert attention from the current COVID-19 crisis response and mitigation.
When the attacks were reported to Facebook by Tengku Emma, no action was taken. Facebook responded by stating that the attacks did not amount to a breach of their Community Standards. With her information being circulated, accompanied by calls of aggression and violence, Tengku Emma was forced to deactivate her Facebook account. She subsequently lodged a police report in fear for her own safety and that of her family. 

There are, to date, no clear protection measures from either the police or Facebook in response to her reports.

It is clear that despite direct threats to her safety and the cumulative nature of the attacks, current reporting mechanisms on Facebook are inadequate to respond, whether in timely or decisive ways, to limit harm. It is also unclear to what extent the police or the Malaysian Communications and Multimedia Commission (MCMC) are willing and able to respond to attacks such as this. 

It has been seven (7) days since Tengku Emma received the first attack, and the attacks have since ballooned to tens of thousands. The only recourse she seems to have is deactivating her Facebook account, while the proponents of hatred and xenophobia continue to act unchallenged. This points to systemic gaps in the policies and laws addressing xenophobia, online gender-based violence and hate speech; even where legislation exists, implementation is far from sufficient. ]

Our demands: 

It must be stressed that the recent emergence and reiteration of xenophobic rhetoric and pushback against the Rohingya, including those already in Malaysia as well as those adrift at sea seeking asylum from Malaysia, is inhumane and against international norms and standards. The current COVID-19 pandemic is not an excuse for Malaysia to abrogate its duty as part of the international community. 

1.         The Malaysian government must, with immediate effect, engage with the United Nations, specifically the United Nations High Commissioner for Refugees (UNHCR), and civil society organisations to find a durable solution in support of the Rohingya seeking asylum in Malaysia on humanitarian grounds. 

2.         We also call on Malaysia to implement the Rabat Plan of Action on the prohibition of advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence, through a multistakeholder framework that promotes freedom of expression based on the principles of gender equality, non-discrimination and diversity.

3. Social media platforms, meanwhile, have the obligation to review and improve their existing standards and guidelines based on the lived realities of women and marginalised communities, who are often the target of online hate speech and violence, including understanding the cumulative impact of mobbed attacks and how attacks manifest in local contexts.

4. We must end all xenophobic and racist attacks and discrimination against Rohingya who seek asylum in Malaysia; and stop online harassment, bullying and intimidation against human rights defenders working on the Rohingya crisis.

For more posts on content moderation: https://humanrightsdefenders.blog/tag/content-moderation/

https://www.malaymail.com/news/what-you-think/2020/04/28/civil-society-orgs-stand-in-solidarity-with-women-human-rights-defender-ten/1861015

Algorithms designed to suppress ISIS content, may also suppress evidence of human rights violations

April 11, 2020

Facebook and YouTube designed algorithms to suppress ISIS content. They're having unexpected side effects.

Illustration by Leo Acadia for TIME
TIME of 11 April 2020 carries a long article by Billy Perrigo entitled “These Tech Companies Managed to Eradicate ISIS Content. But They’re Also Erasing Crucial Evidence of War Crimes”. It is a very interesting piece that clearly spells out the dilemma of suppressing too much or too little on Facebook, YouTube, etc. Algorithms designed to suppress ISIS content are having unexpected side effects, such as suppressing evidence of human rights violations.
…..Images posted by citizen journalist Abo Liath Aljazarawy to his Facebook page (Eye on Alhasakah) showed the ground reality of the Syrian civil war. His page was banned. Facebook confirmed to TIME that Eye on Alhasakah was flagged in late 2019 by its algorithms, as well as users, for sharing “extremist content.” It was then funneled to a human moderator, who decided to remove it. After being notified by TIME, Facebook restored the page in early February, some 12 weeks later, saying the moderator had made a mistake. (Facebook declined to say which specific videos were wrongly flagged, except that there were several.)

The algorithms were developed largely in reaction to ISIS, who shocked the world in 2014 when they began to share slickly-produced online videos of executions and battles as propaganda. Because of the very real way these videos radicalized viewers, the U.S.-led coalition in Iraq and Syria worked overtime to suppress them, and enlisted social networks to help. Quickly, the companies discovered that there was too much content for even a huge team of humans to deal with. (More than 500 hours of video are uploaded to YouTube every minute.) So, since 2017, the companies have been using algorithms to automatically detect extremist content. Early on, those algorithms were crude, and only supplemented the human moderators’ work. But now, following three years of training, they are responsible for an overwhelming proportion of detections. Facebook now says more than 98% of content removed for violating its rules on extremism is flagged automatically. On YouTube, across the board, more than 20 million videos were taken down before receiving a single view in 2019. And as the coronavirus spread across the globe in early 2020, Facebook, YouTube and Twitter announced their algorithms would take on an even larger share of content moderation, with human moderators barred from taking sensitive material home with them.

But algorithms are notoriously worse than humans at understanding one crucial thing: context. Now, as Facebook and YouTube have come to rely on them more and more, even innocent photos and videos, especially from war zones, are being swept up and removed. Such content can serve a vital purpose for both civilians on the ground — for whom it provides vital real-time information — and human rights monitors far away. In 2017, for the first time ever, the International Criminal Court in the Netherlands issued a war-crimes indictment based on videos from Libya posted on social media. And as violence-detection algorithms have developed, conflict monitors are noticing an unexpected side effect, too: these algorithms could be removing evidence of war crimes from the Internet before anyone even knows it exists.

…..
It was an example of how even one mistaken takedown can make the work of human rights defenders more difficult. Yet this is happening on a wider scale: of the 1.7 million YouTube videos preserved by Syrian Archive, a Berlin-based non-profit that downloads evidence of human rights violations, 16% have been removed. A huge chunk were taken down in 2017, just as YouTube began using algorithms to flag violent and extremist content. And useful content is still being removed on a regular basis. “We’re still seeing that this is a problem,” says Jeff Deutsch, the lead researcher at Syrian Archive. “We’re not saying that all this content has to remain public forever. But it’s important that this content is archived, so it’s accessible to researchers, to human rights groups, to academics, to lawyers, for use in some kind of legal accountability.” (YouTube says it is working with Syrian Archive to improve how they identify and preserve footage that could be useful for human rights groups.)

…..

Facebook and YouTube’s detection systems work by using a technology called machine learning, by which colossal amounts of data (in this case, extremist images, videos, and their metadata) are fed to an artificial intelligence adept at spotting patterns. Early types of machine learning could be trained to identify images containing a house, or a car, or a human face. But since 2017, Facebook and YouTube have been feeding these algorithms content that moderators have flagged as extremist — training them to automatically identify beheadings, propaganda videos and other unsavory content.
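To make the mechanics concrete, here is a minimal, purely illustrative sketch of that kind of supervised pipeline: examples labelled by moderators train a classifier, which then scores new uploads automatically. The library (scikit-learn), the text-only features and the confidence threshold are assumptions chosen only to make the idea tangible; the companies’ actual systems are proprietary and vastly more complex.

```python
# Minimal, illustrative sketch of the supervised approach described above:
# content moderators flagged as extremist (label 1) and content they left up
# (label 0) train a classifier, which then scores new uploads automatically.
# This is NOT Facebook's or YouTube's actual system; library, features and
# threshold are assumptions for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: text/metadata of past posts plus moderator labels.
train_texts = ["slickly produced execution propaganda ...",
               "citizen journalist report from Alhasakah ..."]
train_labels = [1, 0]  # 1 = removed as extremist, 0 = left online

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def auto_flag(post_text: str, threshold: float = 0.9) -> bool:
    """Queue a new post for removal or human review if the model is confident."""
    prob_extremist = model.predict_proba([post_text])[0][1]
    return prob_extremist >= threshold
```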

Both Facebook and YouTube are notoriously secretive about what kind of content they’re using to train the algorithms responsible for much of this deletion. That means there’s no way for outside observers to know whether innocent content — like Eye on Alhasakah’s — has already been fed in as training data, which would compromise the algorithm’s decision-making. In the case of Eye on Alhasakah’s takedown, “Facebook said, ‘oops, we made a mistake,’” says Dia Kayyali, the Tech and Advocacy coordinator at Witness, a human rights group focused on helping people record digital evidence of abuses. “But what if they had used the page as training data? Then that mistake has been exponentially spread throughout their system, because it’s going to train the algorithm more, and then more of that similar content that was mistakenly taken down is going to get taken down. I think that is exactly what’s happening now.” Facebook and YouTube, however, both deny this is possible. Facebook says it regularly retrains its algorithms to avoid this happening. In a statement, YouTube said: “decisions made by human reviewers help to improve the accuracy of our automated flagging systems.”
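Kayyali’s worry can be pictured as a retraining loop in which yesterday’s removal decisions – mistakes included – become tomorrow’s training labels. The sketch below extends the hypothetical classifier above; the retraining cadence and data handling are assumptions, and, as noted, both companies deny that wrongly removed content is reused this way.

```python
# Hypothetical retraining loop illustrating the feedback-loop risk described by
# Witness: if a mistaken takedown (e.g. a citizen journalist's video) enters the
# training set labelled "extremist", the model becomes more likely to remove
# similar legitimate content in the next cycle. A thought experiment only.
training_set = list(zip(train_texts, train_labels))  # continues the sketch above

def retraining_cycle(model, recent_decisions):
    """recent_decisions: (post_text, was_removed) pairs from the last review
    cycle, mistakes included -- they become 'ground truth' for the next model."""
    training_set.extend((text, int(removed)) for text, removed in recent_decisions)
    texts, labels = zip(*training_set)
    model.fit(list(texts), list(labels))  # the error now shapes future detections
    return model
```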

…….
That’s because Facebook’s policies allow some types of violence and extremism but not others — meaning decisions on whether to take content down are often based on cultural context. Has a video of an execution been shared by its perpetrators to spread fear? Or by a citizen journalist to ensure the wider world sees a grave human rights violation? A moderator’s answer to those questions could mean that of two identical videos, one remains online and the other is taken down. “This technology can’t yet effectively handle everything that is against our rules,” Saltman said. “Many of the decisions we have to make are complex and involve decisions around intent and cultural nuance which still require human eye and judgement.”

In this balancing act, it’s Facebook’s army of human moderators — many of them outsourced contractors — who carry the pole. And sometimes, they lose their footing. After several of Eye on Alhasakah’s posts were flagged by algorithms and humans alike, a Facebook moderator wrongly decided the page should be banned entirely for sharing violent videos in order to praise them — a violation of Facebook’s rules on violence and extremism, which state that some content can remain online if it is newsworthy, but not if it encourages violence or valorizes terrorism. The nuance, Facebook representatives told TIME, is important for balancing freedom of speech with a safe environment for its users — and keeping Facebook on the right side of government regulations.

Facebook’s set of rules on the topic reads like a gory textbook on ethics: beheadings, decomposed bodies, throat-slitting and cannibalism are all classed as too graphic, and thus never allowed; neither is dismemberment — unless it’s being performed in a medical setting; nor burning people, unless they are practicing self-immolation as an act of political speech, which is protected. Moderators are given discretion, however, if violent content is clearly being shared to spread awareness of human rights abuses. “In these cases, depending on how graphic the content is, we may allow it, but we place a warning screen in front of the content and limit the visibility to people aged 18 or over,” said Saltman. “We know not everyone will agree with these policies and we respect that.”
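Read as a decision procedure, the policy described above might look roughly like the following sketch. The category names and the ordering of the checks are assumptions made only for illustration; Facebook’s actual internal guidance is far more detailed and is not public.

```python
# Rough sketch of the decision logic described in the paragraphs above.
# Categories and their ordering are assumptions for illustration; the real
# internal rules are not public.
from enum import Enum, auto

class Action(Enum):
    REMOVE = auto()
    ALLOW_WITH_WARNING_18_PLUS = auto()  # warning screen, visible to 18+ only
    ALLOW = auto()

def moderate_graphic_content(is_graphic: bool,
                             shared_to_document_abuses: bool,
                             praises_or_encourages_violence: bool) -> Action:
    if praises_or_encourages_violence:
        return Action.REMOVE                      # never allowed, even if "newsworthy"
    if is_graphic:
        if shared_to_document_abuses:
            return Action.ALLOW_WITH_WARNING_18_PLUS
        return Action.REMOVE                      # beheadings, dismemberment, etc.
    return Action.ALLOW
```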

But civilian journalists operating in the heat of a civil war don’t always have time to read the fine print. And conflict monitors say it’s not enough for Facebook and YouTube to make all the decisions themselves. “Like it or not, people are using these social media platforms as a place of permanent record,” says Woods. “The social media sites don’t get to choose what’s of value and importance.”

See also: https://humanrightsdefenders.blog/2019/06/17/social-media-councils-an-answer-to-problems-of-content-moderation-and-distribution/

https://time.com/5798001/facebook-youtube-algorithms-extremism/

Policy response from Human Rights NGOs to COVID-19: Witness

April 5, 2020

In the midst of the COVID-19 crisis, many human rights organisations have been formulating a policy response. While I cannot be comprehensive or undertake comparisons, I will try to give some examples in the course of the coming weeks. Here is the one by Sam Gregory of WITNESS.

…..The immediate implications of coronavirus – quarantine, enhanced emergency powers, restrictions on sharing information –  make it harder for individuals all around the world to document and share the realities of government repression and private actors’ violations.  In states of emergency, authoritarian governments in particular can operate with further impunity, cracking down on free speech and turning to increasingly repressive measures. The threat of coronavirus and its justifying power provides cover for rights-violating laws and measures that history tells us may long outlive the actual pandemic. And the attention on coronavirus distracts focus from rights issues that are both compounded by the impact of the virus and cannot claim the spotlight now.

At WITNESS we are adapting and responding, led by what we learn and hear from the communities of activism, human rights and civic journalism with which we collaborate closely across the world. We will continue to ensure that our guidance on direct documentation helps people document the truth even under trying circumstances and widespread misinformation. We will draw on our experience curating voices and information from closed situations to make sense in confusion. We will provide secure online training while options for physical meeting are curtailed. We will provide meaningful localized guidance on how to document and verify amid an information pandemic; and we will ensure that long-standing struggles are not neglected now when they need it most.

In this crisis moment, it is critical that we enhance the abilities and defend the rights of people who document and share critical realities from the ground. Across the three core thematic issues we currently work on, the need is critical. For issues such as video as evidence from conflict zones, these wars continue on and reach their apex even as coronavirus takes all the attention away. We need only look to the current situation in Idlib, Yemen or in other states of conflict in the Middle East.

For other issues, like state violence against minorities, many people already live in a state of emergency.

Coronavirus response in Complexo do Alemão favela, Rio de Janeiro (credit: Raull Santiago)

Favela residents in Brazil have lived with vastly elevated levels of police killings of civilians for years, and now face a parallel health emergency. Meanwhile immigrant communities in the US have lived in fear of ICE for years and must now weigh their physical health against their physical safety and family integrity. Many communities – in Kashmir and in Rakhine State, Burma – live without access to the internet on an ongoing basis and must still try and share what is happening. And for those who fight for their land rights and environmental justice, coronavirus is both a threat to vulnerable indigenous and poor communities lacking health care, sanitation and state support as well as a powerful distraction from their battle against structural injustice.

A critical part of WITNESS’ strategy is our work to ensure that technology companies’ actions and government regulation of technology are accountable to the most vulnerable members of our global society – marginalized populations globally, particularly those outside the US and Europe, as well as human rights defenders and civic journalists. As responses to coronavirus kick in, there are critical implications in how both civic technology and commercial technology are now being deployed and will be deployed.

Already, coronavirus has acted as an accelerant – like fuel on the fire – to existing trends in technology. Some of these have potentially profound negative impacts for human rights values, human rights documentation and human rights defenders; others may hold a silver lining.

My colleague Dia Kayyali has already written about the sudden shift to much broader algorithmic content moderation that took place last week as Facebook, Twitter, Google and YouTube sent home their human moderators. Over the past years, we’ve seen the implications of both a move to algorithmic moderation and a lack of will and resourcing: from hate speech staying on platforms in vulnerable societies, to the removal of critical war crimes evidence at scale from YouTube, to a lack of accountability for decisions made under the guise of countering terrorist and violent extremist content. But in civil society we did not anticipate that such a broad shift to algorithmic control would happen so rapidly. We must closely monitor this change and push for it not to adversely affect societies and critical struggles worldwide at a moment when they are already threatened by isolation and increased government repression. As Dia suggests, now is the moment for these companies to finally make their algorithms and content moderation processes more transparent to critical civil society experts, as well as to reset how they support and treat the human beings who do the dirty work of moderation.

WITNESS’s work on misinformation and disinformation spans a decade of supporting the production of truthful, trustworthy content in war zones, crises and long-standing struggles for rights. Most recently we have focused on the emerging threats from deepfakes and other forms of synthetic media that enable increasingly realistic fakery of what looks like a real person saying or doing something they never did.

We’ve led the first global expert meetings in Brazil, Southern Africa and Southeast Asia on what rights-respecting, global responses should look like in terms of understanding threats and solutions. Feedback from these sessions has stressed the need for attention to a continuum of audiovisual misinformation, including ‘shallowfakes’, the simpler forms of miscontextualized and lightly edited videos that dominate attempts to confuse and deceive. Right now, social media platforms are unleashing a series of responses to misinformation around coronavirus – from highlighting authoritative health information from country-level and international sources, to curating resources, offering help centers, and taking down a wider range of content that misinforms, deceives or price-gouges, including content from leading politicians such as President Bolsonaro in Brazil. The question we must ask is what we want internet companies to continue doing after the crisis: what should they do about a wider range of misinformation and disinformation outside of health – and what do we not want them to do? We’ll be sharing more about this in the coming weeks.

And where can we find a technological silver lining? One area may be the potential to discover and explore new ways to act in solidarity and agency with each other online. A long-standing area of work at WITNESS is how to use ‘co-presence’ and livestreaming to bridge social distances and help people witness and support one another when physical proximity is not possible.

Our Mobil-Eyes Us project supported favela-based activists to use live video to better engage their audiences to be with them, and provide meaningful support. In parts of the world that benefit from broadband internet access, the absence of arbitrary shutdowns, and the ability to physically isolate, we are seeing an explosion of experimentation in how to operate better in a world that is physically distanced yet still socially proximate. We should learn from this and drive experimentation and action to ensure that even as our freedom of assembly in physical space is curtailed for legitimate (and illegitimate) reasons, our ability to assemble online in meaningful action is not curtailed but enhanced.

In moments of crisis, good and bad actors alike will try to push the agenda that they want. In this moment of acceleration and crisis, WITNESS is committed to ensuring an agenda firmly grounded in, and led by, a human rights vision and the wants and needs of vulnerable communities and human rights defenders worldwide.

Coronavirus and human rights: Preparing WITNESS’s response

 

How social media companies can identify and respond to threats against human rights defenders

October 15, 2019


Image from Shutterstock.

Ginna Anderson writes in the ABA Abroad of 3

..Unfortunately, social media platforms are now a primary tool for coordinated, state-aligned actors to harass, threaten and undermine advocates. Although public shaming, death threats, defamation and disinformation are not unique to the online sphere, the nature of the internet has given them unprecedented potency. Bad actors are able to rapidly deploy their poisoned content on a vast scale. Social media companies have only just begun to recognize, let alone respond to, the problem. Meanwhile, individuals targeted through such coordinated campaigns must painstakingly flag individual pieces of content, navigate opaque corporate structures and attempt to survive the fallout. To address this crisis, companies such as Facebook, Twitter and YouTube must dramatically increase their capacity and will to engage in transparent, context-driven content moderation.

For human rights defenders, the need is urgent. .. Since 2011, the ABA Center for Human Rights (CHR) has ..noted with concern the coordination of “traditional” judicial harassment of defenders by governments, such as frivolous criminal charges or arbitrary detention, with online campaigns of intimidation. State-aligned online disinformation campaigns against individual defenders often precede or coincide with official investigations and criminal charges.

……

While social media companies generally prohibit incitement of violence and hate speech on their platforms, CHR has had to engage in additional advocacy with social media companies requesting the removal of specific pieces of content or accounts that target defenders. This extra advocacy has been required even where the content clearly violates a social media company’s terms of service and despite initial flagging by a defender. The situation is even more difficult where the threatening content is only recognizable with sufficient local and political context. The various platforms all rely on artificial intelligence, to varying degrees, to identify speech that violates their respective community standards. Yet current iterations of artificial intelligence are often unable to adequately evaluate context and intent.

Online intimidation and smear campaigns against defenders often rely on existing societal fault lines to demean and discredit advocates. In Guatemala, CHR recently documented a coordinated social media campaign to defame, harass, intimidate and incite violence against human rights defenders. Several were linked with so-called “net centers,” where users were reportedly paid to amplify hateful content across platforms. Often, the campaigns relied on “coded” language that hark back to Guatemala’s civil war and the genocide of Mayan communities by calling indigenous leaders communists, terrorists and guerrillas.

These terms appear to have largely escaped social media company scrutiny, perhaps because none is a racist slur per se. And yet, the proliferation of these online attacks, as well as the status of those putting out the content, is contributing to a worsening climate of violence and impunity for violence against defenders by specifically alluding to terms used to justify violence against indigenous communities. In 2018 alone, NPR reports that 26 indigenous defenders were murdered in Guatemala. In such a climate, the fear and intimidation felt by those targeted in such campaigns is not hyperbolic but based on their understanding of how violence can be sparked in Guatemala.

In order to address such attacks, social media companies must adopt policies that allow them to designate defenders as temporarily protected groups in countries that are characterized by state-coordinated or state-condoned persecution of activists. This is in line with international law that prohibits states from targeting individuals for serious harm based on their political opinion. To increase their ability to recognize and respond to persecution and online violence against human rights defenders, companies must continue to invest in their context-driven content moderation capacity, including complementing algorithmic monitoring with human content moderators well-versed in local dialects and historical and political context.

Context-driven content moderation should also take into account factors that increase the risk that online behavior will contribute to offline violence by identifying high-risk countries. These factors include a history of intergroup conflict and an overall increase in the number of instances of intergroup violence in the past 12 months; a major national political election in the next 12 months; and significant polarization of political parties along religious, ethnic or racial lines. Countries where these and other risk factors are present call for proactive approaches to identify problematic accounts and coded threats against defenders and marginalized communities, such as those shown in Equality Labs’ “Facebook India” report.
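As a thought experiment, the factors listed above could be combined into a simple screening check like the sketch below. The field names, the equal weighting and the threshold are hypothetical; the article proposes no formula, and no platform is known to use this exact rule.

```python
# Hypothetical screening check combining the risk factors listed above.
# Field names, equal weighting and the threshold are assumptions for
# illustration only.
from dataclasses import dataclass

@dataclass
class CountryRiskProfile:
    history_of_intergroup_conflict: bool
    intergroup_violence_rising_past_12_months: bool
    national_election_within_12_months: bool
    parties_polarized_along_identity_lines: bool

def needs_proactive_moderation(profile: CountryRiskProfile, min_factors: int = 2) -> bool:
    """Flag a country for proactive, context-driven moderation when enough
    of the listed risk factors are present at once."""
    factors = [
        profile.history_of_intergroup_conflict,
        profile.intergroup_violence_rising_past_12_months,
        profile.national_election_within_12_months,
        profile.parties_polarized_along_identity_lines,
    ]
    return sum(factors) >= min_factors
```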

Companies should identify, monitor and be prepared to deplatform key accounts that are consistently putting out denigrating language and targeting human rights defenders. This must go hand in hand with the greater efforts that companies are finally beginning to take to identify coordinated, state-aligned misinformation campaigns. Focusing on the networks of users who abuse the platform, instead of looking solely at how the online abuse affects defenders’ rights online, will also enable companies to more quickly evaluate whether the status of the speaker increases the likelihood that others will take up any implicit call to violence or will be unduly influenced by disinformation.

This abuser-focused approach will also help to decrease the burden on defenders to find and flag individual pieces of content and accounts as problematic. Many of the human rights defenders with whom CHR works are giving up on flagging, a phenomenon we refer to as flagging fatigue. Many have become fatalistic about the level of online harassment they face. This is particularly alarming as advocates targeted online may develop skins so thick that they are no longer able to assess when their actual risk of physical violence has increased.

Finally, it is vital that social media companies pursue, and civil society demand, transparency in content moderation policy and decision-making, in line with the Santa Clara Principles. Put forward in 2018 by a group of academic experts, organizations and advocates committed to freedom of expression online, the principles are meant to guide companies engaged in content moderation and ensure that the enforcement of their policies is “fair, unbiased, proportional and respectful of users’ rights.” In particular, the principles call upon companies to publicly report on the number of posts and accounts taken down or suspended on a regular basis, as well as to provide adequate notice and meaningful appeal to affected users.

CHR routinely supports human rights defenders facing frivolous criminal charges related to their human rights advocacy online or whose accounts and documentation have been taken down absent any clear justification. This contributes to a growing distrust of the companies among the human rights community as apparently arbitrary decisions about content moderation are leaving advocates both over- and under-protected online.

As the U.N. special rapporteur on freedom of expression explained in his 2018 report, content moderation processes must include the ability to appeal the removal, or refusal to remove, content or accounts. Lack of transparency heightens the risk that calls to address the persecution of human rights defenders online will be subverted into justifications for censorship and restrictions on speech that is protected under international human rights law.

A common response when discussing the feasibility of context-driven content moderation is to compare it to reviewing all the grains of sand on a beach. But human rights defenders are not asking for the impossible. We are merely pointing out that some of that sand is radioactive—it glows in the dark, it is lethal, and there is a moral and legal obligation upon those that profit from the beach to deal with it.

Ginna Anderson, senior counsel, joined ABA CHR in 2012. She is responsible for supporting the center’s work to advance the rights of human rights defenders and marginalized communities, including lawyers and journalists at risk. She is an expert in health and human rights, media freedom, freedom of expression and fair trial rights. As deputy director of the Justice Defenders Program since 2013, she has managed strategic litigation, fact-finding missions and advocacy campaigns on behalf of human rights defenders facing retaliation for their work in every region of the world.

http://www.abajournal.com/news/article/how-can-social-media-companies-identify-and-respond-to-threats-against-human-rights-defenders

Social media councils – an answer to problems of content moderation and distribution??

June 17, 2019

In the running debate on the pros and cons of information technology and its complex relation to freedom of information, the NGO ARTICLE 19 came out on 11 June 2019 with an interesting proposal: “Social Media Councils”.


In today’s world, dominant tech companies hold a considerable degree of control over what their users see or hear on a daily basis. Current practices of content moderation on social media offer very little in terms of transparency and virtually no remedy to individual users. The impact that content moderation and distribution (in other words, the composition of users’ feeds and the accessibility and visibility of content on social media) has on the public sphere is not yet fully understood, but legitimate concerns have been expressed, especially in relation to platforms that operate at such a level of market dominance that they can exert decisive influence on public debates.

This raises questions in relation to international laws on freedom of expression and has become a major issue for democratic societies. There are legitimate motives of concern that motivate various efforts to address this issue, particularly regarding the capacity of giant social media platforms to influence the public sphere. However, as with many modern communication technologies, the benefits that individuals and societies derive from the existence of these platforms should not be ignored. The responsibilities of the largest social media companies are currently being debated in legislative, policy and academic circles across the globe, but many of the numerous initiatives that are put forward do not sufficiently account for the protection of freedom of expression.

In this consultation paper, ARTICLE 19 outlines a roadmap for the creation of what we have called Social Media Councils (SMCs), a model for a multi-stakeholder accountability mechanism for content moderation on social media. SMCs aim to provide an open, transparent, accountable and participatory forum to address content moderation issues on social media platforms on the basis of international standards on human rights. The Social Media Council model puts forward a voluntary approach to the oversight of content moderation: participants (social media platforms and all stakeholders) sign up to a mechanism that does not create legal obligations. Its strength and efficiency rely on voluntary compliance by platforms, whose commitment, when signing up, will be to respect and execute the SMC’s decisions (or recommendations) in good faith.

With this document, we present these different options and submit them to a public consultation. The key issues we seek to address through this consultation are:

  • Substantive standards: could SMCs apply international standards directly or should they apply a ‘Code of Human Rights Principles for Content Moderation’?
  • Functions of SMCs: should SMCs have a purely advisory role or should they be able to review individual cases?
  • Global or national: should SMCs be created at the national level or should there be one global SMC?
  • Subject-matter jurisdiction: should SMCs deal with all content moderation decisions of social media companies, or should they have a more specialised area of focus, for example a particular type of content?

The consultation also seeks input on a number of technical issues that will be present in any configuration of the SMC, such as:

  1. Constitution process
  2. Structure
  3. Geographic jurisdiction (for a national SMC)
  4. Rules of procedure (if the SMC is an appeals mechanism)
  5. Funding

An important dimension of the Social Media Council concept is that the proposed structure has no exact precedent: the issue of online content moderation presents a new and challenging area. Only with a certain degree of creativity can the complexity of the issues raised by the creation of this new mechanism be solved.

ARTICLE 19’s objective is to ensure that decisions on these core questions and the solutions to practical problems sought by this initiative are compatible with the requirements of international human rights standards, and are shaped by a diverse range of expertise and perspectives.

Read the consultation paper

Complete the consultation survey

https://www.article19.org/resources/social-media-councils-consultation/