Posts Tagged ‘facebook’

Facebook engineers resign due to Zuckerberg’s political stance

June 6, 2020

Image courtesy of Yang Jing/Unsplash

Yen Palec writes on 6 June 2020 that a group of Facebook employees recently resigned because they do not agree with Mark Zuckerberg’s political stance. Some engineers are condemning the executive for his refusal to act on posts about politics and police brutality.

See: https://humanrightsdefenders.blog/2020/06/03/more-on-facebook-and-twitter-and-content-moderation/

In a blog post, the engineers claim that Facebook has become a “platform that enables politicians to radicalize individuals and glorify violence.” Several employees, many of whom are working from home due to the pandemic, are criticizing the company. While some claim that the First Amendment protects these hate posts, many argue that the company has let things go too far…..

These criticisms are coming from some of Facebook’s early and long-tenured employees. Among those voicing criticism are Dave Willner and Brandee Barker. They claim that the company’s policy may result in a double standard when it comes to political speech…..

In terms of political speech, Twitter’s action set the standard for tech companies to follow. While human rights activists and free speech defenders rallied to Twitter’s side, one major platform did not follow: Facebook. As the biggest social media platform in the world, the company has the power to influence almost any political debate.

https://micky.com.au/facebook-engineers-resign-due-to-zuckerbergs-political-stance/

Cambodian Monk Council defrocks “video monk” Luon Sovath

June 6, 2020

On 4 June 2020 Sun Narin of VOA reported that the Monk Council in Siem Reap province, Cambodia, expelled prominent activist monk and human rights defender Venerable Luon Sovath based on leaked audio recordings purportedly between the monk and a group of women.

Venerable Loun Sovath, an award-winning human rights activist, attends the commemoration of the sixth anniversary of the violent crackdown on garment workers in Phnom Penh, Cambodia, January 3, 2020. (Hul Reaksmey/VOA Khmer)

This is not the first time that Luon Sovath has been in trouble with the ‘authorities’, be they secular or religious, so there could be reasonable doubt about the veracity of the recordings. See:

https://humanrightsdefenders.blog/2014/11/05/cambodian-mea-laureate-2012-luon-sovath-charged-with-incitement/

https://humanrightsdefenders.blog/2014/11/22/martin-ennals-award-jury-expresses-its-concern-about-loun-sovath-martin-ennals-award-laureate-2012/

In a decision dated June 3, the head of the Monk Council in Siem Reap, Chum Kimleng, alleged that Luon Sovath had conversations about “deep love” with women, which were shared on Facebook. The statement added that the conversations were between the monk, a woman and her daughters, alleging that Luon Sovath indulged in sexual activity.

“If Luon Sovath wears monk robes from now on, related authorities take legal actions,” read the announcement, which defrocked the monk effective Wednesday.

The Monk Council claimed to have investigated the video recordings, but did not provide any evidence or forensic analysis with the statement to show that the voice in the recordings belonged to Luon Sovath or that he had acted in violation of religious norms. VOA Khmer attempted to reach Luon Sovath by phone and on his social media accounts on Thursday, but the activist monk did not respond to requests for comment.

There are four videos circulating on Facebook, and they seem to originate from one account, called Srey Da Chi-Kraeng, that was created on May 30. The videos, according to the accompanying text on Facebook, are recordings with four women – a mother and three daughters.

The video recordings are of an unidentified person, or persons, sitting in a dimly-lit room and having a Facebook audio conversation, each ranging from seven to 10 minutes. The videos are shot so that only the person’s hand holding the smartphone can be seen.

The Facebook account involved in the alleged calls uses the image of Luon Sovath and his name in Khmer script, and the voice heard on the calls is male. The conversations are flirtatious in nature and include discussions about giving each other massages.

VOA Khmer could identify two Facebook accounts and one page used by Luon Sovath in the past. One of the accounts, which seems to belong to the venerable monk, was created in 2017 and has the same display picture as that seen in the videotaped Facebook calls.

However, VOA Khmer found another Facebook account, called Luon Sovath, which uses the same display picture and was created on May 29, a day before the Srey Da Chi-Kraeng account.

The Monk Council in Siem Reap could not be reached on Thursday to provide details of their investigation into the recordings.

Bor Bet, a monk and member of Independent Monk Network for Social Justice, received a call from Luon Sovath last week, with the activist monk alleging that “people wanted to mistreat me.”

“He told me that they want to frame him,” Bor Bet said. “[Luon Sovath said] it is a political case and done because we are human right defenders.”

A spokesperson for the Ministry of Culture and Religion, Seng Somony, said the ministry had received the decision to defrock Luon Sovath, rejecting the accusation that the development was politically motivated…

Luon Sovath has been internationally recognized for his work in documenting land rights abuses in Cambodia and was featured in the documentary, A Cambodian Spring. [https://www.theguardian.com/film/2018/may/20/cambodian-spring-review] In 2012, he received the Martin Ennals Award.

https://www.voacambodia.com/a/monk-council-expels-activist-monk-luon-sovath-for-alleged-intimate-relationship/5448949.html

More on Facebook and Twitter and content moderation

June 3, 2020

On 2 June 2020 many media outlets (here Natasha Kuma) wrote about the ‘hot potato’ in the social media debate: which posts are harmful and should be deleted or given a warning. It is interesting to note that the European Commission supported the unprecedented decision of Twitter to mark a message of President Trump about the situation in Minneapolis as violating the company’s rules on the glorification of violence.

EU Commissioner Thierry Breton said: “We welcome the contribution of Twitter, directed to the social network of respected European approach”. Breton also wrote: “Recent events in the United States show that we need to find the right answers to difficult questions. What should be the role of digital platforms in terms of preventing the flow of misinformation during the election, or the crisis in health care? How to prevent the spread of hate speech on the Internet?” Vice-President of the European Commission Věra Jourová, in turn, said that politicians should respond to criticism with facts, not resorting to threats and attacks.

Some employees of Facebook staged a virtual protest against the decision of Mark Zuckerberg not to take any action on the statements of Trump. The leaders of three American civil rights groups, after a conversation with Zuckerberg and COO Sheryl Sandberg, released a joint statement in which they say that human rights defenders were not satisfied with Zuckerberg’s explanation of his position: “He (Zuckerberg) refuses to acknowledge that Facebook is promoting Trump’s call for violence against the protesters. Mark sets a very dangerous precedent.”

————-

Earlier – on 14 May 2020 – David Cohen wrote about Facebook having outlined learnings and steps it has taken as a result of its Human Rights Impact Assessments in Cambodia, Indonesia and Sri Lanka.

Facebook shared results from human rights impact assessments it commissioned in 2018 to evaluate the role of its services in Cambodia, Indonesia and Sri Lanka.

Director of human rights Miranda Sissons and product policy manager for human rights Alex Warofka said in a Newsroom post: “Freedom of expression is a foundational human right that allows for the free flow of information. We’re reminded how vital this is, in particular, as the world grapples with Covid-19, and accurate and authoritative information is more important than ever. Human rights defenders know this and fight for these freedoms every day. For Facebook, which stands for giving people voice, these rights are core to why we exist.”

Sissons and Warofka said that since this research was conducted, Facebook took steps to formalize an approach to determine which countries require more investment, including increased staffing, product changes and further research.

Facebook worked with BSR on the assessment of its role in Cambodia, and with Article One for Indonesia and Sri Lanka.

Recommendations that were similar across all three reports included:

  • Improving corporate accountability around human rights.
  • Updating community standards and improving enforcement.
  • Investing in changes to platform architecture to promote authoritative information and reduce the spread of abusive content.
  • Improving reporting mechanisms and response times.
  • Engaging more regularly and substantively with civil society organizations.
  • Increasing transparency so that people better understand Facebook’s approach to content, misinformation and News Feed ranking.
  • Continuing human rights due diligence.

…Key updates to the social network’s community standards included a policy to remove verified misinformation that contributes to the risk of imminent physical harm, as well as protections for vulnerable groups (veiled women, LGBTQ+ individuals, human rights activists) who would run the risk of offline harm if they were “outed.”

Engagement with civil society organizations was formalized, and local fact-checking partnerships were bolstered in Indonesia and Sri Lanka.

Sissons and Warofka concluded, “As we work to protect human rights and mitigate the adverse impacts of our platform, we have sought to communicate more transparently and build trust with rights holders. We also aim to use our presence in places like Sri Lanka, Indonesia and Cambodia to advance human rights, as outlined in the United Nations Guiding Principles on Business and Human Rights and in Article One and BSR’s assessments. In particular, we are deeply troubled by the arrests of people who have used Facebook to engage in peaceful political expression, and we will continue to advocate for freedom of expression and stronger protections of user data.”

https://www.adweek.com/digital/facebook-details-human-rights-impact-assessments-in-cambodia-indonesia-sri-lanka/

————

But it is not all roses for Twitter either: On 11 May 2020 Frances Eve (deputy director of research at Chinese Human Rights Defenders) wrote about Twitter becoming the “Chinese Government’s Double Weapon: Punishing Dissent and Propagating Disinformation”.

She relates the story of former journalist Zhang Jialong whose “criminal activity,” according to the prosecutor’s charge sheet, is that “from 2016 onwards, the defendant Zhang Jialong used his phone and computer…. many times to log onto the overseas platform ‘Twitter,’ and through the account ‘张贾龙@zhangjialong’ repeatedly used the platform to post and retweet a great amount of false information that defamed the image of the [Chinese Communist] Party, the state, and the government.”…..

Human rights defenders like Zhang are increasingly being accused of using Twitter, alongside Chinese social media platforms like Weibo, WeChat and QQ, to commit the “crime” of “slandering” the Chinese Communist Party or the government by expressing their opinions. As more Chinese human rights activists have tried to express themselves uncensored on Twitter, police have stepped up their monitoring of the platform. Thirty minutes after activist Deng Chuanbin sent a tweet on May 16, 2019 that referenced the 30th anniversary of the Tiananmen Massacre, Sichuan police were outside his apartment building. He has been in pre-trial detention ever since, accused of “picking quarrels and provoking trouble.”

…..While the Chinese government systematically denies Chinese people their right to express themselves freely on the Internet, … the government has aggressively used blocked western social media platforms like Twitter to promote its propaganda and launch disinformation campaigns overseas…

Zhang Jialong’s last tweet was an announcement of the birth of his daughter on June 8, 2019. He should be free and be able to watch her grow up. She deserves to grow up in a country where her father isn’t jailed for his speech.

https://www.vice.com/en_us/article/v7ggvy/chinas-unleashing-a-propaganda-wolfpack-on-twitter-even-though-citizens-go-to-jail-for-tweeting

To see some other posts on content moderation: https://humanrightsdefenders.blog/tag/content-moderation/

Emi Palmor’s selection to Facebook oversight board criticised by Palestinian NGOs

May 16, 2020

After reporting on the Saudi criticism regarding the composition of Facebook’s new oversight board [https://humanrightsdefenders.blog/2020/05/13/tawakkol-karman-on-facebooks-oversight-board-doesnt-please-saudis/], here the position of Palestinian civil society organizations who are very unhappy with the selection of the former General Director of the Israeli Ministry of Justice.

On 15 May 2020, MENAFN – Palestine News Network – reported that Palestinian civil society organizations condemn the selection of Emi Palmor, the former General Director of the Israeli Ministry of Justice, to Facebook’s Oversight Board and raise the alarm about the impact her role will have in further shrinking the space for freedom of expression online and the protection of human rights. While it is important that the Members of the Oversight Board be diverse, it is equally essential that they be known as leaders in upholding the rule of law and protecting human rights worldwide.

Under Emi Palmor’s direction, the Israeli Ministry of Justice petitioned Facebook to censor legitimate speech of human rights defenders and journalists because it was deemed politically undesirable. This is contrary to international human rights law standards and to recommendations issued by the United Nations (UN) Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, as well as by digital rights experts and activists, who argue that censorship must be rare and well justified to protect freedom of speech, and that companies should develop tools that ‘prevent or mitigate the human rights risks caused by national laws or demands inconsistent with international standards.’

During Palmor’s time at the Israeli Ministry of Justice (2014-2019), the Ministry established the Israeli Cyber Unit, ……….

Additionally, as documented in Facebook’s Transparency Report, since 2016 there has been an increase in the number of Israeli government requests for data, which now total over 700, 50 percent of which were submitted as ’emergency requests’ and were not related to legal processes. These are not isolated attempts to restrict Palestinian digital rights and freedom of expression online. Instead, they fall within the context of a widespread and systematic attempt by the Israeli government, particularly through the Cyber Unit formerly headed by Emi Palmor, to silence Palestinians, to remove social media content critical of Israeli policies and practices, and to smear and delegitimize human rights defenders, activists and organizations seeking to challenge Israeli rights abuses against the Palestinian people.

 

Tawakkol Karman on Facebook’s Oversight Board doesn’t please Saudis

May 13, 2020

Nobel Peace Prize laureate Yemeni Tawakkol Karman (AFP)

On 10 May 2020 AlBawaba reported that Facebook had appointed Yemeni Nobel Peace Prize laureate Tawakkol Karman as a member of its newly-launched Oversight Board, an independent committee which will have the final say in whether Facebook and Instagram should allow or remove specific content. [ see also: https://humanrightsdefenders.blog/2020/04/11/algorithms-designed-to-suppress-isis-content-may-also-suppress-evidence-of-human-rights-violations/]

Karman, a human rights activist, journalist and politician, won the Nobel Peace Prize in 2011 for her role in Yemen’s Arab Spring uprising. Her appointment to the Facebook body has led to sharp reactions on Saudi social media. She said that she has been subjected to a campaign of online harassment by Saudi media ever since she was appointed to Facebook’s Oversight Board. In a Twitter post on Monday she said, “I am subjected to widespread bullying & a smear campaign by #Saudi media & its allies.” Karman referred to the 2018 killing of Jamal Khashoggi, indicating fears that she could be the target of physical violence.

Tawakkol Karman @TawakkolKarman

I am subjected to widespread bullying & a smear campaign by #Saudi media & its allies. What is more important now is to be safe from the saw used to cut #Khashoggi’s body into pieces. I am in my way to … & I consider this as a report to the international public opinion.

However, previous Saudi Twitter campaigns have been proven by social media analysts to be manufactured and unrepresentative of public opinion, with thousands of suspicious Twitter accounts churning out near-identical tweets in support of the Saudi government line. The Yemeni human rights organization SAM for Rights and Liberties condemned the campaign against Karman, saying in a statement that “personalities close to the rulers of Saudi Arabia and the Emirates, as well as newspapers and satellite channels financed by these two regimes had joined a campaign of hate, and this was not a normal manifestation of responsible expression of opinion”.

Tengku Emma – spokesperson for Rohingyas – attacked online in Malaysia

April 28, 2020

In an open letter in the Malay Mail of 28 April 2020, over 50 civil society organisations (CSOs) and human rights activists expressed their shock and condemnation at the mounting racist and xenophobic attacks in Malaysia against the Rohingya people, and especially the targeted cyber attacks against Tengku Emma Zuriana Tengku Azmi, the European Rohingya Council’s (https://www.theerc.eu/about/) representative in Malaysia, and other concerned individuals, for expressing their opinion and support for the rights of the Rohingya people seeking refuge in Malaysia.

[On 21 April 2020, Tengku Emma had her letter regarding her concern over the pushback of the Rohingya boat to sea published in the media. Since then she has received mobbed attacks and intimidation online, especially on Facebook. The attacks targeted her gender in particular, with some including calls for rape. They were also intensely racist, directed both at her personally and at the Rohingya. The following forms of violence have been documented thus far:

● Doxxing – a gross violation involving targeted research into her personal information, which was then published online, including her NRIC, phone number, car number plate, personal photographs, etc.;

● Malicious distribution of a photograph of her son, a minor, and other personal information, often accompanied by aggressive, racist or sexist comments; 

● Threats of rape and other physical harm; and

● Distribution of fake and sexually explicit images. 

….One Facebook post that attacked her had been shared more than 18,000 times since 23 April 2020.

….We are deeply concerned and raise the question of whether there is indeed a concerted effort to spread the inhumane, xenophobic and widespread hate that seems to be proliferating in social media spaces on the issue of Rohingya seeking refuge in Malaysia, as a tool to divert attention from the current COVID-19 crisis response and mitigation.

When the attacks were reported to Facebook by Tengku Emma, no action was taken: Facebook responded by stating that the attacks did not amount to a breach of its Community Standards. With her information being circulated, accompanied by calls of aggression and violence, Tengku Emma was forced to deactivate her Facebook account. She subsequently lodged a police report in fear for her own safety and that of her family.

There are, to date, no clear protection measures from either the police or Facebook in response to her reports.

It is clear that despite direct threats to her safety and the cumulative nature of the attacks, current reporting mechanisms on Facebook are inadequate to respond, whether in timely or decisive ways, to limit harm. It is also unclear to what extent the police or the Malaysian Communications and Multimedia Commission (MCMC) are willing and able to respond to attacks such as this. 

It has been seven (7) days since Tengku Emma received her first attack, and the attacks have since ballooned into the tens of thousands. The only recourse she seems to have is deactivating her Facebook account, while the proponents of hatred and xenophobia continue to act unchallenged. This points to systemic gaps in policy and law in addressing xenophobia, online gender-based violence and hate speech; even where legislation exists, implementation is far from sufficient.]

Our demands: 

It must be stressed that the recent emergence and reiteration of xenophobic rhetoric and pushback against the Rohingya, including those already in Malaysia as well as those adrift at sea seeking asylum from Malaysia, is inhumane and against international norms and standards. The current COVID-19 pandemic is not an excuse for Malaysia to abrogate its duty as part of the international community. 

1. The Malaysian government must, with immediate effect, engage with the United Nations, specifically the United Nations High Commissioner for Refugees (UNHCR), and civil society organisations to find a durable solution in support of the Rohingya seeking asylum in Malaysia on humanitarian grounds.

2. We also call on Malaysia to implement the Rabat Plan of Action on the prohibition of advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence, through a multistakeholder framework that promotes freedom of expression based on the principles of gender equality, non-discrimination and diversity.

3. Social media platforms, meanwhile, have the obligation to review and improve their existing standards and guidelines based on the lived realities of women and marginalised communities, who are often the target of online hate speech and violence, including understanding the cumulative impact of mobbed attacks and how attacks manifest in local contexts.

4. We must end all xenophobic and racist attacks and discrimination against Rohingya who seek asylum in Malaysia; and stop online harassment, bullying and intimidation against human rights defenders working on the Rohingya crisis.

For more posts on content moderation: https://humanrightsdefenders.blog/tag/content-moderation/

https://www.malaymail.com/news/what-you-think/2020/04/28/civil-society-orgs-stand-in-solidarity-with-women-human-rights-defender-ten/1861015

Amnesty accuses Facebook of complicity in Vietnamese censorship

April 22, 2020

On 21 April, Reuters reported that Facebook had begun to significantly step up its censorship of “anti-state” posts in Vietnam. This followed pressure from the authorities, including what the company suspects were deliberate restrictions placed on its local servers by state-owned telecommunications companies, which caused Facebook to become unusable for periods of time. The next day Amnesty International demanded that Facebook immediately reverse its decision. “The revelation that Facebook is caving to Viet Nam’s far-reaching demands for censorship is a devastating turning point for freedom of expression in Viet Nam and beyond,” said William Nee, Business and Human Rights Advisor at Amnesty International. “The Vietnamese authorities’ ruthless suppression of freedom of expression is nothing new, but Facebook’s shift in policy makes them complicit.

“Facebook must base its content regulation on international human rights standards for freedom of expression, not on the arbitrary whims of a rights-abusing government. Facebook has a responsibility to respect freedom of expression by refusing to cooperate with these indefensible takedown requests.” The Vietnamese authorities have a long track record of characterizing legitimate criticism as “anti-state” and prosecuting human rights defenders for “conducting propaganda against the state.” The authorities have been actively suppressing online speech amid the COVID-19 pandemic and escalating repressive tactics in recent weeks. “It is shocking that the Vietnamese authorities are further restricting their people’s access to information in the midst of a pandemic. The Vietnamese authorities are notorious for harassing peaceful critics and whistleblowers. This move will keep the world even more in the dark about what is really happening in Viet Nam,” said William Nee.

Facebook’s decision follows years of efforts by Vietnamese authorities to profoundly undermine freedom of expression online, during which they prosecuted an increasing number of peaceful government critics for their online activity and introduced a repressive cybersecurity law that requires technology companies to hand over potentially vast amounts of data, including personal information, and to censor users’ posts. “Facebook’s compliance with these demands sets a dangerous precedent. Governments around the world will see this as an open invitation to enlist Facebook in the service of state censorship. It does all tech firms a terrible disservice by making them vulnerable to the same type of pressure and harassment from repressive governments,” said William Nee…

In a report published last year, Amnesty International found that around 10% of Viet Nam’s prisoners of conscience – individuals jailed solely for peacefully exercising their human rights – were jailed in relation to their Facebook activity. In January 2020, the Vietnamese authorities launched an unprecedented crackdown on social media, including Facebook and YouTube, in an attempt to silence public discussion of a high-profile land dispute in the village of Dong Tam, which has attracted persistent allegations of corruption and led to deadly clashes between security forces and villagers. The crackdown has only intensified since the onset of COVID-19. Between January and mid-March, a total of 654 people were summoned to police stations across Viet Nam to attend “working sessions” with police related to their Facebook posts connected to the virus; 146 of them were subjected to financial fines and the rest were forced to delete their posts. On 15 April, authorities introduced a sweeping new decree, 15/2020, which imposes new penalties on social media content alleged to fall foul of vague and arbitrary restrictions. The decree further empowers the government to force tech companies to comply with arbitrary censorship and surveillance measures.

See also: https://humanrightsdefenders.blog/2020/02/10/28-ngos-ask-eu-parliament-to-reject-cooperation-deal-with-vietnam-on-11-february/

Re Facebook and content moderation see also the Economist piece of 1 February 2020: https://www.economist.com/business/2020/01/30/facebook-unveils-details-of-its-content-oversight-board

https://www.amnesty.org/en/latest/news/2020/04/viet-nam-facebook-cease-complicity-government-censorship/

Algorithms designed to suppress ISIS content may also suppress evidence of human rights violations

April 11, 2020

Facebook and YouTube designed algorithms to suppress ISIS content. They're having unexpected side effects.

Illustration by Leo Acadia for TIME
TIME of 11 April 2020 carries a long article by Billy Perrigo entitled “These Tech Companies Managed to Eradicate ISIS Content. But They’re Also Erasing Crucial Evidence of War Crimes”. It is a very interesting piece that clearly spells out the dilemma of suppressing too much or too little on Facebook, YouTube, etc. Algorithms designed to suppress ISIS content are having unexpected side effects, such as suppressing evidence of human rights violations.

…..Images posted by citizen journalist Abo Liath Aljazarawy to his Facebook page (Eye on Alhasakah) showed the ground reality of the Syrian civil war. His page was banned. Facebook confirmed to TIME that Eye on Alhasakah was flagged in late 2019 by its algorithms, as well as by users, for sharing “extremist content.” It was then funneled to a human moderator, who decided to remove it. After being notified by TIME, Facebook restored the page in early February, some 12 weeks later, saying the moderator had made a mistake. (Facebook declined to say which specific videos were wrongly flagged, except that there were several.)

The algorithms were developed largely in reaction to ISIS, who shocked the world in 2014 when they began to share slickly-produced online videos of executions and battles as propaganda. Because of the very real way these videos radicalized viewers, the U.S.-led coalition in Iraq and Syria worked overtime to suppress them, and enlisted social networks to help. Quickly, the companies discovered that there was too much content for even a huge team of humans to deal with. (More than 500 hours of video are uploaded to YouTube every minute.) So, since 2017, they have been using algorithms to automatically detect extremist content. Early on, those algorithms were crude, and only supplemented the human moderators’ work. But now, following three years of training, they are responsible for an overwhelming proportion of detections. Facebook now says more than 98% of content removed for violating its rules on extremism is flagged automatically. On YouTube, across the board, more than 20 million videos were taken down before receiving a single view in 2019. And as the coronavirus spread across the globe in early 2020, Facebook, YouTube and Twitter announced their algorithms would take on an even larger share of content moderation, with human moderators barred from taking sensitive material home with them.

But algorithms are notoriously worse than humans at understanding one crucial thing: context. Now, as Facebook and YouTube have come to rely on them more and more, even innocent photos and videos, especially from war zones, are being swept up and removed. Such content can serve a vital purpose for both civilians on the ground — for whom it provides vital real-time information — and human rights monitors far away. In 2017, for the first time ever, the International Criminal Court in the Netherlands issued a war-crimes indictment based on videos from Libya posted on social media. And as violence-detection algorithms have developed, conflict monitors are noticing an unexpected side effect, too: these algorithms could be removing evidence of war crimes from the Internet before anyone even knows it exists.

…..
It was an example of how even one mistaken takedown can make the work of human rights defenders more difficult. Yet this is happening on a wider scale: of the 1.7 million YouTube videos preserved by Syrian Archive, a Berlin-based non-profit that downloads evidence of human rights violations, 16% have been removed. A huge chunk were taken down in 2017, just as YouTube began using algorithms to flag violent and extremist content. And useful content is still being removed on a regular basis. “We’re still seeing that this is a problem,” says Jeff Deutsch, the lead researcher at Syrian Archive. “We’re not saying that all this content has to remain public forever. But it’s important that this content is archived, so it’s accessible to researchers, to human rights groups, to academics, to lawyers, for use in some kind of legal accountability.” (YouTube says it is working with Syrian Archive to improve how they identify and preserve footage that could be useful for human rights groups.)

…..

Facebook and YouTube’s detection systems work by using a technology called machine learning, by which colossal amounts of data (in this case, extremist images, videos, and their metadata) are fed to an artificial intelligence adept at spotting patterns. Early types of machine learning could be trained to identify images containing a house, or a car, or a human face. But since 2017, Facebook and YouTube have been feeding these algorithms content that moderators have flagged as extremist — training them to automatically identify beheadings, propaganda videos and other unsavory content.
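
To make the pattern-spotting concrete, here is a minimal, hypothetical sketch of this kind of supervised learning, written as a toy Python text classifier with scikit-learn. The tiny dataset and the choice of features are invented for illustration; the production systems at Facebook and YouTube are vastly larger and not public.

```python
# A minimal, hypothetical sketch of moderation-style machine learning:
# moderator-labeled examples are fed to a model that learns to flag
# similar content. All data below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Captions previously labeled by (imaginary) human moderators.
texts = [
    "execution video glorifying fighters",   # removed by moderators
    "join the fighters propaganda reel",     # removed by moderators
    "family picnic by the river",            # left up
    "recipe for lentil soup",                # left up
]
labels = [1, 1, 0, 0]  # 1 = remove, 0 = keep

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)          # turn text into word-weight vectors
model = LogisticRegression().fit(X, labels)  # learn which patterns were removed

# The model scores new content purely on surface patterns it has seen.
# A citizen journalist documenting an execution shares much of the same
# vocabulary as propaganda; the model cannot see the differing intent.
post = ["citizen journalist films execution video to document war crimes"]
print(model.predict_proba(vectorizer.transform(post))[0][1])  # P(remove)
```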

Both Facebook and YouTube are notoriously secretive about what kind of content they’re using to train the algorithms responsible for much of this deletion. That means there’s no way for outside observers to know whether innocent content — like Eye on Alhasakah’s — has already been fed in as training data, which would compromise the algorithm’s decision-making. In the case of Eye on Alhasakah’s takedown, “Facebook said, ‘oops, we made a mistake,’” says Dia Kayyali, the Tech and Advocacy coordinator at Witness, a human rights group focused on helping people record digital evidence of abuses. “But what if they had used the page as training data? Then that mistake has been exponentially spread throughout their system, because it’s going to train the algorithm more, and then more of that similar content that was mistakenly taken down is going to get taken down. I think that is exactly what’s happening now.” Facebook and YouTube, however, both deny this is possible. Facebook says it regularly retrains its algorithms to avoid this happening. In a statement, YouTube said: “decisions made by human reviewers help to improve the accuracy of our automated flagging systems.”
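
Kayyali’s worry about training data can be illustrated by extending the same toy classifier: if a mistaken takedown is recycled as a positive training example, the retrained model scores similar documentation as even more removable. This is a hypothetical sketch of the feedback loop as described, not a claim about how Facebook actually retrains its systems.

```python
# Hypothetical sketch of the training-data feedback loop described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def train(texts, labels):
    vec = TfidfVectorizer()
    return vec, LogisticRegression().fit(vec.fit_transform(texts), labels)

texts = ["execution video glorifying fighters",
         "join the fighters propaganda reel",
         "family picnic by the river",
         "recipe for lentil soup"]
labels = [1, 1, 0, 0]  # 1 = remove, 0 = keep

journalism = "journalist films execution video to document war crimes"

vec, model = train(texts, labels)
before = model.predict_proba(vec.transform([journalism]))[0][1]

# A moderator wrongly removes the journalist's post, and the takedown is
# fed back in as a "correct" removal label in the next round of training...
texts.append(journalism)
labels.append(1)
vec, model = train(texts, labels)
after = model.predict_proba(vec.transform([journalism]))[0][1]

# ...so similar documentation now scores as more removable than before.
print(f"P(remove) before: {before:.2f}, after: {after:.2f}")
```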

…….
That’s because Facebook’s policies allow some types of violence and extremism but not others — meaning decisions on whether to take content down are often based on cultural context. Has a video of an execution been shared by its perpetrators to spread fear? Or by a citizen journalist to ensure the wider world sees a grave human rights violation? A moderator’s answer to those questions could mean that of two identical videos, one remains online and the other is taken down. “This technology can’t yet effectively handle everything that is against our rules,” Saltman said. “Many of the decisions we have to make are complex and involve decisions around intent and cultural nuance which still require human eye and judgement.”

In this balancing act, it’s Facebook’s army of human moderators — many of them outsourced contractors — who carry the pole. And sometimes, they lose their footing. After several of Eye on Alhasakah’s posts were flagged by algorithms and humans alike, a Facebook moderator wrongly decided the page should be banned entirely for sharing violent videos in order to praise them — a violation of Facebook’s rules on violence and extremism, which state that some content can remain online if it is newsworthy, but not if it encourages violence or valorizes terrorism. The nuance, Facebook representatives told TIME, is important for balancing freedom of speech with a safe environment for its users — and keeping Facebook on the right side of government regulations.

Facebook’s set of rules on the topic reads like a gory textbook on ethics: beheadings, decomposed bodies, throat-slitting and cannibalism are all classed as too graphic, and thus never allowed; neither is dismemberment — unless it’s being performed in a medical setting; nor burning people, unless they are practicing self-immolation as an act of political speech, which is protected. Moderators are given discretion, however, if violent content is clearly being shared to spread awareness of human rights abuses. “In these cases, depending on how graphic the content is, we may allow it, but we place a warning screen in front of the content and limit the visibility to people aged 18 or over,” said Saltman. “We know not everyone will agree with these policies and we respect that.”

But civilian journalists operating in the heat of a civil war don’t always have time to read the fine print. And conflict monitors say it’s not enough for Facebook and YouTube to make all the decisions themselves. “Like it or not, people are using these social media platforms as a place of permanent record,” says Woods. “The social media sites don’t get to choose what’s of value and importance.”

See also: https://humanrightsdefenders.blog/2019/06/17/social-media-councils-an-answer-to-problems-of-content-moderation-and-distribution/

https://time.com/5798001/facebook-youtube-algorithms-extremism/

Beyond WhatsApp and NSO – how human rights defenders are targeted by cyberattacks

May 14, 2019

Several reports have shown Israeli technology being used by Gulf states against their own citizens (AFP/File photo)

NSO Group has been under increased scrutiny after a series of reports about the ways in which its Pegasus spyware programme has been used against prominent human rights activists. Last year, a report by Citizen Lab, a research group at the University of Toronto, showed that human rights defenders in Saudi Arabia, the United Arab Emirates and Bahrain were targeted with the software.

In October, US whistleblower Edward Snowden said Pegasus had been used by the Saudi authorities to surveil journalist Jamal Khashoggi before his death. “They are the worst of the worst,” Snowden said of the firm. Amnesty International said in August that a staffer’s phone was infected with the Pegasus software via a WhatsApp message.

——-

Friedhelm Weinberg‘s piece of 1 May is almost prescient and contains good, broader advice:

When activists open their inboxes, they find more than the standard spam messages telling them they’ve finally won the lottery. Instead, they receive highly sophisticated emails that look like they are real, purport to be from friends and invite them to meetings that are actually happening. The catch is: at one point the emails will attempt to trick them.

1. Phishing for accounts, not compliments

In 2017, the Citizen Lab at the University of Toronto and the Egyptian Initiative for Personal Rights documented what they called the “Nile Phish” campaign, a set of emails luring activists into giving access to their most sensitive accounts – email and file-sharing tools in the cloud. The Seoul-based Transitional Justice Working Group recently warned on its Facebook page about a very similar campaign. As attacks like these have mounted in recent years, civil society activists have come together to defend themselves, support each other and document what is happening. The Rarenet is a global group of individuals and organizations that provides emergency support for activists – but together it also works to educate civil society actors to dodge attacks before damage is done. The Internet Freedom Festival is a gathering dedicated to supporting people at risk online, bringing together more than 1,000 people from across the globe. The emails from campaigns like Nile Phish may be cunning and carefully crafted to target individual activists – but they are not cutting-edge technology. Protection is stunningly simple: do nothing. Simply don’t click the link and enter information – as hard as that is when you are promised something in return.

Often digital security is about being calm and controlled as much as it is about being savvy in the digital sphere. And that is precisely what makes it difficult for passionate and stressed activists!

2. The million-dollar virus

Unfortunately, calm is not always enough. Activists have also been targeted with sophisticated spyware that is incredibly expensive to procure and difficult to spot. Ahmed Mansoor, a human-rights defender from the United Arab Emirates, received messages with malware (commonly known as computer viruses) that cost one million dollars on the grey market, where unethical hackers and spyware firms meet. [See also: https://humanrightsdefenders.blog/2016/08/29/apple-tackles-iphone-one-tap-spyware-flaws-after-mea-laureate-discovers-hacking-attempt/]

Rights defender Ahmed Mansoor in Dubai in 2011, a day after he was pardoned following a conviction for insulting UAE leaders. He is now in prison once more. Image: Reuters/Nikhil Monteiro

3. Shutting down real news with fake readers

Both phishing and malware are attacks directed against the messengers, but there are also attacks against the message itself. This is typically achieved by directing hordes of fake readers to the real news – that is, by sending so many requests through bot visitors to websites that the servers break down under the load. Commonly referred to as “denial of service” attacks, these bot armies have also earned their own response from civil society. Specialised packages from Virtual Road or Deflect sort fake visitors from real ones to make sure the message stays up.
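
As a rough illustration of how “sorting fake visitors from real ones” can work, here is a minimal sketch of a sliding-window rate limiter, one of the simplest such filters: clients that request pages far faster than any human reader would are served a block page instead. The window size and threshold below are invented, and real services like Deflect or Virtual Road use many more signals than request rate alone.

```python
# Minimal, illustrative sketch of sorting "fake visitors" from real ones:
# a sliding-window rate limiter that blocks clients sending requests far
# faster than any human reader would. Thresholds here are invented.
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # look at the last 10 seconds of traffic per client
MAX_REQUESTS = 20     # generous ceiling for a human browsing a news site

recent = defaultdict(deque)   # client IP -> timestamps of its recent requests

def allow_request(ip: str, now: float) -> bool:
    q = recent[ip]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()           # forget requests that fell out of the window
    if len(q) >= MAX_REQUESTS:
        return False          # over the limit: likely a bot, serve a block page
    q.append(now)
    return True

# A bot firing 100 requests within one second gets through only 20 times:
allowed = sum(allow_request("198.51.100.7", now=i / 100) for i in range(100))
print(allowed)  # -> 20
```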

 

How distributed denial of service (DDoS) attacks have grown over time. Image: Kinsta.com; data from EasyDNS

Recently, these companies also started investigating who is behind these attacks – a notoriously difficult task, because it is so easy to hide traces online. Interestingly, whenever Virtual Road were so confident in their findings that they publicly named attackers, the attacks stopped. Immediately. Online, as offline, one of the most effective ways to ensure that attacks end is to name the offenders, whether they are cocky kids or governments seeking to stifle dissent. But more important than shaming attackers is supporting civil society’s resilience and capacity to weather the storms. For this, digital leadership, trusted networks and creative collaborations between technologists and governments will pave the way to an internet where the vulnerable are protected and spaces for activism are thriving.

——–

Big Brother Awards try to identify risks for human rights defenders

February 24, 2019