Posts Tagged ‘information technology companies’

What to do about global spyware abuse?

January 6, 2021

Mohamed EL Bashir, a Public Policy & Internet Governance Strategist, wrote a lengthy but informative piece about the persistent problem of commercial spyware abuse: “Reshaping Cyberspace: Beyond the Emerging Online Mercenaries and the Aftermath of SolarWinds”, in CircleID, 5 January 2021.

The piece starts off with some concrete cases such as Ahmed Mansoor [see https://humanrightsdefenders.blog/2016/08/29/apple-tackles-iphone-one-tap-spyware-flaws-after-mea-laureate-discovers-hacking-attempt/] and Rafael Cabrera [see: https://www.nytimes.com/2017/06/21/world/americas/mexico-pena-nieto-spying-hacking-surveillance.html]. In 2018, a close confidant of Jamal Khashoggi was targeted in Canada by a fake package notification, resulting in the infection of his iPhone.

…Citizen Lab has tracked and documented more than two dozen cases using similar intrusion and spyware techniques. We don’t know the number of victims or their stories, as not all vectors are publicly known. Once spyware is implanted, it sends a command and control (C&C) server regular, scheduled updates designed to avoid extensive bandwidth consumption. These tools are built to be stealthy: they evade forensic analysis, avoid detection by antivirus software, and can be deactivated and removed by operators.
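
The regular, low-bandwidth check-in pattern described above is also one of the few signals defenders’ technologists can hunt for. As a purely illustrative sketch (mine, not from the article, with every name and threshold a hypothetical assumption), a screen over a device’s network logs might flag destinations contacted at near-constant intervals with tiny payloads:

```python
from statistics import mean, pstdev

def looks_like_beaconing(events, max_jitter=0.1, max_avg_bytes=4096):
    """events: (timestamp_seconds, bytes_sent) pairs for ONE destination,
    sorted by time. Flags near-constant intervals with small payloads."""
    if len(events) < 5:                       # too few samples to judge
        return False
    times = [t for t, _ in events]
    intervals = [b - a for a, b in zip(times, times[1:])]
    avg = mean(intervals)
    if avg == 0:
        return False
    jitter = pstdev(intervals) / avg          # relative spread of intervals
    avg_bytes = mean(size for _, size in events)
    return jitter < max_jitter and avg_bytes < max_avg_bytes

# Toy example: a host contacting one server every ~600 s with ~900-byte updates
log = [(i * 600 + drift, 900) for i, drift in enumerate([0, 3, -2, 5, 1, -4])]
print(looks_like_beaconing(log))  # True: regular, low-volume check-ins
```

Real spyware randomizes its schedule precisely to defeat screens this naive, which is why the article stresses that these tools evade forensic analysis; the sketch only makes the cat-and-mouse concrete.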

Once successfully implanted on a victim’s phone using an exploit chain such as Trident, spyware can actively record or passively gather a variety of data about the device. By providing full access to the phone’s files, messages, microphone, and video camera, the operator can turn the device into a silent digital spy in the target’s pocket.

These attacks, and many others that go unreported, show that spyware tools and the intrusion business carry significant potential for abuse, and that bad actors and governments cannot resist the temptation to use such tools against political opponents, journalists, and human rights defenders. Lacking operational due diligence, spyware companies neither consider the impact of their tools on the civilian population nor comply with human rights policies. [see: https://humanrightsdefenders.blog/2020/07/20/the-ups-and-downs-in-sueing-the-nso-group/]

The growing privatization of cybersecurity attacks arises through a new generation of private companies: online mercenaries. The phenomenon has reached the point of acquiring its own acronym, PSOAs, for private sector offensive actors. This harmful industry is quickly growing into a multi-billion-dollar global technology market, giving nation-states and bad actors the option to buy the tools needed to launch sophisticated cyberattacks. It adds another significant element to the cybersecurity threat landscape.

These companies claim that they have strict controls over how their spyware is sold and used and have robust company oversight mechanisms to prevent abuse. However, the media and security research groups have consistently presented a different and more troubling picture of abuse…

The growing abuse of surveillance technology by authoritarian regimes with poor human rights records is a disturbing global trend. It has drawn attention to how the availability and abuse of highly intrusive surveillance technology shrink the already limited space in cyberspace where vulnerable people can express their views without facing repercussions such as imprisonment, torture, or killing.

Solving this global problem will be neither easy nor simple. It will require a strong multi-stakeholder coalition, including governments, civil society, and the private sector, to rein in what is now a “Wild West” of unmitigated abuse in cyberspace. With powerful surveillance and intrusion technology roaming free of restrictions, there is nowhere to hide, and no one will be safe from those who wish to cause harm online or offline. Failing to act urgently, by banning or restricting the use of these tools, will threaten democracy, the rule of law, and human rights worldwide.

On December 7, 2020, the US National Security Agency issued a cybersecurity advisory warning that “Russian State-sponsored actors” were exploiting a vulnerability in the digital workspace software developed by VMware (the VMware Access and VMware Identity Manager products) using compromised credentials.

The next day, on December 8, the cybersecurity firm FireEye announced the theft of its “Red Team” tools that it uses to identify vulnerabilities in its customers’ systems. Several prominent media organizations reported an ongoing software supply-chain attack against SolarWinds, the company whose products are used by over 300,000 corporate and government customers — including most of the Fortune 500 companies, Los Alamos National Laboratory (which has nuclear weapons responsibilities), and Boeing.

Malware called SUNBURST infected SolarWinds’ customers’ systems when they updated the company’s Orion software.

On December 30, 2020, Reuters reported that the hacking group behind the SolarWinds compromise was able to break into Microsoft Corp and access some of its source code. This new development sent a worrying signal about the cyberattack’s ambition and intentions.

Microsoft president Brad Smith said the cyber assault was effectively an attack on the US, its government, and other critical institutions, and demonstrated how dangerous the cyberspace landscape had become.

Based on telemetry gathered from Microsoft’s Defender antivirus software, Smith said the nature of the attack and the breadth of the supply-chain vulnerability were clear to see. Microsoft has identified at least 40 of its customers that the group targeted and compromised. Most are understood to be based in the US, but Microsoft’s work has also uncovered victims in Belgium, Canada, Israel, Mexico, Spain, the UAE, and the UK, including government agencies, NGOs, and cybersecurity and technology firms.

Although the ongoing operation appears to be aimed at intelligence gathering, no damage had been reported from the attacks as of this article’s publication date. Still, this is not “espionage as usual.” It created a serious technological vulnerability in the supply chain, and it has shaken trust in the reliability of the world’s most advanced critical infrastructure, all to advance one nation’s intelligence agency.

As expected, the Kremlin has denied any role in the recent cyberattacks on the United States. President Vladimir Putin’s spokesman Dmitry Peskov said the American accusations that Russia was behind a major security breach lacked evidence. The Russian denial raised the question of an accountability gap in attributing cyberattacks to a nation-state or specific actor. Determining who is to blame for a cyberattack is a significant challenge, as cyberspace is intrinsically different from the kinetic world. There is no physical activity to observe, and technological advancements have made perpetrators harder to track and allowed them to remain seemingly anonymous when conducting an attack (Brantly, 2016).

To achieve a legitimate attribution, it is not enough to identify the suspects, i.e., the actual persons involved in the cyberattacks; one must also determine whether the attacks had a motive, political or economic, and whether the actors were supported by a government or a non-state actor, with enough evidence to support diplomatic, military, or legal options.

A recognized attribution can enhance accountability in cyberspace and deter bad actors from launching cyberattacks, especially on civilian infrastructures like transportation systems, hospitals, power grids, schools, and civil society organizations.

According to Article 2 of the United Nations’ Articles on the Responsibility of States for Internationally Wrongful Acts, to constitute an “internationally wrongful act,” a cyber operation generally must (1) be attributable to a state and (2) breach an obligation owed to another state. State-sponsored cyberattacks of this kind also violate the international law principles of necessity and proportionality.

Governments need to consider a multi-stakeholder approach to help resolve the accountability gap in cyberspace. Some states continue to believe that ensuring international security and stability in cyberspace – cyberpeace – is exclusively the responsibility of states. In practice, cyberspace is designed, deployed, and managed primarily by non-state actors: tech companies, Internet Service Providers (ISPs), standards organizations, and research institutions. It is important to engage them in efforts to ensure the stability of cyberspace.

I will name two examples of multi-stakeholder initiatives to secure cyberspace. The first is the Global Commission on the Stability of Cyberspace (GCSC), which consisted of 28 commissioners from 16 countries, including government officials, and developed principles and norms that states can adopt to ensure a stable and secure cyberspace. For example, it called on states and non-state actors not to pursue, support, or allow cyber operations intended to disrupt the technical infrastructure essential to elections, referenda, or plebiscites.

The second, the CyberPeace Institute, is a newly established global NGO that marked its first anniversary in December 2020 and has the important goal of protecting the most vulnerable and achieving peace and justice in cyberspace. The institute began its operations by focusing on the healthcare industry, which was under daily attack during the COVID-19 pandemic. As those cyberattacks were a direct threat to human life, the institute called upon governments to stop cyber operations against medical facilities and to protect healthcare.

I believe that there is an opportunity for the states to forge agreements to curb cyberattacks on civilian and private sector infrastructure and to define what those boundaries and redlines should be.

SolarWinds and the recent attacks on healthcare facilities are important milestones: they offer a live example of the paramount risks of a completely unchecked and unregulated cyberspace environment. But they will only prove to be a moment of true and more fundamental reckoning if governments and the other stakeholders each play their part, capitalizing on these recent events to force legal, technological, and institutional reform and real change in cyberspace.

The effects of the SolarWinds attack will impact not only US government agencies but also businesses and civilians who are currently less secure online. Bad actors are becoming more aggressive, bold, and reckless, and they continue to cross red lines we once considered norms in cyberspace.

Vulnerable civilians are the targets of intrusion tools and spyware in this new cyberspace Wild West. Clearly, additional legal and regulatory scrutiny of private-sector offensive actors, or PSOAs, is required. If PSOA companies are unwilling to recognize the role their products play in undermining human rights, or to address these urgent concerns, then intervention by governments and other stakeholders is needed.

We no longer have the privilege of ignoring the growing impact of cyberattacks on international law, geopolitics, and civilians. We need a strong, global cybersecurity response: a courageous multi-stakeholder agenda that redefines historical assumptions and biases about the possibility of establishing new laws and norms to govern cyberspace.

Changes and reforms are achievable if there is the will. The Snowden revelations and the outcry that followed resulted not only in massive changes to the domestic regulation of US foreign intelligence; they also shaped changes at the European Court of Human Rights, the Court of Justice of the European Union, and the UN. The Human Rights Committee also helped spur the creation of a new UN Special Rapporteur on the Right to Privacy, based in Geneva.

The new cyberspace laws, rules, and norms require a multi-stakeholder dialogue process that involves participants from tech companies, academia, civil society, and international law in global discussions that can be facilitated by governments or supported by a specialized international intergovernmental organization.

Sources and References:

http://www.circleid.com/posts/20210105-reshaping-cyberspace-beyond-the-emerging-online-mercenaries/

Tech giants join legal battle against NSO

December 22, 2020

Raphael Satter reports on 22 December 2020 for Reuters that tech giants Google, Cisco and Dell on Monday joined Facebook’s legal battle against hacking company NSO, filing an amicus brief in federal court that warned that the Israeli firm’s tools were “powerful, and dangerous.”

The brief, filed before the U.S. Court of Appeals for the Ninth Circuit, opens up a new front in Facebook’s lawsuit against NSO, which it filed last year after it was revealed that the cyber surveillance firm had exploited a bug in Facebook-owned instant messaging program WhatsApp to help surveil more than 1,400 people worldwide. See also: https://humanrightsdefenders.blog/2020/07/20/the-ups-and-downs-in-sueing-the-nso-group/

NSO has argued that, because it sells digital break-in tools to police and spy agencies, it should benefit from “sovereign immunity” – a legal doctrine that generally insulates foreign governments from lawsuits. NSO lost that argument in the Northern District of California in July and has since appealed to the Ninth Circuit to have the ruling overturned.

Microsoft, Alphabet-owned Google, Cisco, Dell Technologies-owned VMware and the Washington-based Internet Association joined forces with Facebook to argue against that, saying that awarding sovereign immunity to NSO would lead to a proliferation of hacking technology and “more foreign governments with powerful and dangerous cyber surveillance tools.”

That in turn “means dramatically more opportunities for those tools to fall into the wrong hands and be used nefariously,” the brief argues.

NSO – which did not immediately return a message seeking comment – argues that its products are used to fight crime. But human rights defenders and technologists at places such as Toronto-based Citizen Lab and London-based Amnesty International have documented cases in which NSO technology has been used to target reporters, lawyers and even nutritionists lobbying for soda taxes.

Citizen Lab published a report on Sunday alleging that NSO’s phone-hacking technology had been deployed to hack three dozen phones belonging to journalists, producers, anchors, and executives at Qatar-based broadcaster Al Jazeera, as well as a device belonging to a reporter at London-based Al Araby TV.

NSO’s spyware has also been linked to the slaying of Washington Post journalist Jamal Khashoggi, who was murdered and dismembered in the Saudi consulate in Istanbul in 2018. Khashoggi’s friend, dissident video blogger Omar Abdulaziz, has long argued that it was the Saudi government’s ability to see their WhatsApp messages that led to his death.

NSO has denied hacking Khashoggi, but has so far declined to comment on whether its technology was used to spy on others in his circle.

https://www.reuters.com/article/us-facebook-nso-cyber/microsoft-google-cisco-dell-join-legal-battle-against-hacking-company-nso-idUSKBN28V2WX?il=0

Arab Spring: information technology platforms no longer support human rights defenders in the Middle East and North Africa

December 18, 2020

Jason Kelley in the Electronic Frontier Foundation (EFF) of 17 December 2020 summarizes a joint statement by over 30 NGOs saying that the platform policies and content moderation procedures of the tech giants now too often lead to the silencing and erasure of critical voices from across the region. Arbitrary and non-transparent account suspensions and removals of political and dissenting speech have become so frequent and systematic in the region that they cannot be dismissed as isolated incidents or the result of transitory errors in automated decision-making.

Young people protest in Morocco, 2011, photo by Magharebia

This year is the tenth anniversary of what became known as the “Arab Spring”, in which activists and citizens across the Middle East and North Africa (MENA) used social media to document the conditions in which they lived, to push for political change and social justice, and to draw the world’s attention to their movement. For many, it was the first time they had seen how the Internet could play a role in pushing for human rights across the world. Emerging social media platforms like Facebook, Twitter and YouTube all basked in the reflected glory of press coverage that centered their part in the protests, often to the exclusion of those who were actually on the streets. The years after the uprisings failed to live up to the optimism of the time. Offline, the authoritarian backlash against the democratic protests has meant that many of those who fought for justice a decade ago are still fighting now.

The letter asks for several concrete measures to ensure that users across the region are treated fairly and are able to express themselves freely:

  • Do not engage in arbitrary or unfair discrimination.
  • Invest in the regional expertise to develop and implement context-based content moderation decisions aligned with human rights frameworks.
  • Pay special attention to cases arising from war and conflict zones.
  • Preserve restricted content related to cases arising from war and conflict zones.
  • Go beyond public apologies for technical failures: provide greater transparency and notice, and offer meaningful and timely appeals for users by implementing the Santa Clara Principles on Transparency and Accountability in Content Moderation.

Content moderation policies are not only critical to ensuring robust political debate; they are key to expanding and protecting human rights. Ten years out from those powerful protests, it’s clear that authoritarian and repressive regimes will do everything in their power to stop free and open expression. Platforms have an obligation to note and act on the effects content moderation has on oppressed communities, in MENA and elsewhere. [see also: https://humanrightsdefenders.blog/2020/06/03/more-on-facebook-and-twitter-and-content-moderation/]

In 2012, Mark Zuckerberg, CEO and founder of Facebook, wrote:

By giving people the power to share, we are starting to see people make their voices heard on a different scale from what has historically been possible. These voices will increase in number and volume. They cannot be ignored. Over time, we expect governments will become more responsive to issues and concerns raised directly by all their people rather than through intermediaries controlled by a select few.

Instead, governments around the world have chosen authoritarianism, and platforms have contributed to the repression. It’s time for that to end.

Read the full letter demanding that Facebook, Twitter, and YouTube stop silencing critical voices from the Middle East and North Africa, reproduced below:

17 December 2020

Open Letter to Facebook, Twitter, and YouTube: Stop silencing critical voices from the Middle East and North Africa

Ten years ago today, 26-year-old Tunisian street vendor Mohamed Bouazizi set himself on fire in protest over injustice and state marginalization, igniting mass uprisings in Tunisia, Egypt, and other countries across the Middle East and North Africa.

As we mark the 10th anniversary of the Arab Spring, we, the undersigned activists, journalists, and human rights organizations, have come together to voice our frustration and dismay at how platform policies and content moderation procedures all too often lead to the silencing and erasure of critical voices from marginalized and oppressed communities across the Middle East and North Africa.

The Arab Spring is historic for many reasons, and one of its outstanding legacies is how activists and citizens have used social media to push for political change and social justice, cementing the internet as an essential enabler of human rights in the digital age.   

Social media companies boast of the role they play in connecting people. As Mark Zuckerberg famously wrote in his 2012 Founder’s Letter:

“By giving people the power to share, we are starting to see people make their voices heard on a different scale from what has historically been possible. These voices will increase in number and volume. They cannot be ignored. Over time, we expect governments will become more responsive to issues and concerns raised directly by all their people rather than through intermediaries controlled by a select few.”

Zuckerberg’s prediction was wrong. Instead, more governments around the world have chosen authoritarianism, and platforms have contributed to their repression by making deals with oppressive heads of state; opening doors to dictators; and censoring key activists, journalists, and other changemakers throughout the Middle East and North Africa, sometimes at the behest of other governments:

  • Tunisia: In June 2020, Facebook permanently disabled more than 60 accounts of Tunisian activists, journalists, and musicians on scant evidence. While many were reinstated, thanks to the quick reaction from civil society groups, accounts of Tunisian artists and musicians still have not been restored. We sent a coalition letter to Facebook on the matter but we didn’t receive a public response.
  • Syria: In early 2020, Syrian activists launched a campaign to denounce Facebook’s decision to take down/disable thousands of anti-Assad accounts and pages that documented war crimes since 2011, under the pretext of removing terrorist content. Despite the appeal, a number of those accounts remain suspended. Similarly, Syrians have documented how YouTube is literally erasing their history.
  • Palestine: Palestinian activists and social media users have been campaigning since 2016 to raise awareness around social media companies’ censorial practices. In May 2020, at least 52 Facebook accounts of Palestinian activists and journalists were suspended, and more have since been restricted. Twitter suspended the account of a verified media agency, Quds News Network, reportedly on suspicion that the agency was linked to terrorist groups. Requests to Twitter to look into the matter have gone unanswered. Palestinian social media users have also expressed concern numerous times about discriminatory platform policies.
  • Egypt: In early October 2019, Twitter suspended en masse the accounts of Egyptian dissidents living in Egypt and across the diaspora, directly following the eruption of anti-Sisi protests in Egypt. Twitter suspended the account of one activist with over 350,000 followers in December 2017, and the account has yet to be restored. The same activist’s Facebook account was also suspended in November 2017 and restored only after international intervention. YouTube had removed his account earlier, in 2007.

Examples such as these are far too numerous, and they contribute to the widely shared perception among activists and users in MENA and the Global South that these platforms do not care about them, and often fail to protect human rights defenders when concerns are raised.  

Arbitrary and non-transparent account suspensions and removals of political and dissenting speech have become so frequent and systematic that they cannot be dismissed as isolated incidents or the result of transitory errors in automated decision-making.

While Facebook and Twitter can be swift in responding to public outcry from activists or private advocacy by human rights organizations (particularly in the United States and Europe), in most cases responses to advocates in the MENA region leave much to be desired. End-users are frequently not informed of which rule they violated, and are not provided a means to appeal to a human moderator. 

Remedy and redress should not be a privilege reserved for those who have access to power or can make their voices heard. The status quo cannot continue. 

The MENA region has one of the world’s worst records on freedom of expression, and social media remains critical for helping people connect, organize, and document human rights violations and abuses. 

We urge you to not be complicit in censorship and erasure of oppressed communities’ narratives and histories, and we ask you to implement the following measures to ensure that users across the region are treated fairly and are able to express themselves freely:

  • Do not engage in arbitrary or unfair discrimination. Actively engage with local users, activists, human rights experts, academics, and civil society from the MENA region to review grievances. Regional political, social, cultural context(s) and nuances must be factored in when implementing, developing, and revising policies, products and services. 
  • Invest in the necessary local and regional expertise to develop and implement context-based content moderation decisions aligned with human rights frameworks in the MENA region.  A bare minimum would be to hire content moderators who understand the various and diverse dialects and spoken Arabic in the twenty-two Arab states. Those moderators should be provided with the support they need to do their job safely, healthily, and in consultation with their peers, including senior management.
  • Pay special attention to cases arising from war and conflict zones to ensure content moderation decisions do not unfairly target marginalized communities. For example, documentation of human rights abuses and violations is a legitimate activity distinct from disseminating or glorifying terrorist or extremist content. As noted in a recent letter to the Global Internet Forum to Counter Terrorism, more transparency is needed regarding definitions and moderation of terrorist and violent extremist content (TVEC).
  • Preserve restricted content related to cases arising from war and conflict zones that Facebook makes unavailable, as it could serve as evidence for victims and organizations seeking to hold perpetrators accountable. Ensure that such content is made available to international and national judicial authorities without undue delay.
  • Public apologies for technical errors are not sufficient when erroneous content moderation decisions are not changed. Companies must provide greater transparency, notice, and offer meaningful and timely appeals for users. The Santa Clara Principles on Transparency and Accountability in Content Moderation, which Facebook, Twitter, and YouTube endorsed in 2019, offer a baseline set of guidelines that must be immediately implemented. 

Signed,

Access Now
Arabic Network for Human Rights Information — ANHRI
Article 19
Association for Progressive Communications — APC
Association Tunisienne de Prévention Positive
Avaaz
Cairo Institute for Human Rights Studies (CIHRS)
The Computational Propaganda Project
Daaarb — News — website
Egyptian Initiative for Personal Rights
Electronic Frontier Foundation
Euro-Mediterranean Human Rights Monitor
Global Voices
Gulf Centre for Human Rights (GCHR)
Hossam el-Hamalawy, journalist and member of the Egyptian Revolutionary Socialists Organization
Humena for Human Rights and Civic Engagement
IFEX
Ilam- Media Center For Arab Palestinians In Israel
ImpACT International for Human Rights Policies
Initiative Mawjoudin pour l’égalité
Iraqi Network for Social Media – INSMnetwork
I WATCH Organisation (Transparency International — Tunisia)
Khaled Elbalshy – Daaarb website – Editor in Chief
Mahmoud Ghazayel, Independent
Marlena Wisniak, European Center for Not-for-Profit Law
Masaar — Technology and Law Community
Michael Karanicolas, Wikimedia/Yale Law School Initiative on Intermediaries and Information
Mohamed Suliman, Internet activist
My.Kali magazine — Middle East and North Africa
Palestine Digital Rights Coalition (PDRC)
The Palestine Institute for Public Diplomacy
Pen Iraq
Quds News Network
Ranking Digital Rights
Rima Sghaier, Independent
Sada Social Center
Skyline International for Human Rights
SMEX
Syrian Center for Media and Freedom of Expression (SCM)
The Tahrir Institute for Middle East Policy (TIMEP)
Taraaz
Temi Lasade-Anderson, Digital Action
WITNESS
Vigilance Association for Democracy and the Civic State — Tunisia
7amleh – The Arab Center for the Advancement of Social Media

https://www.eff.org/deeplinks/2020/12/decade-after-arab-spring-platforms-have-turned-their-backs-critical-voices-middle

Bernd Lange sees breakthrough for human rights in EU dual-use export rules

December 12, 2020



On 11 December 2020 Bernd Lange, Vice-chair of the Group of the Progressive Alliance of Socialists and Democrats in the European Parliament, wrote the following piece in New Europe about how, after six years, a European agreement has been reached on stricter rules for the export of dual-use goods, which can be used for both civilian and military ends.


All good things are worth waiting for. After six long years, negotiators from the European Parliament, the Commission and member states finally agreed on stricter rules for the export of dual-use goods, which can be used for both civilian and military ends. Parliament’s perseverance and assertiveness against a blockade by some of the European Union member states has paid off: as of now, respect for human rights will become an export standard.

Up until now, export restrictions applied to aerospace items, navigation instruments or trucks. From now on, these rules will also apply to EU-produced cyber-surveillance technologies, which have demonstrably been abused by authoritarian regimes to spy on opposition movements, for instance during the Arab Spring in 2011.

This is a breakthrough for human rights in trade, overcoming years of various EU governments blocking the inclusion of cyber-surveillance technology in the export control rules for dual-use goods. Without a doubt, technological advances, new security challenges and their demonstrated risks to the protection of human rights required more decisive action and harmonised rules for EU export controls.

Thanks to the stamina of the Parliament, it will now be much more difficult for authoritarian regimes to abuse EU-produced cybersecurity tools such as biometric software or Big Data searches to spy on human rights defenders and opposition activists. Our message is clear: economic interests must not take precedence over human rights. Exporters have to shoulder greater responsibility and apply due diligence to ensure their products are not employed to violate human rights. We have also managed to increase transparency by insisting on listing exports in greater detail in the annual export control reports, which will make it much harder to hide suspicious items.

In a nutshell, we are setting up an EU-wide regime to control cyber-surveillance items that are not listed as dual-use items in international regimes, in the interest of protecting human rights and political freedoms. We strengthened member states’ public reporting obligations on export controls, so far patchy, to make the cyber-surveillance sector, in particular, more transparent. We increased the importance of human rights as licensing criterion and we agreed on rules to swiftly include emerging technologies in the regulation.

This agreement on dual-use items, together with the rules on conflict minerals and the soon to be adopted rules on corporate due diligence, is establishing a new gold standard for human rights in EU trade policy.

I want the European Union to lead globally on rules and values-based trade. These policies show that we can manage globalisation to protect people and the planet. This must be the blueprint for future rule-based trade policy.

Policy response from Human Rights NGOs to COVID-19: Witness

April 5, 2020

In the midst of the COVID-19 crisis, many human rights organisations have been formulating a policy response. While I cannot be exhaustive or undertake comparisons, I will try to give some examples in the course of the coming weeks. Here is the one by Sam Gregory of WITNESS.

…The immediate implications of coronavirus – quarantine, enhanced emergency powers, restrictions on sharing information – make it harder for individuals all around the world to document and share the realities of government repression and private actors’ violations. In states of emergency, authoritarian governments in particular can operate with further impunity, cracking down on free speech and turning to increasingly repressive measures. The threat of coronavirus and its justifying power provides cover for rights-violating laws and measures that history tells us may long outlive the actual pandemic. And the attention on coronavirus distracts focus from rights issues that are both compounded by the impact of the virus and cannot claim the spotlight now.

At WITNESS we are adapting and responding, led by what we learn and hear from the communities of activism, human rights and civic journalism with which we collaborate closely across the world. We will continue to ensure that our guidance on direct documentation helps people document the truth even under trying circumstances and widespread misinformation. We will draw on our experience curating voices and information from closed situations to make sense of confusion. We will provide secure online training while options for physical meetings are curtailed. We will provide meaningful localized guidance on how to document and verify amid an information pandemic; and we will ensure that long-standing struggles are not neglected now, when they need attention most.

In this crisis moment, it is critical that we enhance the abilities and defend the rights of people who document and share critical realities from the ground. Across the three core thematic issues we currently work on, the need is critical. For issues such as video as evidence from conflict zones, the need is acute: these wars continue and reach their apex even as coronavirus takes the attention away. We need only look at the current situation in Idlib, in Yemen, or in other conflict zones in the Middle East.

For other issues, like state violence against minorities, many people already live in a state of emergency.

Coronavirus response in Complexo do Alemão favela, Rio de Janeiro (credit: Raull Santiago)

Favela residents in Brazil have lived with vastly elevated levels of police killings of civilians for years, and now face a parallel health emergency. Meanwhile immigrant communities in the US have lived in fear of ICE for years and must now weigh their physical health against their physical safety and family integrity. Many communities – in Kashmir and in Rakhine State, Burma – live without access to the internet on an ongoing basis and must still try and share what is happening. And for those who fight for their land rights and environmental justice, coronavirus is both a threat to vulnerable indigenous and poor communities lacking health care, sanitation and state support as well as a powerful distraction from their battle against structural injustice.

A critical part of WITNESS’ strategy is our work to ensure that technology companies’ actions and government regulation of technology are accountable to the most vulnerable members of our global society – marginalized populations globally, particularly those outside the US and Europe, as well as human rights defenders and civic journalists. As responses to coronavirus kick in, there are critical implications for how both civic technology and commercial technology are being, and will be, deployed.

Already, coronavirus has acted as an accelerant – like fuel on the fire – to existing trends in technology. Some of these have potentially profound negative impacts for human rights values, human rights documentation and human rights defenders; others may hold a silver lining.

My colleague Dia Kayyali has already written about the sudden shift to much broader algorithmic content moderation that took place last week as Facebook, Twitter, Google and YouTube sent home their human moderators. Over the past years, we’ve seen the implications of both the move to algorithmic moderation and a lack of will and resourcing: from hate speech staying on platforms in vulnerable societies, to the removal of critical war crimes evidence at scale from YouTube, to a lack of accountability for decisions made under the guise of countering terrorist and violent extremist content. But in civil society we did not anticipate that such a broad shift to algorithmic control would happen in such a short period of time. We must closely monitor this change and push for it not to adversely affect societies and critical struggles worldwide at a moment when they are already threatened by isolation and increased government repression. As Dia suggests, now is the moment for these companies to finally make their algorithms and content moderation processes more transparent to critical civil society experts, as well as to reset how they support and treat the human beings who do the dirty work of moderation.

WITNESS’s work on misinformation and disinformation spans a decade of supporting the production of truthful, trustworthy content in war zones, crises and long-standing struggles for rights. Most recently we have focused on the emerging threats from deepfakes and other forms of synthetic media that enable increasingly realistic fakery of what looks like a real person saying or doing something they never did.

We’ve led the first global expert meetings, in Brazil, Southern Africa and Southeast Asia, on what rights-respecting, global responses should look like in terms of understanding threats and solutions. Feedback from these sessions has stressed the need for attention to a continuum of audiovisual misinformation, including ‘shallowfakes’, the simpler forms of miscontextualized and lightly edited videos that dominate attempts to confuse and deceive. Right now, social media platforms are unleashing a series of responses to misinformation around coronavirus – from highlighting authoritative health information from country-level and international sources, to curating resources, offering help centers, and taking down a wider range of content that misinforms, deceives or price-gouges, including from leading politicians such as President Bolsonaro in Brazil. The question we must ask is what we want internet companies to continue to do after the crisis: what should they do about a wider range of misinformation and disinformation outside of health – and what do we not want them to do? We’ll be sharing more about this in the coming weeks.

And where can we find a technological silver lining? One area may be the potential to discover and explore new ways to act in solidarity and agency with each other online. A long-standing area of work at WITNESS is how to use ‘co-presence’ and livestreaming to bridge social distances and help people witness and support one another when physical proximity is not possible.

Our Mobil-Eyes Us project supported favela-based activists to use live video to better engage their audiences to be with them, and provide meaningful support. In parts of the world that benefit from broadband internet access, and the absence of arbitrary shutdowns, and the ability to physically isolate, we are seeing an explosion of experimentation in how to operate better in a world that is both physically distanced, yet still socially proximate. We should learn from this and drive experimentation and action in ensuring that even as our freedom of assembly in physical space is curtailed for legitimate (and illegitimate) reasons, our ability to assemble online in meaningful action is not curtailed but enhanced.

In moments of crisis, good and bad actors alike will try to push the agenda they want. In this moment of acceleration and crisis, WITNESS is committed to an agenda firmly grounded in, and led by, a human rights vision and the wants and needs of vulnerable communities and human rights defenders worldwide.

Coronavirus and human rights: Preparing WITNESS’s response

 

Ross LaJeunesse and human rights policy at Google

January 3, 2020

The former Google executive says he was driven out after trying to start a human rights program.

An illuminated Google logo can be seen in an office building in Switzerland on December 5, 2018. (Arnd Wiegmann / Reuters)

Several newspapers (here the BBC) wrote on 2 January 2020 about ex-Google executive Ross LaJeunesse’s revelations concerning the firm’s human rights policy. This matters more than usual in view of Google’s self-professed commitment to human rights, e.g. in the context of the Global Network Initiative (GNI), which brings information technology companies together with NGOs, investors and academics. The founding companies were Google, Microsoft, and Yahoo!. GNI’s principles and guidelines provide companies with a framework for responding to government requests in a manner that protects and advances freedom of expression and privacy. Companies that join GNI agree to independent assessments of their record in implementing these principles and guidelines [see https://humanrightsdefenders.blog/2013/05/23/facebook-joins-the-global-network-initiative-for-human-rights/]. Google has also shown commitment by providing funding [see e.g. https://humanrightsdefenders.blog/2019/05/08/excellent-news-huridocs-to-receive-1-million-from-google-for-ai-work/].

A former Google executive has raised concerns about the tech giant’s human rights policies as it eyes expansion in China and elsewhere. Ross LaJeunesse, the firm’s former head of international relations (until May last year), said he was “sidelined” after he pushed the company to take a stronger stance. Google defended its record in a statement, saying it has an “unwavering commitment” to human rights.

Mr LaJeunesse is now campaigning for a seat in the US Senate. He said his experience at Google convinced him of the need for tougher tech regulations. “No longer can massive tech companies like Google be permitted to operate relatively free from government oversight,” he wrote in a post on Medium.

Google’s main search business quit China in 2010 in protest of the country’s censorship laws and alleged government hacks, but it has since explored ways to return to that major market, stirring controversy. LaJeunesse said Google rebuffed his efforts to formalise a company-wide programme for human rights review, even as it worked to expand in countries such as China and Saudi Arabia. “Each time I recommended a Human Rights Program, senior executives came up with an excuse to say no,” he wrote. “I then realized that the company had never intended to incorporate human rights principles into its business and product decisions. Just when Google needed to double down on a commitment to human rights, it decided to instead chase bigger profits and an even higher stock price.”

Google said it conducts human rights assessments for its services and does not believe the more centralised approach recommended by Mr LaJeunesse was best, given its different products.


How can the human rights defenders use new information technologies better?

November 28, 2019

Mads Gottlieb (twitter: @mads_gottlieb) wrote in Impakter about human rights, technology and partnerships, stating that these technologies have the potential to tremendously facilitate the work of human rights defenders, whether they are used to document facts about investigations or as preventive measures to avoid violations. His main message in this short article is an appeal to the human rights sector at large: use technology more creatively, make technology upgrades a top priority, and engage with the technology sector in this difficult endeavor. The human rights sector will never be able to develop the newest technologies itself, but the opportunities that technology provides are something it needs to make use of now, in collaboration with the technology sector.

…Several cases show that human rights are under threat and that it is difficult to investigate and gather the necessary facts in time to protect them. Duterte in the Philippines ordered the police to shoot activists who demonstrated against extra-judicial killings. He later tried to reduce the funding of the Philippines National Human Rights Commission to 1 USD a year. This threat followed 15 months of investigation into the killings, to which Duterte responded with the claim that the Commission was “useless and defended criminals’ rights.”

Zimbabwe is another country with a difficult environment for human rights defenders. It is not surprising that few people speak out, since the few who dare to demonstrate or voice opposing political views disappear. A famous example is the activist and journalist Itai Dzamara of Occupy Africa Unity Square. He was allegedly beaten in 2014, and in 2015 he went missing and was never found. His disappearance occurred after a period of public demonstrations against Mugabe’s regime. Adding to the challenging conditions that call for better tools to defend human rights is the fact that many European countries are digitalising their public services. The newly introduced data platforms store and process sensitive information about the population, such as gender, ethnicity, sexual orientation, and past health records – information that can easily be used for discriminatory purposes, whether intentionally or not.

Human rights defenders typically struggle to find adequate resources for their daily operations and as a result, investments in technology often come second. It is rare for human rights defenders to have anything beyond the minimum requirements, such as the internally-facing maintenance of an operational and secure internet connection, a case system, or a website. At the same time, global technology companies develop new technologies such as blockchain, artificial intelligence, and advanced data and surveillance techniques. These technologies have the potential to tremendously facilitate human rights defenders in their work, whether they are used to document facts about investigations, or as preventive measures to avoid violations. It is also important to facilitate and empower rights-holders in setting up and using networks and platforms that can help notify and verify violations quickly. 

Collaboration is an excellent problem-solving approach, and human rights organizations are well aware of it: they engage in multiple partnerships with important actors. The concern is therefore not a lack of collaboration, but whether they adequately prioritize what is now the world’s leading sector — technology (the top 5 on Forbes’ list of most valuable brands are all technology companies: Apple, Google, Microsoft, Amazon, and Facebook). It is not up to the technology sector to engage with the human rights sector (whether they want to or not); it should be a top priority for the human rights sector to try to reduce its technology gap, in the interest of human rights.

There are several partnership opportunities, and many are easy to get started with and do not require monetary investments. One opportunity is to partner with tech universities, which have the expertise to develop new types of secure, rapid monitoring systems. Blockchain embodies most of the principles that human rights work embraces, such as transparency, equality and accountability, and rapid response times are possible. So why not collaborate with universities? Another opportunity is collaborating with institutions that manage satellite images. Images provide very solid proof of changes in landscape; examples include deforestation that threatens indigenous people, and the removal or burning of villages over a short period of time. A third opportunity is to get into dialogue with the technology giants that develop these new technologies and, rather than asking for monetary donations, ask for input on how the human rights sector can effectively leverage technology.
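
Since the satellite-image idea is the most concrete of the three, here is a minimal sketch of what such change detection can look like (my illustration, not the author’s; it assumes numpy and two co-registered grayscale images of the same area taken months apart):

```python
import numpy as np

def changed_fraction(before, after, threshold=0.2):
    """before/after: 2-D arrays of brightness in [0, 1] covering the same
    area. Returns the fraction of pixels whose brightness changed markedly."""
    diff = np.abs(after.astype(float) - before.astype(float))
    return float((diff > threshold).mean())

# Toy example: a 100x100 scene where a 20x20 patch of forest is cleared
# (goes dark) between the two acquisition dates.
before = np.full((100, 100), 0.8)
after = before.copy()
after[40:60, 40:60] = 0.2               # simulated clearing
print(changed_fraction(before, after))  # 0.04 -> 4% of the area changed
```

In practice, the hard parts are exactly what partner institutions would bring: registered, cloud-free imagery and the local knowledge to say what a 4% change means on the ground.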

 

NSO accused of largest attack on civil society through its spyware

October 30, 2019

I blogged about the spyware firm NSO before [see e.g. https://humanrightsdefenders.blog/2019/09/17/has-nso-really-changed-its-attitude-with-regard-to-spyware/], but now WhatsApp has joined the critics with a lawsuit.

On May 13th, WhatsApp announced that it had discovered a vulnerability in its software. In a statement, the company said that the spyware exploiting it appeared to be the work of a commercial entity, but it did not identify the perpetrator by name. WhatsApp patched the vulnerability and, as part of its investigation, identified more than fourteen hundred phone numbers that the malware had targeted. In most cases, WhatsApp had no idea whom the numbers belonged to, because of the company’s privacy and data-retention rules. So WhatsApp gave the list of phone numbers to the Citizen Lab, a research laboratory at the University of Toronto’s Munk School of Global Affairs, where a team of cyber experts tried to determine whether any of the numbers belonged to civil-society members.

On Tuesday 29 October 2019, WhatsApp took the extraordinary step of announcing that it had traced the malware back to NSO Group, a spyware-maker based in Israel, and filed a lawsuit against the company—and also its parent, Q Cyber Technologies—in a Northern California court, accusing it of “unlawful access and use” of WhatsApp computers. According to the lawsuit, NSO Group developed the malware in order to access messages and other communications after they were decrypted on targeted devices, allowing intruders to bypass WhatsApp’s encryption.

NSO Group said in a statement in response to the lawsuit, “In the strongest possible terms, we dispute today’s allegations and will vigorously fight them. The sole purpose of NSO is to provide technology to licensed government intelligence and law enforcement agencies to help them fight terrorism and serious crime. Our technology is not designed or licensed for use against human rights activists and journalists.” In September, NSO Group announced the appointment of new, high-profile advisers, including Tom Ridge, the first U.S. Secretary of Homeland Security, in an effort to improve its global image.

In a statement to its users on Tuesday, WhatsApp said, “There must be strong legal oversight of cyber weapons like the one used in this attack to ensure they are not used to violate individual rights and freedoms people deserve wherever they are in the world. Human rights groups have documented a disturbing trend that such tools have been used to attack journalists and human rights defenders.”

John Scott-Railton, a senior researcher at the Citizen Lab, said, “It is the largest attack on civil society that we know of using this kind of vulnerability.”

https://www.newyorker.com/news/news-desk/whatsapp-sues-an-israeli-tech-firm-whose-spyware-targeted-human-rights-activists-and-journalists

https://uk.finance.yahoo.com/news/whatsapp-blames-sues-mobile-spyware-192135400.html

How social media companies can identify and respond to threats against human rights defenders

October 15, 2019

Image from Shutterstock.

Ginna Anderson writes in ABA Abroad:

…Unfortunately, social media platforms are now a primary tool for coordinated, state-aligned actors to harass, threaten and undermine advocates. Although public shaming, death threats, defamation and disinformation are not unique to the online sphere, the nature of the internet has given them unprecedented potency. Bad actors are able to rapidly deploy their poisoned content on a vast scale. Social media companies have only just begun to recognize, let alone respond to, the problem. Meanwhile, individuals targeted through such coordinated campaigns must painstakingly flag individual pieces of content, navigate opaque corporate structures and attempt to survive the fallout. To address this crisis, companies such as Facebook, Twitter and YouTube must dramatically increase their capacity and will to engage in transparent, context-driven content moderation.

For human rights defenders, the need is urgent. …Since 2011, the ABA Center for Human Rights (CHR) has noted with concern the coordination of “traditional” judicial harassment of defenders by governments, such as frivolous criminal charges or arbitrary detention, with online campaigns of intimidation. State-aligned online disinformation campaigns against individual defenders often precede or coincide with official investigations and criminal charges.

……

While social media companies generally prohibit incitement to violence and hate speech on their platforms, CHR has had to engage in additional advocacy with them to request the removal of specific pieces of content or accounts that target defenders. This extra advocacy has been required even where the content clearly violates a company’s terms of service, and despite initial flagging by a defender. The situation is even more difficult where the threatening content is only recognizable with sufficient local and political context. The various platforms all rely on artificial intelligence, to varying degrees, to identify speech that violates their respective community standards. Yet current iterations of artificial intelligence are often unable to adequately evaluate context and intent.
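
To see why, consider a deliberately naive keyword filter, sketched below as a hypothetical illustration (the word list and example posts are mine, not CHR’s): without context it cannot distinguish an attack on a defender from a report documenting that attack.

```python
# Terms that, in campaigns like the Guatemalan one described below, are used
# as coded attacks on indigenous leaders (hypothetical list for illustration).
FLAGGED_TERMS = {"terrorist", "guerrilla", "communist"}

def naive_flag(text):
    """Flags any post containing a listed term, regardless of intent."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return bool(words & FLAGGED_TERMS)

attack = "That indigenous leader is a terrorist."             # targets a defender
report = 'They called her a "terrorist" to incite violence.'  # documents the attack

print(naive_flag(attack))  # True - the attack is flagged
print(naive_flag(report))  # True - so is the documentation of it
```

A human moderator with local context separates the two cases instantly; a keyword or shallow-classifier pipeline, as the paragraph above argues, often cannot.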

Online intimidation and smear campaigns against defenders often rely on existing societal fault lines to demean and discredit advocates. In Guatemala, CHR recently documented a coordinated social media campaign to defame, harass, intimidate and incite violence against human rights defenders. Several were linked with so-called “net centers,” where users were reportedly paid to amplify hateful content across platforms. Often, the campaigns relied on “coded” language that hark back to Guatemala’s civil war and the genocide of Mayan communities by calling indigenous leaders communists, terrorists and guerrillas.

These terms appear to have largely escaped social media company scrutiny, perhaps because none is a racist slur per se. And yet the proliferation of these online attacks, as well as the status of those putting out the content, is contributing to a worsening climate of violence and impunity for violence against defenders, by specifically alluding to terms used to justify violence against indigenous communities. NPR reports that in 2018 alone, 26 indigenous defenders were murdered in Guatemala. In such a climate, the fear and intimidation felt by those targeted in such campaigns is not hyperbolic but based on their understanding of how violence can be sparked in Guatemala.

In order to address such attacks, social media companies must adopt policies that allow them to designate defenders as temporarily protected groups in countries that are characterized by state-coordinated or state-condoned persecution of activists. This is in line with international law that prohibits states from targeting individuals for serious harm based on their political opinion. To increase their ability to recognize and respond to persecution and online violence against human rights defenders, companies must continue to invest in their context-driven content moderation capacity, including complementing algorithmic monitoring with human content moderators well-versed in local dialects and historical and political context.

Context-driven content moderation should also take into account factors that increase the risk that online behavior will contribute to offline violence by identifying high-risk countries. These factors include a history of intergroup conflict and an overall increase in the number of instances of intergroup violence in the past 12 months; a major national political election in the next 12 months; and significant polarization of political parties along religious, ethnic or racial lines. Countries where these and other risk factors are present call for proactive approaches to identify problematic accounts and coded threats against defenders and marginalized communities, such as those shown in Equality Labs’ “Facebook India” report.
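
As a hedged sketch of how the risk factors just listed might be combined into a simple screen (the names, weights, and two-factor threshold are my assumptions for illustration, not any platform’s actual policy):

```python
from dataclasses import dataclass

@dataclass
class CountrySignals:
    intergroup_conflict_history: bool  # documented history of intergroup conflict
    violence_rising_12mo: bool         # more intergroup violence in past 12 months
    election_next_12mo: bool           # major national election in next 12 months
    polarized_parties: bool            # parties split on religious/ethnic/racial lines

def is_high_risk(s: CountrySignals, min_factors: int = 2) -> bool:
    """Counts the risk factors present; two or more triggers proactive review."""
    score = sum([s.intergroup_conflict_history, s.violence_rising_12mo,
                 s.election_next_12mo, s.polarized_parties])
    return score >= min_factors

example = CountrySignals(True, True, True, False)
print(is_high_risk(example))  # True -> proactively monitor accounts and coded threats
```

Any real screen would weight and source these signals far more carefully; the point is only that the factors named above are concrete enough to operationalize.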

Companies should identify, monitor and be prepared to deplatform key accounts that are consistently putting out denigrating language and targeting human rights defenders. This must go hand in hand with the greater efforts that companies are finally beginning to take to identify coordinated, state-aligned misinformation campaigns. Focusing on the networks of users who abuse the platform, instead of looking solely at how the online abuse affects defenders’ rights online, will also enable companies to more quickly evaluate whether the status of the speaker increases the likelihood that others will take up any implicit call to violence or will be unduly influenced by disinformation.

This abuser-focused approach will also help to decrease the burden on defenders to find and flag individual pieces of content and accounts as problematic. Many of the human rights defenders with whom CHR works are giving up on flagging, a phenomenon we refer to as flagging fatigue. Many have become fatalistic about the level of online harassment they face. This is particularly alarming, as advocates targeted online may develop such thick skins that they are no longer able to assess when their actual risk of physical violence has increased.

Finally, it is vital that social media companies pursue, and civil society demand, transparency in content moderation policy and decision-making, in line with the Santa Clara Principles. Put forward in 2018 by a group of academic experts, organizations and advocates committed to freedom of expression online, the principles are meant to guide companies engaged in content moderation and ensure that the enforcement of their policies is “fair, unbiased, proportional and respectful of users’ rights.” In particular, the principles call upon companies to publicly report on the number of posts and accounts taken down or suspended on a regular basis, as well as to provide adequate notice and meaningful appeal to affected users.

CHR routinely supports human rights defenders facing frivolous criminal charges related to their human rights advocacy online or whose accounts and documentation have been taken down absent any clear justification. This contributes to a growing distrust of the companies among the human rights community as apparently arbitrary decisions about content moderation are leaving advocates both over- and under-protected online.

As the U.N. special rapporteur on freedom of expression explained in his 2018 report, content moderation processes must include the ability to appeal the removal, or refusal to remove, content or accounts. Lack of transparency heightens the risk that calls to address the persecution of human rights defenders online will be subverted into justifications for censorship and restrictions on speech that is protected under international human rights law.

A common response when discussing the feasibility of context-driven content moderation is to compare it to reviewing all the grains of sand on a beach. But human rights defenders are not asking for the impossible. We are merely pointing out that some of that sand is radioactive—it glows in the dark, it is lethal, and there is a moral and legal obligation upon those that profit from the beach to deal with it.

Ginna Anderson, senior counsel, joined ABA CHR in 2012. She is responsible for supporting the center’s work to advance the rights of human rights defenders and marginalized communities, including lawyers and journalists at risk. She is an expert in health and human rights, media freedom, freedom of expression and fair trial rights. As deputy director of the Justice Defenders Program since 2013, she has managed strategic litigation, fact-finding missions and advocacy campaigns on behalf of human rights defenders facing retaliation for their work in every region of the world.

http://www.abajournal.com/news/article/how-can-social-media-companies-identify-and-respond-to-threats-against-human-rights-defenders

Has NSO really changed its attitude with regard to spyware?

September 17, 2019

Cyber-intelligence firm NSO Group has introduced a new Human Rights Policy and a supporting governance framework in an apparent attempt to boost its reputation and comply with the United Nations’ Guiding Principles on Business and Human Rights. This follows recent criticism that its technology was being used to violate the rights of journalists and human rights defenders. A recent investigation found the company’s Pegasus spyware was used against a member of the non-profit Amnesty International. [see: https://humanrightsdefenders.blog/2019/02/19/novalpina-urged-to-come-clean-about-targeting-human-rights-defenders/]

NSO’s new human rights policy aims to identify, prevent and mitigate the risks of adverse human rights impacts. It includes a thorough evaluation of the company’s sales process for potential adverse human rights impacts arising from the misuse of NSO products, and it introduces contractual obligations requiring NSO customers to limit the use of the company’s products to the prevention and investigation of serious crimes. Specific attention will be paid to protecting individuals or groups who could be at risk of arbitrary digital surveillance and communication interception because of race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status, or their exercise or defence of human rights. Rules have been set out to protect whistle-blowers who wish to report concerns about misuse of NSO technology.

Amnesty International is supporting current legal action against the Israeli Ministry of Defence, demanding that it revoke NSO Group’s export licence. In January 2020 an Israeli court ordered a closed-door hearing.

Danna Ingleton, Deputy Program Director for Amnesty Tech, said: “While on the surface it appears a step forward, NSO has a track record of refusing to take responsibility. The firm has sold invasive digital surveillance to governments who have used these products to track, intimidate and silence activists, journalists and critics.”

CEO and co-founder Shalev Hulio counters: “NSO has always taken governance and its ethical responsibilities seriously, as demonstrated by our existing best-in-class customer vetting and business decision process. With this new Human Rights Policy and governance framework, we are proud to further enhance our compliance system to such a degree that we will become the first company in the cyber industry to be aligned with the Guiding Principles.”

https://www.verdict.co.uk/nso-group-new-human-rights-policy/

https://www.ynetnews.com/article/HJSNKJAeU