The Council has approved conclusions on the EU Action Plan on Human Rights and Democracy 2020-2024. The Action Plan sets out the EU’s level of ambition and priorities in this field in its relations with all third countries.
The conclusions acknowledge that while there have been leaps forward, there has also been a pushback against the universality and indivisibility of human rights. The ongoing COVID-19 pandemic and its socio-economic consequences have had an increasingly negative impact on all human rights, democracy and rule of law, deepening pre-existing inequalities and increasing pressure on persons in vulnerable situations.
In 2012, the EU adopted the Strategic Framework on Human Rights and Democracy which set out the principles, objectives and priorities designed to improve the effectiveness and consistency of EU policy in these areas. To implement the EU Strategic Framework of 2012, the EU has adopted two EU Action Plans (2012-2014 and 2015-2019).
The new Action Plan for 2020-2024 builds on the previous action plans and continues to focus on long-standing priorities such as supporting human rights defenders and the fight against the death penalty.
The Action Plan identifies five overarching priorities: (1) protecting and empowering individuals; (2) building resilient, inclusive and democratic societies; (3) promoting a global system for human rights and democracy; (4) new technologies: harnessing opportunities and addressing challenges; and (5) delivering by working together. It also reflects the changing context, with attention to new technologies and to the link between global environmental challenges and human rights.
DefendDefenders (the East and Horn of Africa Human Rights Defenders Project) seeks to strengthen the work of human rights defenders (HRDs) in the East and Horn of Africa sub-region by reducing their vulnerability to the risk of persecution and by enhancing their capacity to effectively defend human rights. DefendDefenders focuses its work on Burundi, Djibouti, Eritrea, Ethiopia, Kenya, Rwanda, Somalia (and Somaliland), South Sudan, Sudan, Tanzania, and Uganda.
DefendDefenders is recruiting a Technology Programme Manager for its work in supporting HRDs. Under the overall supervision of the Director of Programmes and Administration and the Executive Director, and in direct partnership with other staff members, the Technology Programme Manager shall be responsible for, but not limited to, the following duties:
Key Responsibilities
Manage and give direction to the Technology Programme and projects;
Empower and mentor the team to take responsibility for their tasks and encourage a spirit of teamwork;
Manage overall operational and financial responsibilities of the team against project plans and manage the team’s day-to-day activities;
Participate in management meetings and contribute to grant and proposal design and implementation for the Technology Programme;
Ensure proper adoption and usage of internal IT tools and organisation systems by designing training programmes for staff and streamlining/recommending systems that can improve operational efficiency;
Communicate regularly with other managers, Director of Programmes & Administration and the Executive Director within the organisation. Ensure that the team works closely with other departments;
Plan budgets and work plans from inception to completion;
Work with partners, consultants, and service providers to ensure delivery of project goals;
Design and implement the IT policy, security protocols and best practice guides for the organisation and partner organisations; and
Represent DefendDefenders and the Technology Programme externally, develop partnerships, and attract funding and resources
Working conditions
Full-time position based in Kampala, Uganda;
The selected applicant must be able to relocate to Kampala immediately, or within a short timeframe; and
Health insurance (in Uganda) and travel insurance are provided.
Requirements
Previous experience in managing a team;
Strong communication and presentation skills;
Strong interpersonal skills and the ability to establish and maintain effective working relationships in a culturally diverse environment;
Willingness to travel;
Self-motivation, organisation, and the ability to meet deadlines with minimal supervision;
Resourcefulness and problem-solving aptitude; and
Bachelor’s degree in Computer Science, Information Technology or related discipline, plus professional certifications.
Languages
Fluency in English is a must (spoken and written). Fluency in French is a strong asset, and in Arabic an asset.
Location
The position will be based in Kampala, Uganda, with frequent travel within and outside the country. Applicants should be eligible to work in Uganda without restriction.
Applicants should send a letter of motivation, CV, and contacts of three references to: jobs@defenddefenders.org by 15 November 2020. Do not send scanned copies of certificates. Interviews will be held in person (in Kampala, Uganda) or online in late November.
The subject line of the email should read “Application for Technology Programme Manager position.”
With teams increasingly working remotely during COVID-19, we are all facing questions regarding the security of our communication with one another: Which communication platform or tool is best to use? Which is the most secure for holding sensitive internal meetings? Which will have adequate features for online training sessions or remote courses without compromising the privacy and security of participants?
Front Line Defenders presents this simple overview which may help you choose the right tool for your specific needs.
With end-to-end encryption (e2ee), your message gets encrypted before it leaves your device and only gets decrypted when it reaches the intended recipient’s device. Using e2ee is important if you plan to transmit sensitive communication, such as during internal team or partners meetings.
With encryption to-server, your message gets encrypted before it leaves your device, but it is decrypted on the server, processed, and encrypted again before being sent to the recipient(s). Encryption to-server is acceptable only if you fully trust the server.
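The difference between the two models can be sketched in a few lines of Python. This is a toy illustration only, not real cryptography: the XOR "cipher" below stands in for a proper encryption scheme, and all names are ours. The point is simply what the relay server can and cannot read in each model.

```python
# Toy contrast of end-to-end encryption vs. encryption to-server.
# NOT real cryptography: xor_cipher stands in for a proper cipher.
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # One-time-pad-style XOR; reversible with the same key.
    return bytes(b ^ k for b, k in zip(data, key))

message = b"meeting moved to 14:00"

# --- End-to-end encryption: the server relays ciphertext it cannot read ---
endpoint_key = os.urandom(len(message))      # shared only by the two endpoints
ciphertext = xor_cipher(message, endpoint_key)
seen_by_server_e2ee = ciphertext             # server never holds endpoint_key
received = xor_cipher(ciphertext, endpoint_key)  # decrypted on recipient's device

# --- Encryption to-server: the server decrypts, processes, re-encrypts ---
client_server_key = os.urandom(len(message))     # key shared WITH the server
in_transit = xor_cipher(message, client_server_key)
seen_by_server_to_server = xor_cipher(in_transit, client_server_key)
# The server now holds the plaintext and could store or inspect it.
```

In the first case the server only ever sees `ciphertext`; in the second, `seen_by_server_to_server` equals the original message, which is why trusting the server matters.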
Why Zoom or other platforms/tools are not listed here: There are many platforms which can be used for group communication. In this guide we focused on those we think will deliver good user experiences and offer the best privacy and security features. Of course, none of the platforms can offer 100% privacy or security; as with all communications, there is a margin of risk. We have not included tools such as Zoom, Skype, Telegram etc. in this guide, as we believe that the margin of risk incurred whilst using them is too wide, and therefore Front Line Defenders does not feel comfortable recommending them.
Surveillance and behaviour: Some companies, like Facebook, Google, Apple and others, regularly collect, analyse and monetise information about users and their online activities. Most, if not all, of us are already profiled by these companies to some extent. If the communication is encrypted to-server, the owners of the platform may store this communication. Even with end-to-end encryption, metadata such as your location, the time of the call, whom you connect with, and how often may still be stored. If you are uncomfortable with this data being collected, stored and shared, we recommend refraining from using those companies' services.
The level of protection of your call depends not only on which platform you choose, but also on the physical security of the space you and others on the call are in and the digital protection of the devices you and others use for the call.
Caution: Use of encryption is illegal in some countries. You should understand and consider the law in your country before deciding on using any of the tools mentioned in this guide.
Criteria for selecting the tools or platforms
Before selecting any communication platform, app or program it is always strongly recommended that you research it first. Below we list some important questions to consider:
Is the platform mature enough? How long has it been running for? Is it still being actively developed? Does it have a large community of active developers? How many active users does it have?
Does the platform provide encryption? Is it end-to-end encrypted or just to-server encrypted?
In which jurisdiction is the owner of the platform and where are the servers located? Does this pose a potential challenge for you or your partners?
Does the platform allow for self-hosting?
Is the platform open source? Does it provide source code to anyone to inspect?
Was the platform independently audited? When was the last audit? What do experts say about the platform?
What is the history of the development and ownership of the platform? Have there been any security challenges? How have the owners and developers reacted to those challenges?
How do you connect with others? Do you need to provide phone number, email or nickname? Do you need to install a dedicated app/program? What will this app/program have access to on your device? Is it your address book, location, mic, camera, etc.?
What is stored on the server? What does the platform’s owner have access to?
Does the platform have features needed for the specific task/s you require?
Is the platform affordable? This needs to include potential subscription fees, learning and implementing, and possible IT support needed, hosting costs, etc.
The document then proceeds to give more detailed information about each tool/service listed in the guide.
Video call, webinar, and online training recommendations
Video calls recommendations: In the current situation you will undoubtedly find yourself organizing or participating in many more video calls than before. It may not be obvious to everyone how to do it securely and without exposing yourself and your data to too much risk:
Assume that when you connect to talk your camera and microphone may be turned on by default. Consider covering your camera with a sticker (making sure it doesn’t leave any sticky residue on the camera lens) and only remove it when you use the camera.
You may not want to give away too much information about your house, family pictures, notes on walls or boards, etc. Be mindful of the background: who and what else is in the frame aside from yourself? Test before the call by, for example, opening meet.jit.si and clicking the GO button to get a random empty room with your camera switched on, to see what is in the picture. Consider clearing your background of clutter.
Also be mindful of who can be heard in the background. Maybe close the door and windows, or alert those sharing your space about your meeting.
It is best to position your face so your eyes are roughly in the upper third of the picture without cutting off your head. Unless you want to conceal your face, do not sit with your back to a light or a window; daylight or a lamp in front of you is best. Stay within the camera frame. You may want to look into the lens from time to time to make “eye contact” with others. If you are using your cellphone, rest it against a steady object (e.g. a pile of books) so that the video picture remains stable.
You may want to mute your microphone to prevent others hearing you typing notes or any background noise as it can be very distracting to others on the call.
If the internet connection is slow, you may want to switch off your camera, pause other programs, mute the microphone, and ask others to do the same. You may also want to try sitting closer to the router, or connecting your computer directly to the router with an ethernet cable. If you share your internet connection with others, you may ask them to reduce heavy internet use for the duration of your call.
It is very tempting to multitask, especially during group calls, but you may very soon realise that you are lost in the meeting, and others may realise this too.
If this is a new situation for you or you are using a new calling tool, you may want to give yourself a few extra minutes to learn and test it prior to the scheduled meeting to get familiar with options like turning on/off the camera and the microphone, etc.
If possible, prepare and test a backup communication plan in case you have trouble connecting with others; for example, add everyone to a Signal group so you can still text chat or troubleshoot problems on the call. It can also help to have an alternative browser installed on your computer, or an alternative app on your phone, to try connecting with.
If you would like to organise a webinar or online training, you can use the tools outlined above for group communication. Some best practices include:
Make sure that you know who is connected. If needed, check the identities of all participants by asking them to speak. Do not assume you know who is connected just by reading assigned names.
Agree on ground-rules, like keeping cameras on/off, keeping microphone on/off when one is not speaking, flagging when participants would like to speak, who will be chairing the meeting, who will take notes – where and how will those notes be written and then distributed, is it ok to take screenshots of a video call, is it ok to record the call, etc.
Agree on clear agendas and time schedules. If your webinar is longer than one hour, it is probably best to divide it into clear one-hour sessions separated by some time agreed with participants, so they have time to have a short break. Plan for the possibility that not all participants will return after a break. Have alternative methods to reach out to them to remind them to return, like Signal/Wire/DeltaChat contacts for them.
It is easiest to use a meeting service that participants connect to using a browser without a need to register or install a special program, one that also gives the webinar organiser the ability to mute microphones and close cameras of participants.
Prior to the call, check with all participants whether they have particular needs, such as if they are deaf or hard of hearing, if they are visually impaired or blind, or any other conditions which would affect their participation in the call. With this in mind, ensure that the selected platform will accommodate these needs and to be sure, test the platform beforehand. Simple measures can also improve inclusion and participation in your calls, such as turning on cameras when possible, as it can allow for lip-reading.
Encourage all participants to speak slowly and to avoid jargon where possible, as the working language of the call is most likely not everyone's mother tongue. Naturally, there will be moments of silence and pauses; embrace them. They can support understanding, are helpful for participants who are hard of hearing and for interpreters, and also help assistive technology pick up words correctly.
A multi-year investigation by Citizen Lab has unearthed a hack-for-hire group from India that targeted journalists, advocacy groups, government officials, hedge funds, and human rights defenders.
Jay Jay – a freelance technology writer – posted an article in Teiss on 9 June 2020 stating that Citizen Lab revealed in a blog post published Tuesday that the hack-for-hire group's identity was established after the security firm investigated a custom URL shortener that the group used to shorten the URLs of phishing websites prior to targeting specific individuals and organisations. Citizen Lab has named the group “Dark Basin”.
“Over the course of our multi-year investigation, we found that Dark Basin likely conducted commercial espionage on behalf of their clients against opponents involved in high profile public events, criminal cases, financial transactions, news stories, and advocacy,” the firm said.
It added that the hack-for-hire group targeted thousands of individuals and organisations in six continents, including senior politicians, government prosecutors, CEOs, journalists, and human rights defenders, and is linked to BellTroX InfoTech Services, an India-based technology company.
…The range of targets, which included two clusters of advocacy organisations in the United States working on climate change and net neutrality, made it clear to Citizen Lab that Dark Basin was not state-sponsored but a hack-for-hire operation.
…As further proof of Dark Basin's links with BellTroX, researchers found that several BellTroX employees boasted on LinkedIn of capabilities like email penetration, exploitation, conducting cyber intelligence operations, pinging phones, and corporate espionage. BellTroX's LinkedIn pages also received endorsements from individuals working in various fields of corporate intelligence and private investigation, including private investigators with prior roles in the FBI, police, military, and other branches of government.
The list of organisations targeted by Dark Basin over the past few years includes Rockefeller Family Fund, Greenpeace, Conservation Law Foundation, Union of Concerned Scientists, Oil Change International, Center for International Environmental Law, Climate Investigations Center, Public Citizen, and 350.org. The hack-for-hire group also targeted several environmentalists and individuals involved in the #ExxonKnew campaign that wanted Exxon to face trial for hiding facts about climate change for decades.
A separate investigation into Dark Basin by NortonLifeLock Labs, which they named “Mercenary.Amanda”, revealed that the hack-for-hire group executed persistent credential spearphishing against a variety of targets in several industries around the globe going back to at least 2013…
Towards an ecosystem of interoperable human rights tools
Social media posts can contain critical evidence of abuses that will one day help deliver justice. That’s why legal advocacy group Global Legal Action Network (GLAN) and their partners are saving copies of online content that show attacks targeting civilians in Yemen. How? They’re using a new integration between Digital Evidence Vault and our Uwazi platform. Read more >>>
Using machine learning to help defenders find what they need
Machine learning could have an enormous impact on the accessibility of human rights information. How? By automating parts of the time-intensive process of adding documents to a collection. In collaboration with some of our partners and Google.org Fellows, we’re working on doing just that. Check it out >>>
How to research human rights law for advocacy
International law can be a powerful tool for local changemakers to advance protections for human rights. But there’s no central place for finding relevant legislation, commitments and precedents. So together with Advocacy Assembly, we created a free 30-minute course to help human rights defenders navigate the information landscape. Learn more >>>
A database to magnify personal stories and identify trends
Pakistan has one of the world’s largest death rows. At the same time, 85% of death sentences are overturned on appeal. Who are the people convicted? Juveniles, people with disabilities or mental illness, and those from economically disadvantaged backgrounds. We partnered with Justice Project Pakistan to launch a database to shine a light on the situation. Take a look >>>
Improvements to our info management platform Uwazi
We rolled out several new features to Uwazi. CSV import allows for the quick creation of collections without the need to manually input large amounts of data. The activity log gives a comprehensive overview of all additions, edits and deletions (or lack thereof). And two-factor verification offers an extra layer of protection. Speaking of security, we also had Uwazi audited by a third party and made improvements based on their findings. Explore the Uwazi portfolio >>>
A growing, moving team and a heartfelt ‘thank you’ to Bert
We welcomed several new members to our team: two project managers, a UX designer, two software developers, and a communications coordinator. And we’re currently seeking an info management intern (deadline: 20 December 2019). We gave a warm farewell to Project Manager Hyeong-sik Yoo and Software Developer Clément Habinshuti, and said “thank you” to Senior Documentalist Bert Verstappen, who retired after 32 incredible years.
Executive Director Friedhelm Weinberg goes on parental leave. For the first three months of 2020 while he’s off, Director of Programmes Kristin Antin will be stepping in.
…And some human rights defenders are technologists: building tools to defend or enhance the practice of human rights, and calling out the errors or lies of those who might misuse technology against its users. At this year's Internet Governance Forum in Berlin, civil society groups mourned a growing trend around the world: the targeted harassment and detention of digital rights defenders by the powerful. Digital rights defenders include technologists who work to create or investigate digital tools, and who work to improve the security and privacy of vital infrastructure like the Internet and e-voting devices. As the declaration, signed by a coalition of NGOs, notes:
The work digital rights defenders do in defense of privacy is fundamental for the protection of human rights. When they raise awareness about the existence of vulnerabilities in systems, they allow the public and private sector to find solutions that improve infrastructure and software security for the benefit of the public. Furthermore, their work as security advisers for journalists and human rights activists is of vital importance for the safety of journalists, activists and other human rights defenders.
The problem is not confined to, but is particularly pressing in, Latin America. As 2019 draws to a close, Swedish security researcher Ola Bini remains in a state of legal limbo in Ecuador after a politically-led prosecution sought to connect his work building secure communication tools to a vague and unsubstantiated conspiracy of Wikileaks-related hacking. Meanwhile in Argentina, e-voting activist Javier Smaldone remains the target of a tenuous hacking investigation.
The SDD Contributor of Stock Daily Dish posted on 15 December 2019 a detailed piece on the shortcomings of the panic buttons issued by Colombia and Mexico.
A GPS-enabled “panic button” that Colombia’s government has issued to about 400 people is supposed to summon help for human rights defenders or journalists if they are threatened. But the article claims that it has technical flaws that could let hostile parties disable it, eavesdrop on conversations, and track users’ movements, according to an independent security audit conducted for The Associated Press. There is no evidence the vulnerabilities have been exploited, but experts are alarmed. “This is negligent in the extreme,” said Eva Galperin, director of cybersecurity at the nonprofit Electronic Frontier Foundation, calling the finding “a tremendous security failure.”
The “boton de apoyo,” distributed by Colombia’s Office of National Protection, is a keychain-style fob. Its Chinese manufacturer markets it under the name EV-07 for tracking children, pets and the elderly. The device operates on a wireless network, has a built-in microphone and receiver, and can be mapped remotely with geo-location software. A button marked “SOS” calls for help when pressed.
John Chung, an official at the manufacturer, Eview, acknowledged that Rapid7, the security firm that conducted the audit, notified him of the flaws in December. In keeping with standard industry practice, Rapid7 waited at least two months before publicly disclosing the vulnerabilities to give the manufacturer time to address them. Chung told the AP that Eview was working to update the EV-07’s webserver software, where Rapid7 found flaws that could allow user and geolocation data to be altered.
Activists have good reason to be wary of public officials in Colombia, where murder rates for land and labor activists are among the world’s highest, and there is a legacy of state-sponsored crime. The DAS domestic intelligence agency, which provided bodyguards and armored vehicles to high-risk individuals prior to 2011, was disbanded after being caught spying on judges, journalists and activists. Five former DAS officials have been prosecuted for allegedly subjecting the journalist Claudia Duque and her daughter to psychological torture after she published articles implicating agency officials in the 1999 assassination of Jaime Garzón, a much-loved satirist.
Tanya O’Carroll of Amnesty International, which has been developing a different kind of “panic button” since 2014, said the Colombian model is fundamentally flawed. “In many cases, the government is the adversary,” she said. “How can those people who are the exact adversary be the ones that are best placed to respond?”…
In Mexico, the attorney general’s office has issued more than 200 emergency alert devices to journalists and rights activists since 2013. But there have been multiple complaints. One is unreliability where cell service is poor. Others are more serious: cases have been documented of police failing to respond, or answering but saying they are unable to help.
O’Carroll of Amnesty International said trials in 17 nations on three continents—including the Philippines, El Salvador and Uganda—show it’s best to alert trusted parties—friends, family or colleagues. Those people then reach out to trusted authorities. Amnesty’s app for Android phones is still in beta testing. It is activated with a hardware trigger—multiple taps of the power button. But there have been too many false alarms.
Sweden-based Civil Rights Defenders offers a 300-euro stand-alone panic button, first deployed in Russia’s North Caucasus region in 2013 and now used by more than 70 people in East Africa, Central Asia, the Balkans, Southeast Asia and Venezuela, said Peter Ohlm, a protection officer at the nonprofit. The organization’s Stockholm headquarters always gets notified, and social media is typically leveraged to spread word fast when an activist is in trouble.
Mads Gottlieb (twitter: @mads_gottlieb) wrote in Impakter about human rights, technology and partnerships, and stated that these technologies have the potential to tremendously facilitate the work of human rights defenders, whether they are used to document facts in investigations or as preventive measures to avoid violations. His main message in this short article is an appeal to the human rights sector at large to use technology more creatively, to make technology upgrades a top priority, and to engage with the technology sector in this difficult endeavor. The human rights sector will never be able to develop the newest technologies itself, but the opportunities that technology provides are something it needs to make use of now, in collaboration with the technology sector.
…Several cases show that human rights are under threat, and that it is difficult to investigate and gather the necessary facts in time to protect them. In the Philippines, Duterte ordered the police to shoot activists who demonstrated against extra-judicial killings. He later tried to reduce the funding of the Philippines' National Human Rights Commission to 1 USD a year. This threat followed a 15-month period of investigation into the killings, to which Duterte responded with the claim that the Commission was “useless and defended criminals' rights.”
Zimbabwe is another country with a difficult environment for human rights defenders. It is not surprising that few people speak out, since the few who dare to demonstrate or voice opposing political views disappear. A famous example is the activist and journalist from Occupy Africa Unity Square, who was allegedly beaten in 2014 and went missing in 2015, never to be found. His disappearance occurred after a period of public demonstrations against Mugabe's regime. Adding to the challenging conditions that call for better tools to defend human rights is the fact that many European countries are digitalising their public services. The newly introduced data platforms store and process sensitive information about the population, such as gender, ethnicity, sexual orientation, past health records, etc.: information that can easily be used for discriminatory purposes, whether intentionally or not.
Human rights defenders typically struggle to find adequate resources for their daily operations and as a result, investments in technology often come second. It is rare for human rights defenders to have anything beyond the minimum requirements, such as the internally-facing maintenance of an operational and secure internet connection, a case system, or a website. At the same time, global technology companies develop new technologies such as blockchain, artificial intelligence, and advanced data and surveillance techniques. These technologies have the potential to tremendously facilitate human rights defenders in their work, whether they are used to document facts about investigations, or as preventive measures to avoid violations. It is also important to facilitate and empower rights-holders in setting up and using networks and platforms that can help notify and verify violations quickly.
Collaboration is an excellent problem-solving approach, and human rights organizations are well aware of it. They engage in multiple partnerships with important actors. The concern is therefore not a lack of collaboration, but whether they adequately prioritize what is now the world's leading sector: technology (the top five on Forbes' list of most valuable brands are all technology companies: Apple, Google, Microsoft, Amazon, and Facebook). It is not up to the technology sector to engage with the human rights sector (whether they want to or not); rather, it should be a top priority for the human rights sector to reduce its technology gap, in the interest of human rights.
There are several partnership opportunities, and many are easy to get started with and do not require monetary investment. One opportunity is to partner with technical universities, which have the expertise to develop new types of secure, rapid monitoring systems. Blockchain embraces many of the principles that human rights work embraces, such as transparency, equality and accountability, and it makes rapid response times possible. So why not collaborate with universities? Another opportunity is collaborating with institutions that manage satellite images. Images provide very solid proof of changes in the landscape; examples include deforestation that threatens indigenous people, and the removal or burning of villages over a short period of time. A third opportunity is to enter into dialogue with the technology giants that develop these new technologies and, rather than asking for monetary donations, ask for input on how the human rights sector can effectively leverage technology.
According to a lawsuit announced on Tuesday, the Israeli spyware-maker NSO Group developed malware specifically to access WhatsApp communications. Photograph by Daniella Cheslow / AP
On May 13th, WhatsApp announced that it had discovered the vulnerability. In a statement, the company said that the spyware appeared to be the work of a commercial entity, but it did not identify the perpetrator by name. WhatsApp patched the vulnerability and, as part of its investigation, identified more than fourteen hundred phone numbers that the malware had targeted. In most cases, WhatsApp had no idea whom the numbers belonged to, because of the company’s privacy and data-retention rules. So WhatsApp gave the list of phone numbers to the Citizen Lab, a research laboratory at the University of Toronto’s Munk School of Global Affairs, where a team of cyber experts tried to determine whether any of the numbers belonged to civil-society members.
On Tuesday 29 October 2019, WhatsApp took the extraordinary step of announcing that it had traced the malware back to NSO Group, a spyware-maker based in Israel, and filed a lawsuit against the company—and also its parent, Q Cyber Technologies—in a Northern California court, accusing it of “unlawful access and use” of WhatsApp computers. According to the lawsuit, NSO Group developed the malware in order to access messages and other communications after they were decrypted on targeted devices, allowing intruders to bypass WhatsApp’s encryption.
NSO Group said in a statement in response to the lawsuit, “In the strongest possible terms, we dispute today’s allegations and will vigorously fight them. The sole purpose of NSO is to provide technology to licensed government intelligence and law enforcement agencies to help them fight terrorism and serious crime. Our technology is not designed or licensed for use against human rights activists and journalists.” In September, NSO Group announced the appointment of new, high-profile advisers, including Tom Ridge, the first U.S. Secretary of Homeland Security, in an effort to improve its global image.
In a statement to its users on Tuesday, WhatsApp said, “There must be strong legal oversight of cyber weapons like the one used in this attack to ensure they are not used to violate individual rights and freedoms people deserve wherever they are in the world. Human rights groups have documented a disturbing trend that such tools have been used to attack journalists and human rights defenders.”
John Scott-Railton, a senior researcher at the Citizen Lab, said, “It is the largest attack on civil society that we know of using this kind of vulnerability.”
Unfortunately, social media platforms are now a primary tool for coordinated, state-aligned actors to harass, threaten and undermine advocates. Although public shaming, death threats, defamation and disinformation are not unique to the online sphere, the nature of the internet has given them unprecedented potency. Bad actors can rapidly deploy their poisoned content at vast scale. Social media companies have only just begun to recognize, let alone respond to, the problem. Meanwhile, individuals targeted through such coordinated campaigns must painstakingly flag individual pieces of content, navigate opaque corporate structures and attempt to survive the fallout. To address this crisis, companies such as Facebook, Twitter and YouTube must dramatically increase their capacity and will to engage in transparent, context-driven content moderation.
For human rights defenders, the need is urgent. Since 2011, the ABA Center for Human Rights (CHR) has noted with concern the coordination of “traditional” judicial harassment of defenders by governments, such as frivolous criminal charges or arbitrary detention, with online campaigns of intimidation. State-aligned online disinformation campaigns against individual defenders often precede or coincide with official investigations and criminal charges.
While social media companies generally prohibit incitement of violence and hate speech on their platforms, CHR has had to engage in additional advocacy with social media companies requesting the removal of specific pieces of content or accounts that target defenders. This extra advocacy has been required even where the content clearly violates a social media company’s terms of service and despite initial flagging by a defender. The situation is even more difficult where the threatening content is only recognizable with sufficient local and political context. The various platforms all rely on artificial intelligence, to varying degrees, to identify speech that violates their respective community standards. Yet current iterations of artificial intelligence are often unable to adequately evaluate context and intent.
Online intimidation and smear campaigns against defenders often rely on existing societal fault lines to demean and discredit advocates. In Guatemala, CHR recently documented a coordinated social media campaign to defame, harass, intimidate and incite violence against human rights defenders. Several of the campaigns were linked to so-called “net centers,” where users were reportedly paid to amplify hateful content across platforms. Often, the campaigns relied on “coded” language that harks back to Guatemala’s civil war and the genocide of Mayan communities, calling indigenous leaders communists, terrorists and guerrillas.
These terms appear to have largely escaped social media company scrutiny, perhaps because none is a racist slur per se. And yet the proliferation of these online attacks, as well as the status of those putting out the content, is contributing to a worsening climate of violence, and of impunity for violence, against defenders, by alluding to terms once used to justify violence against indigenous communities. In 2018 alone, NPR reports, 26 indigenous defenders were murdered in Guatemala. In such a climate, the fear and intimidation felt by those targeted in such campaigns is not hyperbole but grounded in an understanding of how violence can be sparked in Guatemala.
In order to address such attacks, social media companies must adopt policies that allow them to designate defenders as temporarily protected groups in countries that are characterized by state-coordinated or state-condoned persecution of activists. This is in line with international law that prohibits states from targeting individuals for serious harm based on their political opinion. To increase their ability to recognize and respond to persecution and online violence against human rights defenders, companies must continue to invest in their context-driven content moderation capacity, including complementing algorithmic monitoring with human content moderators well-versed in local dialects and historical and political context.
Context-driven content moderation should also take into account factors that increase the risk that online behavior will contribute to offline violence by identifying high-risk countries. These factors include a history of intergroup conflict and an overall increase in the number of instances of intergroup violence in the past 12 months; a major national political election in the next 12 months; and significant polarization of political parties along religious, ethnic or racial lines. Countries where these and other risk factors are present call for proactive approaches to identify problematic accounts and coded threats against defenders and marginalized communities, such as those shown in Equality Labs’ “Facebook India” report.
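The risk factors listed above lend themselves to a simple screening heuristic. The sketch below is purely illustrative: the factor names, weights and threshold are hypothetical assumptions for the sake of example, not drawn from any platform's actual policy or from Equality Labs' report.

```python
# Illustrative sketch: score a country's offline-violence risk from the
# factors named in the text. Weights and threshold are hypothetical.

RISK_FACTORS = {
    "intergroup_conflict_history": 2,  # history of intergroup conflict
    "recent_intergroup_violence": 2,   # more incidents in the past 12 months
    "election_within_12_months": 1,    # major national election upcoming
    "polarized_parties": 1,            # religious/ethnic/racial polarization
}

HIGH_RISK_THRESHOLD = 3  # assumed cutoff for proactive moderation


def risk_score(country_profile):
    """Sum the weights of the risk factors present in a country profile."""
    return sum(
        weight
        for factor, weight in RISK_FACTORS.items()
        if country_profile.get(factor)
    )


def is_high_risk(country_profile):
    """True if the profile warrants proactive identification of coded threats."""
    return risk_score(country_profile) >= HIGH_RISK_THRESHOLD


profile = {
    "intergroup_conflict_history": True,
    "recent_intergroup_violence": True,
    "election_within_12_months": False,
    "polarized_parties": True,
}
print(risk_score(profile), is_high_risk(profile))  # 5 True
```

In practice such a screen would only triage: countries flagged high-risk would then receive the human, context-aware review the surrounding text calls for.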
Companies should identify, monitor and be prepared to deplatform key accounts that are consistently putting out denigrating language and targeting human rights defenders. This must go hand in hand with the greater efforts that companies are finally beginning to take to identify coordinated, state-aligned misinformation campaigns. Focusing on the networks of users who abuse the platform, instead of looking solely at how the online abuse affects defenders’ rights online, will also enable companies to more quickly evaluate whether the status of the speaker increases the likelihood that others will take up any implicit call to violence or will be unduly influenced by disinformation.
This abuser-focused approach will also help to decrease the burden on defenders to find and flag individual pieces of content and accounts as problematic. Many of the human rights defenders with whom CHR works are giving up on flagging, a phenomenon we refer to as flagging fatigue. Many have become fatalistic about the level of online harassment they face. This is particularly alarming as advocates targeted online may develop skins so thick that they are no longer able to assess when their actual risk of physical violence has increased.
Finally, it is vital that social media companies pursue, and civil society demand, transparency in content moderation policy and decision-making, in line with the Santa Clara Principles. Put forward in 2018 by a group of academic experts, organizations and advocates committed to freedom of expression online, the principles are meant to guide companies engaged in content moderation and ensure that the enforcement of their policies is “fair, unbiased, proportional and respectful of users’ rights.” In particular, the principles call upon companies to publicly report on the number of posts and accounts taken down or suspended on a regular basis, as well as to provide adequate notice and meaningful appeal to affected users.
CHR routinely supports human rights defenders facing frivolous criminal charges related to their human rights advocacy online or whose accounts and documentation have been taken down absent any clear justification. This contributes to a growing distrust of the companies among the human rights community as apparently arbitrary decisions about content moderation are leaving advocates both over- and under-protected online.
As the U.N. special rapporteur on freedom of expression explained in his 2018 report, content moderation processes must include the ability to appeal the removal, or refusal to remove, content or accounts. Lack of transparency heightens the risk that calls to address the persecution of human rights defenders online will be subverted into justifications for censorship and restrictions on speech that is protected under international human rights law.
A common response when discussing the feasibility of context-driven content moderation is to compare it to reviewing all the grains of sand on a beach. But human rights defenders are not asking for the impossible. We are merely pointing out that some of that sand is radioactive—it glows in the dark, it is lethal, and there is a moral and legal obligation upon those that profit from the beach to deal with it.
Ginna Anderson, senior counsel, joined ABA CHR in 2012. She is responsible for supporting the center’s work to advance the rights of human rights defenders and marginalized communities, including lawyers and journalists at risk. She is an expert in health and human rights, media freedom, freedom of expression and fair trial rights. As deputy director of the Justice Defenders Program since 2013, she has managed strategic litigation, fact-finding missions and advocacy campaigns on behalf of human rights defenders facing retaliation for their work in every region of the world.