In preparation for what may be the final days of the trial of Ola Bini, an open source and free software developer arrested shortly after Julian Assange’s ejection from Ecuador’s London Embassy, civil society organizations observing the case have issued a report citing due process violations, technical weaknesses, political pressures, and risks that this criminal prosecution entails for the protection of digital rights. Bini was initially detained three years ago and previous stages of his prosecution had significant delays that were criticized by the Office of the Inter-American Commission on Human Rights (IACHR) Special Rapporteur for Freedom of Expression. An online press conference is scheduled for May 11th, with EFF and other organizations set to speak on the violations in Bini’s prosecution and the danger this case represents. The trial hearing is set for May 16-20, and will most likely conclude next week. If convicted, Bini’s defense can still appeal the decision.
What’s Happened So Far
The first part of the trial against Ola Bini took place in January. In this first stage of testimony and expert evidence, the court repeatedly called attention to various irregularities and violations of due process by the prosecutor in charge. Human rights groups observing the hearing emphasized the flimsy evidence provided against Bini and serious flaws in how the seizure of his devices took place. Bini’s defense stressed that the raid happened without him present, and that his seized encrypted devices were examined without following procedural rules and safeguards.
These are not the only problems with the case. Over two years ago, EFF visited Ecuador on a fact-finding mission after Bini’s initial arrest and detention. What we found was a case deeply entangled with the politics surrounding its outcome and fraught with due process violations. EFF’s conclusion from our Ecuador mission was that political actors, including the prosecution, had recklessly tied their reputations to a case built on controversial evidence, or no real evidence at all.
Ola Bini is known globally as someone who builds secure tools and contributes to free software projects. Bini’s team at ThoughtWorks contributed to Certbot, the EFF-managed tool that has provided strong encryption for millions of websites around the world, and most recently, Bini co-founded a non-profit organization devoted to creating user-friendly security tools.
What Bini is not known for, however, is conducting the kind of security research that could be mistaken for an “assault on the integrity of computer systems,” the crime for which he was initially investigated, or “unauthorized access to a computer system,” the crime for which he is now accused (after prosecutors changed the charges). In 2019, Bini’s lawyers counted 65 violations of due process, and journalists told us at the time that no one was able to provide them with concrete descriptions of what he had done. Bini’s initial imprisonment ended after a court ruled his detention illegal, but the investigation continued. The judge was later “separated” from the case in a ruling that acknowledged the wrongful successive pre-trial suspensions and the violation of due process.
One supposed piece of evidence against Bini was a photo of a screenshot, purportedly taken by Bini himself and sent to a colleague, showing the telnet login screen of a router. The image is consistent with someone who connects to an open telnet service, receives a warning not to log on without authorization, and does not proceed, respecting the warning. As for the portion of a message exchange attributed to Bini and a colleague, leaked with the photo, it shows their concern that the router was insecurely open to telnet access from the wider Internet, with no firewall.
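For readers unfamiliar with what such a screenshot depicts, the interaction described above can be sketched in a few lines of Python. This is an illustrative reconstruction only, not the actual evidence: it uses a hypothetical local server to stand in for the router, and simply shows that connecting to an open telnet port and reading the banner requires no credentials and no intrusion.

```python
import socket
import threading

# Hypothetical banner, standing in for what a router's telnet service shows.
BANNER = b"WARNING: Unauthorized access is prohibited.\r\nlogin: "

def dummy_telnet_server(srv: socket.socket) -> None:
    # Stand-in for a router exposing telnet: accept one connection,
    # send the login banner, and hang up.
    conn, _ = srv.accept()
    conn.sendall(BANNER)
    conn.close()

def read_banner(host: str, port: int) -> str:
    # Connect, read whatever the service volunteers, and disconnect
    # WITHOUT sending credentials -- the behavior consistent with the
    # screenshot described above.
    with socket.create_connection((host, port), timeout=5) as s:
        data = s.recv(1024)
    return data.decode(errors="replace")

# Demo against the local stand-in server.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=dummy_telnet_server, args=(srv,), daemon=True).start()
banner = read_banner("127.0.0.1", port)
print(banner)
```

Seeing such a banner, and disconnecting, is ordinary behavior for anyone who stumbles on an exposed service; it is the act of authenticating or bypassing the warning that would constitute access.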
Between the trial hearing in January and its resumption in May, Ecuador’s Prosecutor’s Office revived an investigation against Fabián Hurtado, the technical expert called by Ola Bini’s defense to refute the image of the telnet session and who is expected to testify at the trial hearing.
On January 10, 2022, the Prosecutor’s Office filed charges for procedural fraud against Hurtado. There was a conspicuous gap between this charge and the last investigative proceeding by prosecutors in the case against Hurtado, when police raided his home almost 20 months before, claiming that he had “incorporated misleading information in his résumé”. This raid was violent and irregular, and considered by Amnesty International as an attempt to intimidate Ola Bini’s defense. One of the pieces of evidence against Hurtado is the document by which Bini’s lawyer, Dr. Carlos Soria, included Hurtado’s technical report in Bini’s case file.
Hurtado’s indictment hearing was held on February 9, 2022. The judge opened a 90-day period of investigation which is about to end. As part of this investigation, the prosecutor’s office and the police raided the offices of Ola Bini’s non-profit organization in a new episode of due process violations, according to media reports.
Civil Society Report and Recommendations
Today’s report, by the organizations gathered in the Observation Mission of Bini’s case, is critical reading for all participants and for others concerned about digital rights around the world. There is still time for the court to recognize and correct the irregularities and technical weaknesses in the case. The report sets out key points that the judicial authorities in charge of examining the case should take into consideration.
In particular, the report notes, the accusations have failed to demonstrate a consistent case against Ola Bini. Irregularities in court procedures and police action have affected both the speed of the procedure and due process of law in general. In addition, accusations against Bini show little technical knowledge, and could lead to the criminalization of people carrying out legitimate activities protected by international human rights standards. This case may lead to the further persecution of the so-called “infosec community” in Latin America, which is made up primarily of security activists who find vulnerabilities in computer systems, carrying out work that has a positive impact on society in general. The attempt to criminalize Ola Bini already shows a hostile scenario for these activists and, consequently, for the safeguard of our rights in the digital environment.
Moreover, these activists must be guaranteed the right to use the tools necessary for their work—for example, the importance of online anonymity must be respected as a premise for the exercise of several human rights, such as privacy and freedom of expression. This right is protected by international Human Rights standards, which recognize the use of encryption (including tools such as Tor) as fundamental for the exercise of these rights.
These researchers and activists protect the computer systems on which we all depend, and protect the people who have incorporated electronic devices into their daily lives, such as human rights defenders, journalists and activists, among many other key actors for democratic vitality. Ola Bini, and others who work in the field, must be protected—not persecuted.
On 28 February 2022, EngageMedia posted an anthology of films highlighting Myanmar’s long struggle for democracy.
This movie playlist is from Cinemata, a platform for social and environmental films about the Asia-Pacific. It is a project of EngageMedia, a nonprofit that promotes digital rights, open and secure technology, and social issue documentary. This is edited and republished as part of a content-sharing agreement with Global Voices.
EngageMedia has curated a playlist of films that shows the extent of rights abuses in the country, as well as courageous forms of resistance against the continuing infringement on people’s rights. Marking the one-year anniversary of the coup, “A Year of Resistance” turns the spotlight on the long-standing struggle of the people of Myanmar for democracy.
This film collection is curated in solidarity with the people of Myanmar. In bringing the stories of unrest and atrocities to light, these films hope to inspire action and advocacy towards justice and freedom.
“Burma Rebel Artist: Moe Thandar Aung”
After the Myanmar military coup in February 2021, Moe Thandar Aung, a graphic designer whose work touched on themes of feminism, began making protest art in support of calls to defend and uphold democracy in the country.
“Black out”
In the aftermath of the 2021 Myanmar coup, the country is faced with state-mandated internet and information blackouts. Hnin, a single mother, and Mon, her daughter and an anti-coup protester, are among those who can no longer access the internet at home. In their pursuit of news on what is happening on the ground, they find only fabricated stories and unreliable information.
In the six months following the junta’s coup, at least 950 civilians were violently killed. A total of 90 children under the age of 18 were murdered, while at least 48 children were arrested.
An independent female humanitarian activist from Shan State describes the trauma she experiences in working in an environment pervaded by despair but also her commitment to helping those forced to flee armed conflict. This film was directed by Sai Naw Kham, Mon Mon Thet Khin, and Soe Yu Maw.
In this video, Myanmar activists talk about the digital rights and digital security challenges they face, arguing that freedom of expression, freedom to organize, and freedom of association should be protected as core elements of digital rights.
This song was made by 24 young people from six different corners of Myanmar who participated in Turning Tables Myanmar’s yearlong social cohesion project, “The Voice of the Youth.” Together they produced and recorded the song “Wake Up,” which calls for democracy, youth participation, and sustainable development to replace corruption and injustice.
This 2009 film shows powerful footage from the Saffron Revolution, a series of economic and political protests led by students and Buddhist monks that swept Myanmar from August to September 2007. It also highlights the continuing need for international solidarity amongst Southeast Asians in times of political upheavals as in the current situation in Myanmar.
Their key point is worth noting: states in the Gulf region and neighbouring countries have exploited the opportunity to align their cybercrime laws with European standards to double down on laws restricting legitimate online expression, but without any of the judicial safeguards that exist in Europe.
Several women take part in a protest, using a hashtag, against Saudi Crown Prince Mohamed bin Salman’s visit to the country in Tunis, Tunisia, in November 2018. EFE / Stringer
Governments in every region of the world are criminalizing human rights activism. They do it by prosecuting protest organizers, journalists, internet activists, and leaders of civil society organizations under laws that make it a crime to insult public figures, to disseminate information that damages “public order” or “national security,” or to spread “fake news.”
In the Gulf region and neighbouring countries, oppressive governments have further weaponized their legal arsenal by adopting anti-cybercrime laws that apply these overly broad and ill-defined offline restrictions to online communications.
In an age when online communications are ubiquitous, and in societies where free press is crippled, laws that criminalize the promotion of human rights on social media networks and other online platforms undermine the ability to publicize and discuss human rights violations and threaten the foundation of any human rights movement.
In May 2018, for example, the Saudi government carried out mass arrests of women advocating online for women’s right to drive. Charged under the country’s cybercrime law, including article six, which prohibits online communication “impinging on public order, religious values, public morals, and privacy,” these human rights activists were detained, tortured, and given multi-year sentences for the “crime” of promoting women’s rights.
There is certainly a necessity to address the prevalence and impact of cybercrimes but without criminalizing people who speak out for human rights.
European countries and the United Nations (UN) have encouraged states to adopt a standard approach to addressing crimes committed with online technologies ranging from wire fraud to financing terrorist groups. The Council of Europe issued a 2001 regional convention on cybercrime, to which any state may accede, and the UN is promoting a cybercrime treaty.
Common standards can prevent the abuse of online technologies by enabling the sharing of online evidence and promoting accountability since the evidence of online crimes often resides on servers outside the country where the harm occurred or where the wrongdoers reside.
The problem for human rights defenders in the Gulf region and neighbouring countries is that states have exploited the opportunity to align their cybercrime laws with European standards to double down on laws restricting legitimate online expression.
European countries have robust human rights oversight from the European Court of Human Rights, which ensures that limitations on freedom of expression online meet stringent international standards. There is no comparable human rights oversight for the Gulf region. Without adequate international judicial review, governments can successfully exploit international processes to strengthen their ability to stifle online expression.
The regional model cybercrime law, drafted by the United Arab Emirates and adopted by the Arab League in 2004, follows international guidance. However, it incorporates a regional twist: provisions that criminalize the online dissemination of content “contrary to the public order and morals,” the facilitation of assistance to terrorist groups, and the disclosure of confidential government information related to national security or the economy.
UN experts reviewed the UAE law and gave it a seal of approval, noting it complied with the European convention, ignoring the fact that UN human rights experts have documented repeatedly that governments use such restrictions to crack down on dissent. A UN-sponsored global cybercrime study, published in 2013, similarly soft-pedaled the threat of criminalizing online dissent by noting that governments had leeway to protect local values. Such protection does not extend to speaking up for universal rights like equality and democracy.
In fact, the universal right to freedom of expression protects online content, and limitations must meet international standards of legality, legitimacy, necessity, and proportionality. In our recent report on the use of anti-cybercrime legislation throughout the Gulf region and neighbouring countries, we found that between May 2018 and October 2020 there were 225 credible incidents of online freedom of expression violations against activists and journalists in ten countries: Bahrain, Iran, Iraq, Jordan, Kuwait, Oman, Qatar, Saudi Arabia, Syria, and the UAE. Each of these countries has adopted an anti-cybercrime law except Iraq, where lawmakers’ drafts of proposed legislation have been met with stiff opposition from domestic and international human rights groups.
The international community needs to increase pressure on the Gulf region and neighboring countries to comply with their international obligations to protect freedom of expression off and online. Turning away from the clear evidence that oppressive governments are expanding the reach of criminal law to stifle online human rights activism undermines legitimate international efforts to address cybercrime.
How can we trust the UN to safeguard the voices advocating online for human rights and democracy in a region that so desperately needs both, if it fails to insist human rights safeguards be written into the regional and national cybercrime laws it champions?
In the age of the internet, online human rights activism needs to be supported—and protected—as a vital part of the cybercommunications ecosystem. In the Gulf region, defenders of human rights pay an untenable price for their work, risking arrest, torture, and even death. It is time to reverse the trend while there are still defenders left.
One of the women human rights defenders in Saudi Arabia said before she was imprisoned, “If the repressive authorities here put behind bars every peaceful voice calling for respect for public freedoms and the achievement of social justice in the Gulf region and neighboring countries, only terrorists will remain out.” History has proven the truth of her words, as most of the individuals who led terrorist groups with a global reach have come from this region and have caused, and still cause, chronic problems for the whole world.
The important lesson that we must learn here is that repressive governments foster a destructive dynamic of expansion and intensification of human rights violations. Repressive governments cooperate with and look to one another for strategies and tactics. Further troubling is that what we see in the Gulf region is enabled by the essentially unconditional support provided by some Western governments, especially the US and UK. This toxic template of Western support to governments that oppress their own people constitutes a threat to world peace and prosperity and must be addressed.
On 3 July 2021, Forensic Architecture, supported by Amnesty International and the Citizen Lab, launched a new interactive online platform that maps for the first time the global spread of the notorious spyware Pegasus, made by cyber-surveillance company NSO Group.
‘Digital Violence: How the NSO Group Enables State Terror’ documents digital attacks against human rights defenders around the world, and shows the connections between the ‘digital violence’ of Pegasus spyware and the real-world harms lawyers, activists, and other civil society figures face. “NSO Group is the worst of the worst in selling digital burglary tools to players who they are fully aware actively and aggressively violate the human rights of dissidents, opposition figures, and journalists,” said Edward Snowden, President of Freedom of the Press Foundation.
NSO Group is a major player in the shadowy surveillance industry. The company’s Pegasus spyware has been used in some of the most insidious digital attacks on human rights defenders. When Pegasus is surreptitiously installed on a person’s phone, an attacker has complete access to a phone’s messages, emails, media, microphone, camera, calls and contacts. For my earlier posts on NSO see: https://humanrightsdefenders.blog/tag/nso-group/
“The investigation reveals the extent to which the digital domain we inhabit has become the new frontier of human rights violations, a site of state surveillance and intimidation that enables physical violations in real space,” said Shourideh C. Molavi, Forensic Architecture’s Researcher-in-Charge.
Edward Snowden narrates an accompanying video series that tells the stories of human rights activists and journalists targeted by Pegasus. The interactive platform also includes sound design by composer Brian Eno. A film about the project by award-winning director Laura Poitras will premiere at the 2021 Cannes Film Festival later this month.
The online platform is one of the most comprehensive databases on NSO-related activities, with information about export licenses, alleged purchases, digital infections, and the physical targeting of activists after being targeted with spyware, including intimidation, harassment, and detention. The platform also sheds light on the complex corporate structure of NSO Group, based on new research by Amnesty International and partners.
“For years, NSO Group has shrouded its operations in secrecy and profited from working in the shadows. This platform brings to light the important connections between the use of its spyware and the devastating human rights abuses inflicted upon activists and civil society,” said Danna Ingleton, Deputy Director of Amnesty Tech.
Amnesty International’s Security Lab and Citizen Lab have repeatedly exposed the use of NSO Group’s Pegasus spyware to target hundreds of human rights defenders across the globe. Amnesty International is calling on NSO Group to urgently take steps to ensure that it does not cause or contribute to human rights abuses, and to respond when they do occur. The cyber-surveillance company must carry out adequate human rights due diligence and take steps to ensure that human rights defenders and journalists do not continue to become targets of unlawful surveillance.
In October 2019, Amnesty International revealed that Moroccan academic and activist, Maati Monjib’s phone had been infected with Pegasus spyware. He continues to face harassment by the Moroccan authorities for his human rights work. In December 2020, Maati Monjib was arbitrarily detained before being released on parole on 23 March 2021.
Maati Monjib tells his story in one of the short films, and spoke of the personal toll of the surveillance: “The authorities knew everything I said. I was in danger. Surveillance is very harming for the psychological wellbeing of the victim. My life has changed a lot because of all these pressures.”
Amnesty International is calling for all charges against Maati to be dropped, and the harassment against him and his family by the Moroccan authorities to end.
Ron Deibert is director of the Citizen Lab at the University of Toronto’s Munk School of Global Affairs. (Courtesy of Ron Deibert)
On 25 May 2021 Nathaniel Basen for TVO.org spoke with professor Ron Deibert about internet censorship, espionage, and getting threats from authoritarian regimes. It is a long but rich interview: In 2001, Ron Deibert, a professor at the University of Toronto, founded Citizen Lab to help understand and track the spread of digital human-rights abuses around the world.
In the 20 years since, the interdisciplinary lab has made headlines for protecting journalists and human-rights defenders from digital attacks; one of its researchers helped identify members of the group that attacked the United States Capitol earlier this year.
TVO.org: Let’s start at the beginning. How and why did Citizen Lab start, and what did it look like at the time?
Ron Deibert: Back in the late 1990s, I was doing what I would consider to be conventional academic research — the lone professor studying a topic. A lot of desktop research. A student taking a course of mine proposed doing a paper in which he would explore censorship in China. This was a new topic back then — there was not really any evidence that China was censoring the internet — but people assumed it would, and there was a lot of uncertainty about what was going on there.
He was kind of a self-taught hacker, and he put together this research paper where he connected to computers in China using some proxy servers and started comparing the results he got to what he could see here in Canada, doing it very systematically. It opened my eyes to the ways in which methods from computer science and engineering science — technical interrogation tools and techniques — could be used to surface real primary evidence about what’s going on beneath the surface of the internet around information control. Especially what governments, and also private companies, are doing that isn’t in the public domain. No one was really doing that at the time, and a lightbulb went on, where I realized that this is a really powerful way of surfacing primary evidence and data in a way that really no one else was doing.
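The core of the method Deibert describes, fetching the same resource from two vantage points and comparing what comes back, can be sketched roughly as follows. This is a simplified illustration, not Citizen Lab's actual tooling: the responses below are canned stand-ins for a direct fetch and a fetch routed through a proxy inside a censoring network, and real measurement platforms do far more (control measurements, DNS consistency checks, and so on).

```python
import hashlib

def summarize(body: bytes) -> dict:
    # Reduce a fetched page to features that are cheap to compare
    # across vantage points: byte length and a content hash.
    return {"length": len(body), "sha256": hashlib.sha256(body).hexdigest()}

def looks_censored(local: dict, remote: dict, length_tolerance: float = 0.2) -> bool:
    # Identical hashes mean the same page was served in both places.
    if local["sha256"] == remote["sha256"]:
        return False
    # Very different sizes often mean a short blockpage replaced the content.
    lo, hi = sorted((local["length"], remote["length"]))
    return hi == 0 or (hi - lo) / hi > length_tolerance

# Canned responses: a full article fetched directly (e.g. from Canada),
# versus what a proxy inside the censoring network returned.
direct = summarize(b"<html>full news article " + b"x" * 5000 + b"</html>")
via_proxy = summarize(b"<html>This site is blocked.</html>")
print(looks_censored(direct, via_proxy))  # the large size gap flags a likely blockpage
```

Comparing systematically across many URLs and many vantage points is what turns this simple diff into primary evidence about what a network is filtering.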
So I put together a prospectus for a lab that would be interdisciplinary, that would bring together people who have these skills to work systematically on uncovering information-control practices and look at surveillance and censorship and information warfare, from the standpoint of risks to citizens from a human-rights perspective. I was very fortunate at the time to get support from the Ford Foundation — I got a grant from them in 2001 — and I put the proposal together for the Citizen Lab from that.
TVO.org: And at the time you were in a pretty small basement lab.
Deibert: Actually, it was my office in political science where it all got started. When I got the grant, the Munk Centre was just being established, and the building at Devonshire [at the University of Toronto] was under construction. I went over to that building and scoped out what I thought would be a room that no one else would want, to increase my chance of getting approval. I found this space, and I went to Janice Stein, the director, and said, “Hey, I’ve got this grant. I’ve got this idea. I need some space.” And she said, “Okay, you can have it.”
So she supported the idea and took a risk. Space is a very valuable asset on campus. And even though it sounds less glamorous, we were really happy to have that room.
After 10 years, we moved to the new Munk building, the observatory, where we’re located now, and that was really great, because we needed more space. Security is not perfect — where we are there are lots of problems — but it is much better than it was in the old building, where people would just wander in and could easily locate us. Now we’re wrapped behind several layers of access control…
TVO.org: Let’s talk a little bit about your process. How does Citizen Lab decide what to look into next?
Deibert: It’s a combination of factors. First and foremost, we are looking at the topic, at the domain, broadly speaking, which for us is global in scope. We don’t have a particular regional focus. We’re looking at risks to human rights that arise out of information technology: that’s the broadest possible definition of what we do.
That also limits our selection of cases that we want to examine. We assume that, however problematic cybersecurity is for big banks or government, they have resources — they can go hire a private company. But journalists, human-rights defenders, people living in the global south who are human-rights defenders and are advocating for policy change, they really lack capacity. So we put our effort into identifying cases that present the highest risk to human rights and, ideally, affect the most vulnerable parts of the population.
We divide our work systematically. So there are certain teams that we organize around, though there’s a bit of overlap. It’s fluid, but we have some teams that are more interested in applying network-measurement techniques to uncovering internet censorship, let’s say, and that’s probably the area where we’ve been doing the most work for the longest time. Then there’s what we call the targeted-threats group, which is really the most serious stuff around espionage, and it certainly has the highest risk and has gotten us in the crosshairs of some bad actors, to such an extent that we’ve now become a target. We also apply non-technical methods in an interdisciplinary way — we have people who are trained in law and policy. So we’ve done a lot of work around legislation, analyzing national security laws and practices in Canada.
I would say how things are chosen depends on the opportunities that come up. We may hear about something, some preliminary evidence, perhaps a journalist tips us off or a victim comes forward. Or the team itself decides, hey, this is something we should look into. A good example of that is Zoom. We knew about Zoom: it was a kind of obscure business networking-communications platform until the pandemic hit. Suddenly, everyone was on Zoom. So our researchers got together and said, “Hey, we better take a look at this” and indeed uncovered some highly problematic security and privacy issues.
TVO.org: Your work with Zoom is a good example of getting immediate results from your work. If I’m correct, after a public outcry, Zoom cleaned up a lot of what you found. How does that feel to have an immediate impact on the world in that way?
Deibert: It’s actually super-rewarding in a number of ways. First of all, there’s the gratification to get the message out. Ultimately, we see ourselves as a university-based watchdog group, so if you can publish something and the next day everybody’s reading about it because it’s on the front page of the New York Times? That’s phenomenal. We’ve been actually really fortunate to have high-profile coverage for our research. I think we’ve had, like, close to 30 front-page stories in the New York Times, the Washington Post, other global media, the Financial Times, about different reports of ours over the last 20 years.
Going further, ultimately, we don’t just want to get attention for what we’re doing — we want to see some change. So there have been so many cases now where we’ve seen consequences, actions taken, policy changes, or advocacy campaigns started as a result of the work that we’ve done.
Probably the biggest one was back in 2016, when we investigated a targeted espionage attack against a human-rights defender in the United Arab Emirates. He shared with us an SMS message that was tainted with malware the UAE government was using to try to hack his phone, and when we reverse-engineered it, that malware infected our own device, our own iPhone. We realized that it was so sophisticated that it exploited what were then three software flaws in the Apple operating system that even Apple itself didn’t know about. We did a responsible disclosure to them and, within two weeks, they pushed out a patch that directly affected the security of more than 1 billion people. So, to be able to say, “Hey, we were responsible for that” is, I think, quite an accomplishment.
TVO.org: On the flip side, there are people that don’t like the work you do. What has it been like for you to become a target? I can’t imagine when you started this thing that you pictured yourself coming under threat.
Deibert: Well, first of all, you’re right. I grew up studying world politics as something out there, with me as a spectator. There were a couple of instances before this, but, really, it was when we published the GhostNet report in 2009, the first public, evidence-based report on cyber espionage, that things changed. That investigation involved the hacking of the office of His Holiness the Dalai Lama, and we uncovered a massive Chinese espionage operation.
It suddenly dawned on me, okay, we’ve gone from kind of just observing and recording to becoming a factor, because very quickly thereafter, we had all sorts of inquiries and veiled threats and concerns about physical security. From that point on, from 2009 to today, those threats have only amplified. The worst is probably when we were targeted by Black Cube, the same private-intelligence firm made up of ex-Mossad agents that notoriously went after the accusers of Harvey Weinstein. Now, that’s really frightening to be in their crosshairs. We ended up actually exposing that operation, but to know that something like that is going on, frankly, is very disturbing. It really forces you to change your behaviour, think about practical issues: when you’re travelling, hotels, getting into elevators, who’s accessing the same building as you.
At the same time, though, I think it’s a mark of success. If we’re not successful, those people wouldn’t care. It’s just something you have to factor into your risk calculation and take all the precautions, and we’re most concerned about the risks to the subjects of our research. Frankly, we go to extraordinary lengths to protect the security in terms of the data we handle, how we interact with them and interview them. But, yeah, it’s just constant. Actually, every day there’s something, ranging from people who, unfortunately, maybe are mentally disturbed, and they read about us and want to visit us, all the way to, you know, the world’s worst authoritarian regimes that are trying to threaten us.
TVO.org: A lot of this work is global in nature, but some Ontarians might be surprised to know a lot of it is quite local. I’m thinking about your work with internet-filtering technology and Waterloo-based Netsweeper. What makes filtering technology so important, and what was Netsweeper up to?
Deibert: As the internet evolves, there are all sorts of reasons why people want to control access to certain content online — beginning, I would say, with schools and libraries. There are legitimate concerns among parents and teachers that children have access to pornography or other types of content. Service providers like Netsweeper fill the market niche, providing filtering technology to those clients.
But, very quickly, there grew a need among governments — national-level internet censorship. In the beginning, like I talked about with the Chinese, it was very rare in the 1990s or 2000s. I could count on one hand the number of governments that were doing this sort of thing. Now, it’s routine, and it’s big business. So with a company like Netsweeper, for us, it was, frankly, a no-brainer to zero in on it, and not even because they’re based in our own backyard. There’s certainly a motivating factor there because we’re Canadians, and we want to make sure that, as best we can, we identify businesses operating out of Canada to see if they’re in compliance with Canadian law or Canadian values. Here, we had a company that seemed to be not just kind of stumbling into selling internet-censorship services to some of the world’s worst violators of human rights, but actively courting them.
They were showing up all over the world, especially in the Middle East. The Middle East is where Netsweeper really profited from selling internet-censorship services to governments that routinely violate human rights and block access to content that would be considered protected legally here in Canada. And they were also doing this in a non-transparent way.
This is not something they openly advertised, and yet we knew, from our research and technical investigation, we could identify basically unquestionable proof that their technology was being used to filter access to content that would be legally protected here in Canada, in places like Bahrain and Yemen and in the Gulf.
So we did a report about Netsweeper’s technology in Yemen, and at this time, the main telco, YemenNet, was controlled by Houthi rebels, and of course there’s an ongoing civil war, which at that time was really quite intense. We simply documented that Netsweeper’s technology was being used to actually block the entire Israeli top-level domain — the only time we’d ever seen that in the world, with the exception of Iran.
We published this report, and we mentioned in the commentary around it that, in providing services to one participant in an armed conflict, who is censoring information, including information related to international news, they’re effectively inserting themselves in an armed conflict, and it raises all sorts of ethical, moral, and potentially even legal issues. Netsweeper sued me and the University of Toronto for defamation for over $3 million. Of course, we thought that was entirely baseless, and six months later, they simply withdrew the suit.
Coincidentally, their suit came shortly before the Ontario government passed anti-SLAPP legislation to prevent lawsuits that chill free expression, which, in our opinion, is very much what this was, because as we were going through the litigation, we couldn’t report on Netsweeper. After the lawsuit was dropped, we then published several subsequent reports on Netsweeper.
TVO.org: In your 20 years, what is the work you’re most proud of?
Deibert: What I’m most proud of is the staff. I’d say a skill that I have is, I think I would make a good NHL scout or a band manager. I have the ability, for what it’s worth, to identify talented people and give them the support they need. So there’s not a particular report that I’m proud of; I’m most proud of the people who work at the lab. I’m so fortunate to be surrounded by these extremely talented, ethical, dedicated people, most of whom have been with me for over 10 years. It’s rare to have that in a small university. And that’s what I’m most proud of.
TVO.org: The lab itself, as we talked about a little bit, is somewhat unique: you’re working outside of government or corporations and working in the interest of human rights. Others around the world have taken note of your model. Do you hope to export it?
Deibert: It’s beginning to be surprising to me that there aren’t more Citizen Lab–like organizations at other universities. To me, this is a field with such endless opportunity. There’s so much unfortunate malfeasance going on in the digital world.
And, yet, you have these extremely powerful methods and techniques, as we’ve demonstrated, that, by way of analogy, act like an X-ray on the abuse of power. That’s the way I think about it. It’s astonishing.
Sometimes I sit back and shake my head. A lot of the stuff we don’t even publish. It’s remarkable what you can see when you use these very precise, careful methods to uncover and track abuses of power. Why haven’t other university professors jumped on this and tried to mimic it? I don’t really know. I suppose there’s no one answer. There are risks involved with it, and it’s actually not easy to cross disciplinary boundaries.
So I think that we’re helping to build the field, at least I hope, and you’re right that there are a few other places where I’m seeing either professors or, in some cases, human-rights organizations, attempting to build something like this. That is fantastic. That’s really where my effort and the next phase of my career is, around really field-building by promoting that model and hoping that others build up centres like the Citizen Lab at other universities, while also ensuring the sustainability of the lab.
This is a bit “inside university,” but the reality is, as the only professor in the lab, I’m the weakest link. So if something happens to me, the lab would really fall apart. Not because I’m the wizard directing everything — purely because I’m the responsible principal investigator for the grant, and you need that at a university. What I hope to do is ensure the sustainability of the lab outside of me, and that means recruiting other professors to the lab. We’re actively fundraising to do that and to try to get more tenure-track positions connected to the lab so that it can continue once I move on.
TVO.org: And what will the next 20 years hold for the lab itself?
Deibert: Hopefully, we’ll be able to continue. We know we have the support from the University of Toronto; they’ve been incredible in a number of ways. We live in a time when big university bureaucracies are criticized, sometimes rightfully so — I’ve been critical of my own university in various areas. But one thing I can say: they have been so supportive of the work we do in a variety of real, practical ways, including legal support.
I just want the lab to not be something that is tied to one person. I want it to continue and to duplicate what we do globally. If we had 25 Citizen Labs sprinkled around the planet, it would be better for human rights overall, because there would at least be another protective layer, if you will, of dogged researchers who aren’t afraid to uncover abuses of power, no matter where they are.
The Hill reported on 26 May 2021 that a coalition of more than 30 human rights and digital privacy rights groups has called on Google to abandon its plans to establish a Google Cloud region in Saudi Arabia over concerns about human rights violations.
The groups, which include Amnesty International, Human Rights Watch and PEN America, wrote in their letter that Saudi Arabia’s record of clamping down on public dissent, and a justice system that “flagrantly violates due process,” made it unsafe for Google to set up a “cloud region” in the kingdom.
“While Google publishes how it handles government requests for customer information and reports when requests are made through formal channels, there are numerous potential human rights risks of establishing a Google Cloud region in Saudi Arabia that include violations of the rights to privacy, freedom of expression and association, non-discrimination, and due process,” the groups said. See also: https://humanrightsdefenders.blog/2019/03/08/saudi-arabia-for-first-time-openly-criticized-in-un-human-rights-council/
The letter also pointed to Saudi authorities who have routinely sought to identify anonymous online dissenters and spy on Saudi citizens through digital surveillance. The groups also pointed to how they themselves are believed to have been put under surveillance by the Saudi government.
“Google has a responsibility to respect human rights, regardless of any state’s willingness to fulfill its own human rights obligations,” the letter continued, pointing to Google’s statement in which it expressed its commitment to human rights and to “improve the lives of as many people as possible.”
In order to address these concerns, the groups called on Google to conduct a “robust, thorough human rights due diligence process” and to “draw red lines around what types of government requests concerning Cloud regions it will not comply with” due to human rights concerns.
“The Saudi government has demonstrated time and again a flagrant disregard for human rights, both through its own direct actions against human rights defenders and its spying on corporate digital platforms to do the same,” the letter read. “We fear that in partnering with the Saudi government, Google will become complicit in future human rights violations affecting people in Saudi Arabia and the Middle East region.”
Citizen Lab has tracked and documented more than two dozen cases using similar intrusion and spyware techniques. We don’t know the full number of victims or their stories, as not all attack vectors are publicly known. Once implanted, spyware checks in with a command and control (C&C) server on a regular, scheduled basis designed to avoid conspicuous bandwidth consumption. These tools are built to be stealthy: they evade forensic analysis, avoid detection by antivirus software, and can be deactivated and removed by their operators.
Once successfully implanted on a victim’s phone using an exploit chain like the Trident, spyware can actively record or passively gather a variety of data from the device. With full access to the phone’s files, messages, microphone, and video camera, the operator can turn the device into a silent digital spy in the target’s pocket.
These attacks, and many others that go unreported, show that spyware tools and the intrusion business carry significant potential for abuse, and that bad actors and governments cannot resist the temptation to use such tools against political opponents, journalists, and human rights defenders. Lacking operational due diligence, spyware companies neither consider the impact of their tools on civilian populations nor comply with human rights policies. [see: https://humanrightsdefenders.blog/2020/07/20/the-ups-and-downs-in-sueing-the-nso-group/]
The growing privatization of cybersecurity attacks arises through a new generation of private companies, aka online mercenaries. The phenomenon has reached the point where it has acquired its own acronym, PSOA, for private-sector offensive actor. This harmful industry is quickly growing into a multi-billion-dollar global technology market. These newly emerging companies give nation-states and bad actors the option to buy the tools necessary for launching sophisticated cyberattacks, adding another significant element to the cybersecurity threat landscape.
These companies claim that they have strict controls over how their spyware is sold and used and have robust company oversight mechanisms to prevent abuse. However, the media and security research groups have consistently presented a different and more troubling picture of abuse…
The growing abuse of surveillance technology by authoritarian regimes with poor human rights records is a disturbing new global trend. The use of these harmful tools has drawn attention to how the availability and abuse of highly intrusive surveillance technology shrink the already limited space in which vulnerable people can express their views without facing repercussions such as imprisonment, torture, or killing.
Solving this global problem will be neither easy nor simple, and will require a strong coalition of multiple stakeholders, including governments, civil society, and the private sector, to rein in what is now a “Wild West” of unmitigated abuse in cyberspace. With powerful surveillance and intrusion technology roaming free without restrictions, there is nowhere to hide, and no one will be safe from those who wish to cause harm online or offline. Failing to act urgently, by banning or restricting the use of these tools, will threaten democracy, rule of law, and human rights worldwide.
On December 7, 2020, the US National Security Agency issued a cybersecurity advisory warning that “Russian State-sponsored actors” were exploiting a vulnerability in the digital workspace software developed by VMware (the VMware Access and VMware Identity Manager products) using compromised credentials.
A malware called SUNBURST infected SolarWinds customers’ systems when they updated the company’s Orion software.
On December 30, 2020, Reuters reported that the hacking group behind the SolarWinds compromise was able to break into Microsoft Corp and access some of its source code. This new development sent a worrying signal about the cyberattack’s ambition and intentions.
Microsoft president Brad Smith said the cyber assault was effectively an attack on the US, its government, and other critical institutions, and demonstrated how dangerous the cyberspace landscape had become.
Based on telemetry gathered from Microsoft’s Defender antivirus software, Smith said the nature of the attack and the breadth of the supply chain vulnerability was very clear to see. He said Microsoft has now identified at least 40 of its customers that the group targeted and compromised, most of which are understood to be based in the US, but Microsoft’s work has also uncovered victims in Belgium, Canada, Israel, Mexico, Spain, the UAE, and the UK, including government agencies, NGOs, and cybersecurity and technology firms.
Although the ongoing operation appears to be intelligence gathering, no damage had been reported from the attacks as of this article’s publication date. Still, this is not “espionage as usual.” It created a serious technological vulnerability in the supply chain. It has also shaken trust in the reliability of the world’s most advanced critical infrastructure, all to advance one nation’s intelligence agenda.
As expected, the Kremlin has denied any role in the recent cyberattacks on the United States. President Vladimir Putin’s spokesman, Dmitry Peskov, said the American accusations that Russia was behind a major security breach lacked evidence. The Russian denial raised the question of an accountability gap in attributing cyberattacks to a nation-state or specific actor. Determining who is to blame in a cyberattack is a significant challenge, as cyberspace is intrinsically different from the kinetic world. There is no physical activity to observe, and technological advancements have allowed perpetrators to become harder to track and to remain seemingly anonymous when conducting an attack (Brantly, 2016).
To achieve legitimate attribution, it is not enough to identify the suspects, i.e., the actual persons involved in the cyberattacks; one must also determine whether the attacks had a motive, political or economic, and whether the actors were supported by a government or a non-state actor, with enough evidence to support diplomatic, military, or legal options.
A recognized attribution can enhance accountability in cyberspace and deter bad actors from launching cyberattacks, especially on civilian infrastructures like transportation systems, hospitals, power grids, schools, and civil society organizations.
According to Article 2 of the United Nations’ Articles on Responsibility of States for Internationally Wrongful Acts, to constitute an “internationally wrongful act,” a cyber operation generally must 1) be attributable to a state and 2) breach an obligation owed to another state. State-sponsored cyberattacks also frequently violate the international law principles of necessity and proportionality.
Governments need to consider a multi-stakeholder approach to help resolve the accountability gap in cyberspace. Some states continue to believe that ensuring international security and stability in cyberspace or cyberpeace is exclusively the responsibility of states. In practice, cyberspace is designed, deployed, and managed primarily by non-state actors, like tech companies, Internet Service Providers (ISPs), standards organizations, and research institutions. It is important to engage them in efforts to ensure the stability of cyberspace.
I will name two examples of multi-stakeholder initiatives to secure cyberspace. The first is the Global Commission on the Stability of Cyberspace (GCSC), which consisted of 28 commissioners from 16 countries, including government officials, and developed principles and norms that states can adopt to ensure a stable and secure cyberspace. For example, it requested that states and non-state actors not pursue, support, or allow cyber operations intended to disrupt the technical infrastructure essential to elections, referenda, or plebiscites.
The second, the CyberPeace Institute, is a newly established global NGO that turned one year old in December 2020 and has the important goal of protecting the most vulnerable and achieving peace and justice in cyberspace. The institute started its operations by focusing on the healthcare industry, which was under daily attack during the COVID-19 pandemic. As those cyberattacks were a direct threat to human life, the institute called upon governments to stop cyber operations against medical facilities and protect healthcare.
I believe that there is an opportunity for the states to forge agreements to curb cyberattacks on civilian and private sector infrastructure and to define what those boundaries and redlines should be.
SolarWinds and the recent attacks on healthcare facilities are important milestones, as they offer a live example of the paramount risks of a completely unchecked and unregulated cyberspace. But this will only prove to be a moment of true and more fundamental reckoning if governments and other stakeholders each play their part, capitalizing on these recent events to force legal, technological, and institutional reform and real change in cyberspace.
The effects of the SolarWinds attack will reach beyond US government agencies to businesses and civilians who are currently less secure online. Bad actors are becoming more aggressive, bold, and reckless, and continue to cross red lines we had considered norms in cyberspace.
Vulnerable civilians are the targets of intrusion tools and spyware in a new cyberspace Wild West. Clearly, additional legal and regulatory scrutiny of private-sector offensive actors, or PSOAs, is required. If PSOA companies are unwilling to recognize the role their products play in undermining human rights or to address these urgent concerns, then intervention by governments and other stakeholders is needed.
We no longer have the privilege of ignoring the growing impact of cyberattacks on international law, geopolitics, and civilians. We need a strong and global cybersecurity response. What is required is a courageous multi-stakeholder agenda that redefines historical assumptions and biases about the possibility of establishing new laws and norms to govern cyberspace.
Changes and reforms are achievable if there is will. The Snowden revelations and the outcry that followed resulted not only in massive changes to the domestic regulation of US foreign intelligence, but they also shaped changes at the European Court of Human Rights, the Court of Justice of the European Union, and the UN. The Human Rights Committee also helped spur the creation of a new UN Special Rapporteur on the Right to Privacy based in Geneva.
The new cyberspace laws, rules, and norms require a multi-stakeholder dialogue process that involves participants from tech companies, academia, civil society, and international law in global discussions that can be facilitated by governments or supported by a specialized international intergovernmental organization.
On 11 December 2020 Bernd Lange, Vice-chair of the Group of the Progressive Alliance of Socialists and Democrats in the European Parliament, wrote in New Europe the following piece about how, after six years, a European agreement has been reached on stricter rules for the export of dual-use goods, which can be used for both civilian and military ends.
All good things are worth waiting for. After six long years, negotiators from the European Parliament, the Commission, and member states finally agreed on stricter rules for the export of dual-use goods, which can be used for both civilian and military ends. Parliament’s perseverance and assertiveness against a blockade by some European Union member states has paid off: as of now, respect for human rights will become an export standard.
Up until now, export restrictions applied to aerospace items, navigation instruments or trucks. From now on, these rules will also apply to EU produced cyber-surveillance technologies, which demonstrably have been abused by authoritarian regimes to spy on opposition movements; for instance, during the Arab Spring in 2011.
This is a breakthrough for human rights in trade by overcoming years of various EU governments blocking the inclusion of cyber-surveillance technology in the export control rules for dual-use goods. Without a doubt: Technological advances, new security challenges and their demonstrated risks to the protection of human rights required more decisive action and harmonised rules for EU export controls.
Thanks to the stamina of the Parliament, it will now be much more difficult for authoritarian regimes to abuse EU produced cybersecurity tools such as biometric software or Big Data searches to spy on human rights defenders and opposition activists. Our message is clear: economic interests must not take precedence over human rights. Exporters have to shoulder greater responsibility and apply due diligence to ensure their products are not employed to violate human rights. We have also managed to increase transparency by insisting on listing exports in greater detail in the annual export control reports, which will make it much harder to hide suspicious items.
In a nutshell, we are setting up an EU-wide regime to control cyber-surveillance items that are not listed as dual-use items in international regimes, in the interest of protecting human rights and political freedoms. We strengthened member states’ public reporting obligations on export controls, so far patchy, to make the cyber-surveillance sector, in particular, more transparent. We increased the importance of human rights as licensing criterion and we agreed on rules to swiftly include emerging technologies in the regulation.
This agreement on dual-use items, together with the rules on conflict minerals and the soon to be adopted rules on corporate due diligence, is establishing a new gold standard for human rights in EU trade policy.
I want the European Union to lead globally on rules and values-based trade. These policies show that we can manage globalisation to protect people and the planet. This must be the blueprint for future rule-based trade policy.
With teams increasingly working remotely during COVID-19, we are all facing questions regarding the security of our communication with one another: Which communication platform or tool is best to use? Which is the most secure for holding sensitive internal meetings? Which will have adequate features for online training sessions or remote courses without compromising the privacy and security of participants?
Front Line Defenders presents this simple overview which may help you choose the right tool for your specific needs.
With end-to-end encryption (e2ee), your message gets encrypted before it leaves your device and only gets decrypted when it reaches the intended recipient’s device. Using e2ee is important if you plan to transmit sensitive communication, such as during internal team or partners meetings.
With encryption to-server, your message gets encrypted before it leaves your device but is decrypted on the server, processed, and encrypted again before being sent to the recipient(s). Encryption to-server is acceptable only if you fully trust the server.
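The difference between the two models comes down to who holds the decryption key. The minimal Python sketch below uses a toy XOR cipher purely as a stand-in for real encryption (actual messengers use vetted protocols, and all names here are illustrative); it shows why a relay server can read to-server traffic but not end-to-end traffic:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a same-length random key.
    # Not secure in practice; it only illustrates key placement.
    return bytes(b ^ k for b, k in zip(data, key))

message = b"internal team meeting at 9"
key = secrets.token_bytes(len(message))
ciphertext = xor_cipher(message, key)

# End-to-end encryption: only the endpoints hold the key, so the
# server relays opaque bytes it cannot decrypt.
server_sees_e2ee = ciphertext

# Encryption to-server: the server also holds the key, so it can
# decrypt, process, and re-encrypt; the plaintext is visible to it.
server_sees_to_server = xor_cipher(ciphertext, key)

# The recipient decrypts locally in both models.
recipient_reads = xor_cipher(ciphertext, key)

print(recipient_reads == message)        # True
print(server_sees_to_server == message)  # True: the server can read it
print(server_sees_e2ee == message)       # False
```

Either way the message is encrypted in transit; the difference is whether the operator of the server can read it, which is why e2ee is recommended for sensitive meetings.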
Why Zoom or other platforms/tools are not listed here: There are many platforms which can be used for group communication. In this guide we focused on those we think deliver good user experiences and offer the best privacy and security features. Of course, no platform can offer 100% privacy or security; in all communications there is a margin of risk. We have not included tools such as Zoom, Skype, Telegram, etc. in this guide, as we believe the margin of risk incurred whilst using them is too wide, and therefore Front Line Defenders does not feel comfortable recommending them.
Surveillance and behaviour: Some companies, like Facebook, Google, and Apple, regularly collect, analyse, and monetize information about users and their online activities. Most, if not all, of us are already profiled by these companies to some extent. If communication is encrypted to-server, owners of the platform may store this communication. Even with end-to-end encryption, metadata about your communication, such as location, time, whom you connect with, and how often, may still be stored. If you are uncomfortable with this data being collected, stored, and shared, we recommend refraining from using these companies’ services.
The level of protection of your call depends not only on which platform you choose, but also on the physical security of the space you and others on the call are in and the digital protection of the devices you and others use for the call.
Caution: Use of encryption is illegal in some countries. You should understand and consider the law in your country before deciding on using any of the tools mentioned in this guide.
Criteria for selecting the tools or platforms
Before selecting any communication platform, app or program it is always strongly recommended that you research it first. Below we list some important questions to consider:
Is the platform mature enough? How long has it been running for? Is it still being actively developed? Does it have a large community of active developers? How many active users does it have?
Does the platform provide encryption? Is it end-to-end encrypted or just to-server encrypted?
In which jurisdiction is the owner of the platform and where are servers located? Does this pose a potential challenge for your or your partners?
Does the platform allow for self-hosting?
Is the platform open source? Does it provide source code to anyone to inspect?
Was the platform independently audited? When was the last audit? What do experts say about the platform?
What is the history of the development and ownership of the platform? Have there been any security challenges? How have the owners and developers reacted to those challenges?
How do you connect with others? Do you need to provide phone number, email or nickname? Do you need to install a dedicated app/program? What will this app/program have access to on your device? Is it your address book, location, mic, camera, etc.?
What is stored on the server? What does the platform’s owner have access to?
Does the platform have features needed for the specific task/s you require?
Is the platform affordable? This needs to include potential subscription fees, learning and implementing, and possible IT support needed, hosting costs, etc.
The document then proceeds to give more detailed information on each tool/service listed in the guide.
Video calls, webinar or online training recommendations
Video calls recommendations: In the current situation you will undoubtedly find yourself organizing or participating in many more video calls than before. It may not be obvious to everyone how to do it securely and without exposing yourself and your data to too much risk:
Assume that when you connect to talk your camera and microphone may be turned on by default. Consider covering your camera with a sticker (making sure it doesn’t leave any sticky residue on the camera lens) and only remove it when you use the camera.
You may not want to give away too much information about your house, family pictures, notes on the walls or boards, etc. Be mindful of the background: who and what is in the frame aside from yourself? Test before the call by, for example, opening meet.jit.si and clicking the GO button to get a random empty room with your camera switched on, so you can see what is in the picture. Consider clearing your background of clutter.
Also be mindful of who can be heard in the background. Maybe close the door and windows, or alert those sharing your space about your meeting.
It is best to position your face so your eyes are roughly in the upper third of the picture, without cutting off your head. Unless you want to conceal your face, do not sit with your back to a light or a window; daylight or a lamp from the front is best. Stay within the camera frame. You may want to look into the lens from time to time to make “eye contact” with others. If you are using your cellphone, rest it against a steady object (e.g. a pile of books) so that the video picture remains stable.
You may want to mute your microphone to prevent others hearing you typing notes or any background noise as it can be very distracting to others on the call.
If the internet connection is slow, you may want to switch off your camera, pause other programs, mute the microphone, and ask others to do the same. You may also want to try sitting closer to the router, or connecting your computer directly to the router with an ethernet cable. If you share your internet connection with others, you may ask them to reduce heavy internet use for the duration of your call.
It is very tempting to multitask, especially during group calls. But you may very soon realise that you are lost in the meeting, and others may realise it too.
If this is a new situation for you or you are using a new calling tool, you may want to give yourself a few extra minutes to learn and test it prior to the scheduled meeting to get familiar with options like turning on/off the camera and the microphone, etc.
If possible, prepare and test a backup communication plan in case you have trouble connecting with others. For example, add everyone to a Signal group so you can still text chat or troubleshoot problems with the call. Sometimes it also helps to have an alternate browser installed on your computer, or an alternate app on your phone, to try connecting through instead.
If you would like to organise a webinar or online training, you can use the tools outlined above under group communication. Some best practices include:
Make sure that you know who is connected. If needed, check the identities of all participants by asking them to speak. Do not assume you know who is connected just by reading the displayed names.
Agree on ground-rules, like keeping cameras on/off, keeping microphone on/off when one is not speaking, flagging when participants would like to speak, who will be chairing the meeting, who will take notes – where and how will those notes be written and then distributed, is it ok to take screenshots of a video call, is it ok to record the call, etc.
Agree on clear agendas and time schedules. If your webinar is longer than one hour, it is probably best to divide it into clear one-hour sessions separated by breaks agreed with participants, so they have time for a short rest. Plan for the possibility that not all participants will return after a break. Have alternative methods to reach out and remind them to return, such as their Signal/Wire/DeltaChat contacts.
It is easiest to use a meeting service that participants can join from a browser, without needing to register or install a special program, and one that gives the webinar organiser the ability to mute participants' microphones and turn off their cameras.
Prior to the call, check with all participants whether they have particular needs, such as being deaf or hard of hearing, visually impaired or blind, or having any other conditions that would affect their participation in the call. With this in mind, ensure that the selected platform will accommodate these needs and, to be sure, test the platform beforehand. Simple measures can also improve inclusion and participation in your calls, such as turning on cameras when possible, as this can allow for lip-reading.
Encourage all participants to speak slowly and to avoid jargon where possible, as the working language of the call is most likely not everyone's mother tongue. Naturally, there will be moments of silence and pauses; embrace them. They can support understanding, can be helpful for participants who are hard of hearing and for interpreters, and will also help assistive technology pick up words correctly.
Judge Phyllis Hamilton, in her ruling on the case, stated that she was not convinced by NSO Group's claims that it had no hand in targeting WhatsApp users. As the trial moves forward, NSO Group might be forced to reveal its clients and make the list public.
The judge also added that even if NSO was operating at the direction of its customers, it still appeared to have a hand in targeting WhatsApp users. As per reports, a WhatsApp spokesperson said the Facebook-owned venture was pleased with the court's decision and will now be able to uncover the practices of NSO Group.
Even in the face of criticism from privacy advocates, the company has claimed that law enforcement agencies face difficulties due to the proliferation of encrypted messaging apps like WhatsApp.
The law firm King & Spalding has reportedly been hired by NSO Group to represent it. Among the company's legal team is Rod Rosenstein, the Trump administration's former deputy attorney general. NSO Group has reportedly had multiple government clients, such as Saudi Arabia, Mexico, and the United Arab Emirates, that have used its spyware to target political opponents and human rights campaigners.