Posts Tagged ‘cyber security’

New investigation shows global reach of NSO Group’s spyware

July 5, 2021

On 3 July 2021, Forensic Architecture, supported by Amnesty International and the Citizen Lab, launched a new interactive online platform that maps for the first time the global spread of the notorious spyware Pegasus, made by the cyber-surveillance company NSO Group.

‘Digital Violence: How the NSO Group Enables State Terror’ documents digital attacks against human rights defenders around the world, and shows the connections between the ‘digital violence’ of Pegasus spyware and the real-world harms that lawyers, activists, and other civil society figures face. “NSO Group is the worst of the worst in selling digital burglary tools to players who they are fully aware actively and aggressively violate the human rights of dissidents, opposition figures, and journalists,” said Edward Snowden, President of the Freedom of the Press Foundation.

NSO Group is a major player in the shadowy surveillance industry. The company’s Pegasus spyware has been used in some of the most insidious digital attacks on human rights defenders. When Pegasus is surreptitiously installed on a person’s phone, an attacker has complete access to a phone’s messages, emails, media, microphone, camera, calls and contacts. For my earlier posts on NSO see: https://humanrightsdefenders.blog/tag/nso-group/

“The investigation reveals the extent to which the digital domain we inhabit has become the new frontier of human rights violations, a site of state surveillance and intimidation that enables physical violations in real space,” said Shourideh C. Molavi, Forensic Architecture’s Researcher-in-Charge. 

Edward Snowden narrates an accompanying video series which tells the stories of human rights activists and journalists targeted by Pegasus. The interactive platform also includes sound design by composer Brian Eno. A film about the project by award-winning director Laura Poitras will premiere at the 2021 Cannes Film Festival later this month.

The online platform is one of the most comprehensive databases on NSO-related activities, with information about export licenses, alleged purchases, digital infections, and the physical targeting of activists after they had been targeted with spyware, including intimidation, harassment, and detention. The platform also sheds light on the complex corporate structure of NSO Group, based on new research by Amnesty International and partners.

“For years, NSO Group has shrouded its operations in secrecy and profited from working in the shadows. This platform brings to light the important connections between the use of its spyware and the devastating human rights abuses inflicted upon activists and civil society,” said Danna Ingleton, Deputy Director of Amnesty Tech.

Amnesty International’s Security Lab and Citizen Lab have repeatedly exposed the use of NSO Group’s Pegasus spyware to target hundreds of human rights defenders across the globe. Amnesty International is calling on NSO Group to urgently take steps to ensure that it does not cause or contribute to human rights abuses, and to respond when they do occur. The cyber-surveillance company must carry out adequate human rights due diligence and take steps to ensure that human rights defenders and journalists do not continue to become targets of unlawful surveillance.

In October 2019, Amnesty International revealed that the phone of Moroccan academic and activist Maati Monjib had been infected with Pegasus spyware. He continues to face harassment by the Moroccan authorities for his human rights work. In December 2020, Maati Monjib was arbitrarily detained before being released on parole on 23 March 2021.

Maati Monjib tells his story in one of the short films, speaking of the personal toll of the surveillance: “The authorities knew everything I said. I was in danger. Surveillance is very harming for the psychological wellbeing of the victim. My life has changed a lot because of all these pressures.”

Amnesty International is calling for all charges against Maati to be dropped, and the harassment against him and his family by the Moroccan authorities to end.

To find out more visit digitalviolence.org

https://www.amnesty.org/en/latest/news/2021/07/investigation-maps-human-rights-harm-of-nso-group-spyware/

https://www.techradar.com/news/spyware-toolkit-used-by-governments-hackers-to-break-into-windows-machines

In-depth interview with Ron Deibert, Citizen Lab’s founder

May 31, 2021

Ron Deibert is director of the Citizen Lab at the University of Toronto’s Munk School of Global Affairs. (Courtesy of Ron Deibert)

On 25 May 2021, Nathaniel Basen for TVO.org spoke with Professor Ron Deibert about internet censorship, espionage, and getting threats from authoritarian regimes. It is a long but rich interview: In 2001, Ron Deibert, a professor at the University of Toronto, founded Citizen Lab to help understand and track the spread of digital human-rights abuses around the world.

In the 20 years since, the interdisciplinary lab has made headlines for protecting journalists and human-rights defenders from digital attacks; one of its researchers helped identify members of the group that attacked the United States Capitol earlier this year.

TVO.org: Let’s start at the beginning. How and why did Citizen Lab start, and what did it look like at the time? 

Ron Deibert: Back in the late 1990s, I was doing what I would consider to be conventional academic research — the lone professor studying a topic. A lot of desktop research. A student taking a course of mine proposed doing a paper where he would explore censorship in China. This was a new topic back then — there was not any evidence really that China was censoring the internet — but people assumed they would, and there was a lot of uncertainty about what was going on there. 

He was kind of a self-taught hacker, and he put together this research paper where he connected to computers in China using some proxy servers and started comparing the results he got to what he could see here in Canada, doing it very systematically. It opened my eyes to the ways in which methods from computer science and engineering science — technical interrogation tools and techniques — could be used to surface real primary evidence about what’s going on beneath the surface of the internet around information control. Especially what governments, and also private companies, are doing that isn’t in the public domain. No one was really doing that at the time, and a lightbulb went on, where I realized that this is a really powerful way of surfacing primary evidence and data in a way that really no one else was doing. 
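[To illustrate the kind of comparison described above (fetching the same page directly and through a proxy inside the network being studied, then checking whether the responses differ), here is a minimal sketch. It is not Citizen Lab’s actual tooling; the URL and proxy address are placeholders, and it assumes the Python requests library is available.]

```python
# Minimal sketch (not Citizen Lab's tooling): compare a direct fetch of a page
# with a fetch routed through a proxy assumed to sit inside the network under study.
# The URL and proxy address below are placeholders.
import hashlib
import requests

URL = "https://example.org/"                   # page to test (placeholder)
IN_COUNTRY_PROXY = "http://203.0.113.10:8080"  # hypothetical in-country proxy

def fetch(url, proxy=None):
    """Return (status code, SHA-256 of body) for a fetch, optionally via a proxy."""
    proxies = {"http": proxy, "https": proxy} if proxy else None
    resp = requests.get(url, proxies=proxies, timeout=15)
    return resp.status_code, hashlib.sha256(resp.content).hexdigest()

direct = fetch(URL)
proxied = fetch(URL, IN_COUNTRY_PROXY)

if direct != proxied:
    print("Responses differ between vantage points: possible blocking or tampering.")
else:
    print("Responses match from both vantage points.")
```

[Real measurement work repeats this over many URLs and vantage points and accounts for ordinary variation between mirrors, but the basic idea of comparing what different networks return for the same request is the same.]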

So I put together a prospectus for a lab that would be interdisciplinary, that would bring together people who have these skills to work systematically on uncovering information-control practices and look at surveillance and censorship and information warfare, from the standpoint of risks to citizens from a human-rights perspective. I was very fortunate at the time to get support from the Ford Foundation — I got a grant from them in 2001 — and I put the proposal together for the Citizen Lab from that. 

TVO.org: And at the time you were in a pretty small basement lab.

Deibert: Actually, it was my office in political science where it all got started. When I got the grant, the Munk Centre was just being established, and the building at Devonshire [at the University of Toronto] was under construction. I went over to that building and scoped out what I thought would be a room that no one else would want, to increase my chance of getting approval. I found this space, and I went to Janice Stein, the director, and said, “Hey, I’ve got this grant. I’ve got this idea. I need some space.” And she said, “Okay, you can have it.” 

So she supported the idea and took a risk. Space is a very valuable asset on campus. And even though it sounds less glamorous, we were really happy to have that room.

After 10 years, we moved to the new Munk building, the observatory, where we’re located now, and that was really great, because we needed more space. Security is not perfect — where we are there are lots of problems — but it is much better than it was in the old building, where people would just wander in and could easily locate us. Now we’re wrapped behind several layers of access control…..

TVO.org: Let’s talk a little bit about your process. How does Citizen Lab decide what to look into next?

Deibert: It’s a combination of factors. First and foremost, we are looking at the topic, at the domain, broadly speaking, which for us is global in scope. We don’t have a particular regional focus. We’re looking at risks to human rights that arise out of information technology: that’s the broadest possible definition of what we do.

That also limits our selection of cases that we want to examine. We assume that, however problematic cybersecurity is for big banks or government, they have resources — they can go hire a private company. But journalists, human-rights defenders, people living in the global south who are human-rights defenders and are advocating for policy change, they really lack capacity. So we put our effort into identifying cases that present the highest risk to human rights and, ideally, affect the most vulnerable parts of the population. 

We divide our work systematically. So there are certain teams that we organize around, though there’s a bit of overlap. It’s fluid, but we have some teams that are more interested in applying network-measurement techniques to uncovering internet censorship, let’s say, and that’s probably the area where we’ve been doing the most work for the longest time. Then there’s what we call the targeted-threats group, which is really the most serious stuff around espionage, and it certainly has the highest risk and has gotten us in the crosshairs of some bad actors, to such an extent that we’ve now become a target. We also apply non-technical methods in an interdisciplinary way — we have people who are trained in law and policy. So we’ve done a lot of work around analyzing national-security legislation and practices in Canada. 

I would say how things are chosen depends on the opportunities that come up. We may hear about something, some preliminary evidence, perhaps a journalist tips us off or a victim comes forward. Or the team itself decides, hey, this is something we should look into. A good example of that is Zoom. We knew about Zoom: it was a kind of obscure business networking-communications platform, until the pandemic hit. Suddenly, everyone was on Zoom. So our researchers got together and said, “Hey, we better take a look at this” and indeed uncovered some highly problematic security and privacy issues.

TVO.org: Your work with Zoom is a good example of getting immediate results from your work. If I’m correct, after a public outcry, Zoom cleaned up a lot of what you found. How does that feel to have an immediate impact on the world in that way? 

Deibert: It’s actually super-rewarding in a number of ways. First of all, there’s the gratification to get the message out. Ultimately, we see ourselves as a university-based watchdog group, so if you can publish something and the next day everybody’s reading about it because it’s on the front page of the New York Times? That’s phenomenal. We’ve been actually really fortunate to have high-profile coverage for our research. I think we’ve had, like, close to 30 front-page stories in the New York Times, the Washington Post, other global media, the Financial Times, about different reports of ours over the last 20 years. 

Going further, ultimately, we don’t just want to get attention for what we’re doing — we want to see some change. So there have been so many cases now where we’ve seen consequences, actions taken, policy changes, or advocacy campaigns started as a result of the work that we’ve done. 

Probably the biggest one was back in 2016, when we investigated a targeted espionage attack against a human-rights defender in the United Arab Emirates. He shared with us an SMS message that was tainted with malware that the UAE government was using to try to hack his phone, and when we reverse-engineered it, that malware infected our own device, our own iPhone. We realized that it was so sophisticated that it involved what were then three software flaws in the Apple operating system that even Apple itself didn’t know about. We did a responsible disclosure to them and, within two weeks, they pushed out a patch that directly affected the security of more than 1 billion people. So, to be able to say, “Hey, we were responsible for that” is, I think, quite an accomplishment.

TVO.org: On the flip side, there are people that don’t like the work you do. What has it been like for you to become a target? I can’t imagine when you started this thing that you pictured yourself coming under threat. 

Deibert: Well, first of all, you’re right. I grew up studying world politics as something out there, and I’m a spectator. There were a couple of instances before this, but, really, when we published the GhostNet report in 2009, which was the first public-evidence-based report on cyber espionage, it was the one that involved the hacking of the office of His Holiness the Dalai Lama, and we uncovered this massive Chinese espionage operation. 

It suddenly dawned on me, okay, we’ve gone from kind of just observing and recording to becoming a factor, because very quickly thereafter, we had all sorts of inquiries and veiled threats and concerns about physical security. From that point on, from 2009 to today, they’ve really only amplified. The worst is probably when we were targeted by Black Cube, the same private-intelligence firm made up of ex-Mossad agents that notoriously went after the accusers of Harvey Weinstein. Now, that’s really frightening to be in their crosshairs. We ended up actually exposing that operation, but to know that something like that is going on, frankly, is very disturbing. It really forces you to change your behaviour, think about practical issues: when you’re travelling, hotels, getting into elevators, who’s accessing the same building as you. 

At the same time, though, I think it’s a mark of success. If we’re not successful, those people wouldn’t care. It’s just something you have to factor into your risk calculation and take all the precautions, and we’re most concerned about the risks to the subjects of our research. Frankly, we go to extraordinary lengths to protect the security in terms of the data we handle, how we interact with them and interview them. But, yeah, it’s just constant. Actually, every day there’s something, ranging from people who, unfortunately, maybe are mentally disturbed, and they read about us and want to visit us, all the way to, you know, the world’s worst authoritarian regimes that are trying to threaten us. 

TVO.org: A lot of this work is global in nature, but some Ontarians might be surprised to know a lot of it is quite local. I’m thinking about your work with internet-filtering technology and Waterloo-based Netsweeper. What makes filtering technology so important, and what was Netsweeper up to? 

Deibert: As the internet evolves, there are all sorts of reasons why people want to control access to certain content online — beginning, I would say, with schools and libraries. There are legitimate concerns among parents and teachers that children have access to pornography or other types of content. Service providers like Netsweeper fill the market niche, providing filtering technology to those clients. 

But, very quickly, there grew a need among governments — national-level internet censorship. In the beginning, like I talked about with the Chinese, it was very rare in the 1990s or 2000s. I could count on one hand the number of governments that were doing this sort of thing. Now, it’s routine, and it’s big business. So with a company like Netsweeper, for us, it was, frankly, a no-brainer to zero in on it, and not even because they’re based in our own backyard. There’s certainly a motivating factor there because we’re Canadians, and we want to make sure that, as best we can, we identify businesses operating out of Canada to see if they’re in compliance with Canadian law or Canadian values. Here, we had a company that seemed to be not just kind of stumbling into selling internet-censorship services to some of the world’s worst violators of human rights, but actively courting them. 

They were showing up all over the world, especially in the Middle East. The Middle East is where Netsweeper really profited from selling internet-censorship services to governments that routinely violate human rights and block access to content that would be considered protected legally here in Canada. And they were also doing this in a non-transparent way. 

This is not something they openly advertised, and yet we knew, from our research and technical investigation, we could identify basically unquestionable proof that their technology was being used to filter access to content that would be legally protected here in Canada, in places like Bahrain and Yemen and in the Gulf. 

So we did a report about Netsweeper’s technology in Yemen, and at this time, the main telco, YemenNet, was controlled by Houthi rebels, and of course there’s an ongoing civil war, which at that time was really quite intense. We simply documented that Netsweeper’s technology was being used to actually block the entire Israeli top-level domain — the only time we’d ever seen that in the world, with the exception of Iran. 

We published this report, and we mentioned in the commentary around it that, in providing services to one participant in an armed conflict, who is censoring information, including information related to international news, they’re effectively inserting themselves in an armed conflict, and it raises all sorts of ethical, moral, and potentially even legal issues. Netsweeper sued me and the University of Toronto for defamation for over $3 million. Of course, we thought that was entirely baseless, and six months later, they simply withdrew the suit. 

Coincidentally, their suit came shortly before the Ontario government passed anti-SLAPP legislation to prevent lawsuits that chill free expression, which, in our opinion, is very much what it is, because as we were going through the litigation, we couldn’t report on Netsweeper. After the lawsuit was dropped, we then published several subsequent reports on Netsweeper…..

TVO.org: In your 20 years, what is the work you’re most proud of?

Deibert: What I’m most proud of is the staff. I’d say a skill that I have is, I think I would make a good NHL scout or a band manager. I have the ability, for what it’s worth, to identify talented people and give them the support they need. So there’s not a particular report that I’m proud of; I’m most proud of the people who work at the lab. I’m so fortunate to be surrounded by these extremely talented, ethical, dedicated people, most of whom have been with me for over 10 years. It’s rare to have that in a small university. And that’s what I’m most proud of.

TVO.org: The lab itself, as we talked about a little bit, is somewhat unique: you’re working outside of government or corporations and working in the interest of human rights. Others around the world have taken note of your model. Do you hope to export it? 

Deibert: It’s beginning to be surprising to me that there aren’t more Citizen Lab–like organizations at other universities. To me, this is a field with such endless opportunity. There’s so much unfortunate malfeasance going on in the digital world. 

And, yet, you have these extremely powerful methods and techniques, as we’ve demonstrated, that, by way of analogy, act like an X-ray on the abuse of power. That’s the way I think about it. It’s astonishing. 

Sometimes I sit back and shake my head. A lot of the stuff we don’t even publish. It’s remarkable what you can see when you use these very precise, careful methods to uncover and track abuses of power. Why haven’t other university professors jumped on this and tried to mimic it? I don’t really know. I suppose there’s no one answer. There are risks involved with it, and it’s actually not easy to cross disciplinary boundaries. 

So I think that we’re helping to build the field, at least I hope, and you’re right that there are a few other places where I’m seeing either professors or, in some cases, human-rights organizations, attempting to build something like this. That is fantastic. That’s really where my effort and the next phase of my career is, around really field-building by promoting that model and hoping that others build up centres like the Citizen Lab at other universities, while also ensuring the sustainability of the lab.

This is a bit “inside university,” but the reality is, as the only professor in the lab, I’m the weakest link. So if something happens to me, the lab would really fall apart. Not because I’m the wizard directing everything — purely because I’m the responsible principal investigator for the grant, and you need that at a university. What I hope to do is ensure the sustainability of the lab outside of me, and that means recruiting other professors to the lab. We’re actively fundraising to do that and to try to get more tenure-track positions connected to the lab so that it can continue once I move on.

TVO.org: And what will the next 20 years hold for the lab itself?

Deibert: Hopefully, we’ll be able to continue. We know we have the support from the University of Toronto; they’ve been incredible in a number of ways. We live in a time when big university bureaucracies are criticized, sometimes rightfully so — I’ve been critical of my own university in various areas. But one thing I can say is that they have been so supportive of the work that we do, in a variety of real practical ways, including legal support. 

I just want the lab to not be something that is tied to one profession. I want it to continue and to duplicate what we do globally. If we had 25 Citizen Labs sprinkled around the planet, it would be better for human rights overall, because there would at least be another protective layer, if you will, of dogged researchers who aren’t afraid to uncover abuses of power, no matter where they are.

https://www.tvo.org/article/x-ray-on-the-abuse-of-power-citizen-labs-founder-on-fighting-for-human-rights

30 NGOs call on Google to drop plan for a Cloud region in Saudi Arabia

May 27, 2021

The Hill of 26 May 2021 reports that a coalition of more than 30 human rights and digital privacy rights groups called on Google to abandon its plans to establish a Google Cloud region in Saudi Arabia over concerns about human rights violations.

The groups, which include Amnesty International, Human Rights Watch and PEN America, wrote in their letter that Saudi Arabia’s record of tamping down on public dissent and its justice system that “flagrantly violates due process” made it unsafe for Google to set up a “cloud region” in the kingdom.

“While Google publishes how it handles government requests for customer information and reports when requests are made through formal channels, there are numerous potential human rights risks of establishing a Google Cloud region in Saudi Arabia that include violations of the rights to privacy, freedom of expression and association, non-discrimination, and due process,” the groups said. See also: https://humanrightsdefenders.blog/2019/03/08/saudi-arabia-for-first-time-openly-criticized-in-un-human-rights-council/

The letter also pointed to Saudi authorities who have routinely sought to identify anonymous online dissenters and spy on Saudi citizens through digital surveillance. The groups also pointed to how they themselves are believed to have been put under surveillance by the Saudi government.

“Google has a responsibility to respect human rights, regardless of any state’s willingness to fulfill its own human rights obligations,” the letter continued, pointing to Google’s statement in which it expressed its commitment to human rights and to “improve the lives of as many people as possible.”

In order to address these concerns, the groups called on Google to conduct a “robust, thorough human rights due diligence process” and to “draw red lines around what types of government requests concerning Cloud regions it will not comply with” due to human rights concerns.

“The Saudi government has demonstrated time and again a flagrant disregard for human rights, both through its own direct actions against human rights defenders and its spying on corporate digital platforms to do the same,” the letter read. “We fear that in partnering with the Saudi government, Google will become complicit in future human rights violations affecting people in Saudi Arabia and the Middle East region.”

https://thehill.com/policy/technology/555597-groups-call-on-google-to-drop-out-of-saudi-project-over-human-rights

What to do about global spyware abuse?

January 6, 2021

Mohamed EL Bashir, a Public Policy & Internet Governance Strategist, wrote a lengthy but informative piece about the persistent problem of commercial spyware abuse, “Reshaping Cyberspace: Beyond the Emerging Online Mercenaries and the Aftermath of SolarWinds”, in CircleID of 5 January 2021.

The piece starts off with some concrete cases such as Ahmed Mansoor [see https://humanrightsdefenders.blog/2016/08/29/apple-tackles-iphone-one-tap-spyware-flaws-after-mea-laureate-discovers-hacking-attempt/] and Rafael Cabrera [see: https://www.nytimes.com/2017/06/21/world/americas/mexico-pena-nieto-spying-hacking-surveillance.html]. In 2018, a close confidant of Jamal Khashoggi was targeted in Canada by a fake package notification, resulting in the infection of his iPhone.

…Citizen Lab has tracked and documented more than two dozen cases using similar intrusion and spyware techniques. We don’t know the number of victims or their stories, as not all vectors are publicly known. Once spyware is implanted, it provides a command and control (C&C) server with regular, scheduled updates designed to avoid extensive bandwidth consumption. Those tools are created to be stealthy, to evade forensic analysis and detection by antivirus software, and to be deactivated and removed by operators.

Once successfully implanted on a victim’s phone using an exploit chain like the Trident, spyware can actively record or passively gather a variety of different data about the device. By providing full access to the phone’s files, messages, microphone, and video camera, the operator can turn the device into a silent digital spy in the target’s pocket.

These attacks and many others that go unreported show that spyware tools and the intrusion business have significant potential for abuse, and that bad actors or governments can’t resist the temptation to use such tools against political opponents, journalists, and human rights defenders. Due to a lack of operational due diligence, spyware companies do not consider the impact of the use of their tools on the civilian population, nor do they comply with human rights policies. [see: https://humanrightsdefenders.blog/2020/07/20/the-ups-and-downs-in-sueing-the-nso-group/]

The growing privatization of cybersecurity attacks arises through a new generation of private companies, aka online mercenaries. This phenomenon has reached the point where it has acquired its own acronym, PSOA, for private sector offensive actor. This harmful industry is quickly growing into a multi-billion-dollar global technology market. These newly emerging companies give nation-states and bad actors the option to buy the tools necessary for launching sophisticated cyberattacks. This adds another significant element to the cybersecurity threat landscape.

These companies claim that they have strict controls over how their spyware is sold and used and have robust company oversight mechanisms to prevent abuse. However, the media and security research groups have consistently presented a different and more troubling picture of abuse…

The growing abuse of surveillance technology by authoritarian regimes with poor human rights records is becoming a disturbing new, globally emerging trend. The use of these harmful tools has drawn attention to how the availability and abuse of highly intrusive surveillance technology shrink already limited cyberspace in which vulnerable people can express their views without facing repercussions such as imprisonment, torture, or killing.

Solving this global problem will be neither easy nor simple, and will require a strong coalition of multiple stakeholders, including governments, civil society, and the private sector, to rein in what is now a “Wild West” of unmitigated abuse in cyberspace. With powerful surveillance and intrusion technology roaming free without restrictions, there is nowhere to hide, and no one will be safe from those who wish to cause harm online or offline. Failing to act urgently by banning or restricting the use of these tools will threaten democracy, the rule of law, and human rights worldwide.

On December 7, 2020, the US National Security Agency issued a cybersecurity advisory warning that “Russian State-sponsored actors” were exploiting a vulnerability in the digital workspace software developed by VMware (the VMware® Access and VMware Identity Manager products) using compromised credentials.

The next day, on December 8, the cybersecurity firm FireEye announced the theft of its “Red Team” tools that it uses to identify vulnerabilities in its customers’ systems. Several prominent media organizations reported an ongoing software supply-chain attack against SolarWinds, the company whose products are used by over 300,000 corporate and government customers — including most of the Fortune 500 companies, Los Alamos National Laboratory (which has nuclear weapons responsibilities), and Boeing.

A malware called SUNBURST infected SolarWinds’ customers’ systems when they updated the company’s Orion software.

On December 30, 2020, Reuters reported that the hacking group behind the SolarWinds compromise was able to break into Microsoft Corp and access some of its source code. This new development sent a worrying signal about the cyberattack’s ambition and intentions.

Microsoft president Brad Smith said the cyber assault was effectively an attack on the US, its government, and other critical institutions, and demonstrated how dangerous the cyberspace landscape had become.

Based on telemetry gathered from Microsoft’s Defender antivirus software, Smith said the nature of the attack and the breadth of the supply chain vulnerability was very clear to see. He said Microsoft has now identified at least 40 of its customers that the group targeted and compromised, most of which are understood to be based in the US, but Microsoft’s work has also uncovered victims in Belgium, Canada, Israel, Mexico, Spain, the UAE, and the UK, including government agencies, NGOs, and cybersecurity and technology firms.

Although the ongoing operation appears to be for intelligence gathering, no damage had been reported from the attacks as of the publishing date of this article. This is not “espionage as usual.” It created a serious technological vulnerability in the supply chain. It has also shaken trust in the reliability of the world’s most advanced critical infrastructure, all to advance the aims of one nation’s intelligence agency.

As expected, the Kremlin has denied any role in recent cyberattacks on the United States. President Vladimir Putin’s spokesman Dmitry Peskov said the American accusations that Russia was behind a major security breach lacked evidence. The Russian denial raised the question of a gap of accountability in attributing cyberspace attacks to a nation-state or specific actor. Determining who is to blame in a cyberattack is a significant challenge, as cyberspace is intrinsically different from the kinetic one. There is no physical activity to observe, and technological advancements have allowed perpetrators to be harder to track and to remain seemingly anonymous when conducting the attack (Brantly, 2016).

To achieve a legitimate attribution, it is not enough to identify the suspects, i.e., the actual persons involved in the cyberattacks; one must also be able to determine whether the cyberattacks had a motive, which can be political or economic, and whether the actors were supported by a government or a non-state actor, with enough evidence to support diplomatic, military, or legal options.

A recognized attribution can enhance accountability in cyberspace and deter bad actors from launching cyberattacks, especially on civilian infrastructures like transportation systems, hospitals, power grids, schools, and civil society organizations.

According to Article 2 of the United Nations’ Responsibility of States for Internationally Wrongful Acts, to constitute an “internationally wrongful act,” a cyber operation generally must be 1) attributable to a state and 2) breach an obligation owed to another state. It is also unfortunate that state-sponsored cyberattacks violate the international law principles of necessity and proportionality.

Governments need to consider a multi-stakeholder approach to help resolve the accountability gap in cyberspace. Some states continue to believe that ensuring international security and stability in cyberspace or cyberpeace is exclusively the responsibility of states. In practice, cyberspace is designed, deployed, and managed primarily by non-state actors, like tech companies, Internet Service Providers (ISPs), standards organizations, and research institutions. It is important to engage them in efforts to ensure the stability of cyberspace.

I will name two examples of multi-stakeholder initiatives to secure cyberspace. The first is the Global Commission on the Stability of Cyberspace (GCSC), which consisted of 28 commissioners from 16 countries, including government officials, and developed principles and norms that states can adopt to ensure a stable and secure cyberspace. For example, it called on states and non-state actors not to pursue, support, or allow cyber operations intended to disrupt the technical infrastructure essential to elections, referenda, or plebiscites.

The second is the CyberPeace Institute, a newly established global NGO that was one year old in December 2020 and has the important goal of protecting the most vulnerable and achieving peace and justice in cyberspace. The institute started its operations by focusing on the healthcare sector, which was under daily attack during the COVID-19 pandemic. As those cyberattacks were a direct threat to human life, the institute called upon governments to stop cyber operations against medical facilities and to protect healthcare.

I believe that there is an opportunity for the states to forge agreements to curb cyberattacks on civilian and private sector infrastructure and to define what those boundaries and redlines should be.

SolarWinds and the recent attacks on healthcare facilities are important milestones, as they offer a live example of the paramount risks associated with a completely unchecked and unregulated cyberspace environment. But it will only prove to be a moment of true and more fundamental reckoning if many of us, governments and other stakeholders alike, play our part, each in our respective roles, by capitalizing on those recent events to force legal, technological, and institutional reform and real change in cyberspace.

The effects of the SolarWinds attack will not only impact US government agencies but also businesses and civilians who are currently less secure online. Bad actors are becoming more aggressive, bold, and reckless, and continue to cross the red lines we considered norms in cyberspace.

Vulnerable civilians are the targets of intrusion tools and spyware in a new cyberspace wild-west landscape. Clearly, additional legal and regulatory scrutiny of private-sector offensive actors, or PSOAs, is required. If PSOA companies are unwilling to recognize the role that their products play in undermining human rights or to address these urgent concerns, then intervention by governments and other stakeholders is needed. 

We no longer have the privilege of ignoring the growing impact of cyberattacks on international law, geopolitics, and civilians. We need a strong and global cybersecurity response. What is required is a courageous multi-stakeholder agenda that redefines historical assumptions and biases about the possibility of establishing new laws and norms that can govern cyberspace.

Changes and reforms are achievable if there is will. The Snowden revelations and the outcry that followed resulted not only in massive changes to the domestic regulation of US foreign intelligence, but they also shaped changes at the European Court of Human Rights, the Court of Justice of the European Union, and the UN. The Human Rights Committee also helped spur the creation of a new UN Special Rapporteur on the Right to Privacy based in Geneva.

The new cyberspace laws, rules, and norms require a multi-stakeholder dialogue process that involves participants from tech companies, academia, civil society, and international law in global discussions that can be facilitated by governments or supported by a specialized international intergovernmental organization.

Sources and References:

http://www.circleid.com/posts/20210105-reshaping-cyberspace-beyond-the-emerging-online-mercenaries/

Bernd Lange sees breakthrough for human rights in EU dual-use export

December 12, 2020


On 11 December 2020, Bernd Lange, Vice-chair of the Group of the Progressive Alliance of Socialists and Democrats in the European Parliament, wrote in New Europe the following piece about how, after six years, a European agreement has been reached on stricter rules for the export of dual-use goods, which can be used for both civilian and military ends.


All good things are worth waiting for. After six long years, negotiators from the European Parliament, the Commission and member states finally agreed on stricter rules for the export of dual-use goods, which can be used for both civilian and military ends. Parliament’s perseverance and assertiveness against a blockade by some of the European Union member states has paid off, in the sense that from now on respect for human rights will become an export standard.

Up until now, export restrictions applied to aerospace items, navigation instruments or trucks. From now on, these rules will also apply to EU produced cyber-surveillance technologies, which demonstrably have been abused by authoritarian regimes to spy on opposition movements; for instance, during the Arab Spring in 2011.

This is a breakthrough for human rights in trade by overcoming years of various EU governments blocking the inclusion of cyber-surveillance technology in the export control rules for dual-use goods. Without a doubt: Technological advances, new security challenges and their demonstrated risks to the protection of human rights required more decisive action and harmonised rules for EU export controls.

Thanks to the stamina of the Parliament, it will now be much more difficult for authoritarian regimes to abuse EU produced cybersecurity tools such as biometric software or Big Data searches to spy on human rights defenders and opposition activists. Our message is clear: economic interests must not take precedence over human rights. Exporters have to shoulder greater responsibility and apply due diligence to ensure their products are not employed to violate human rights. We have also managed to increase transparency by insisting on listing exports in greater detail in the annual export control reports, which will make it much harder to hide suspicious items.

In a nutshell, we are setting up an EU-wide regime to control cyber-surveillance items that are not listed as dual-use items in international regimes, in the interest of protecting human rights and political freedoms. We strengthened member states’ public reporting obligations on export controls, so far patchy, to make the cyber-surveillance sector, in particular, more transparent. We increased the importance of human rights as licensing criterion and we agreed on rules to swiftly include emerging technologies in the regulation.

This agreement on dual-use items, together with the rules on conflict minerals and the soon to be adopted rules on corporate due diligence, is establishing a new gold standard for human rights in EU trade policy.

I want the European Union to lead globally on rules and values-based trade. These policies show that we can manage globalisation to protect people and the planet. This must be the blueprint for future rule-based trade policy.

Frontline’s Guide to Secure Group Chat and Conferencing Tools

July 21, 2020

With teams increasingly working remotely during COVID-19, we are all facing questions regarding the security of our communication with one another: Which communication platform or tool is best to use? Which is the most secure for holding sensitive internal meetings? Which will have adequate features for online training sessions or remote courses without compromising the privacy and security of participants?

Front Line Defenders presents this simple overview which may help you choose the right tool for your specific needs.

FLD Secure Group Chat Flowchart

Download PDF of the flow chart

Note:

  • With end-to-end encryption (e2ee), your message gets encrypted before it leaves your device and only gets decrypted when it reaches the intended recipient’s device. Using e2ee is important if you plan to transmit sensitive communication, such as during internal team or partner meetings.
  • With encryption to-server, your message gets encrypted before it leaves your device, but it is decrypted on the server, processed, and encrypted again before being sent to the recipient(s). Having encryption to-server is OK only if you fully trust the server. (See the sketch after this list for a minimal illustration of the end-to-end idea.)
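Below is a minimal sketch of the end-to-end idea, using the PyNaCl library purely for illustration. Real messengers use more elaborate protocols (such as the Signal protocol, which adds forward secrecy), so this only demonstrates the principle that a relaying server handles ciphertext it cannot read.

```python
# Illustrative sketch of end-to-end encryption using PyNaCl (pip install pynacl).
# This is not how any specific messenger is implemented; it only shows that the
# relaying server never holds a key capable of reading the message.
from nacl.public import PrivateKey, Box

# Each party generates a key pair on their own device; private keys never leave it.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"internal meeting notes")

# The server only ever relays `ciphertext`; it has no key to decrypt it.

# Bob decrypts on his own device with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
plaintext = receiving_box.decrypt(ciphertext)
print(plaintext.decode())  # -> internal meeting notes
```

With encryption to-server, by contrast, the server itself holds key material that can decrypt the message in transit, which is why that model is only acceptable when you fully trust whoever runs the server.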

Why Zoom and other platforms/tools are not listed here: There are many platforms which can be used for group communication. In this guide we focused on those we think will deliver good user experiences and offer the best privacy and security features. Of course, none of the platforms can offer 100% privacy or security; as in all communications, there is a margin of risk. We have not included tools such as Zoom, Skype, Telegram etc. in this guide, as we believe that the margin of risk incurred whilst using them is too wide, and therefore Front Line Defenders does not feel comfortable recommending them.

Surveillance and behaviour: Some companies like Facebook, Google, Apple and others regularly collect, analyse and monetize information about users and their online activities. Most, if not all, of us are already profiled by these companies to some extent. If the communication is encrypted to-server, the owners of the platform may store this communication. Even with end-to-end encryption, metadata about your communication practices, such as location, time, whom you connect with, how often, etc., may still be stored. If you are uncomfortable with this data being collected, stored and shared, we recommend refraining from using services by those companies.

The level of protection of your call depends not only on which platform you choose, but also on the physical security of the space you and others on the call are in and the digital protection of the devices you and others use for the call.


Caution: Use of encryption is illegal in some countries. You should understand and consider the law in your country before deciding on using any of the tools mentioned in this guide.

Criteria for selecting the tools or platforms

Before selecting any communication platform, app or program it is always strongly recommended that you research it first. Below we list some important questions to consider:

  • Is the platform mature enough? How long has it been running for? Is it still being actively developed? Does it have a large community of active developers? How many active users does it have?
  • Does the platform provide encryption? Is it end-to-end encrypted or just to-server encrypted?
  • In which jurisdiction is the owner of the platform based, and where are the servers located? Does this pose a potential challenge for you or your partners?
  • Does the platform allow for self-hosting?
  • Is the platform open source? Does it provide source code to anyone to inspect?
  • Was the platform independently audited? When was the last audit? What do experts say about the platform?
  • What is the history of the development and ownership of the platform? Have there been any security challenges? How have the owners and developers reacted to those challenges?
  • How do you connect with others? Do you need to provide phone number, email or nickname? Do you need to install a dedicated app/program? What will this app/program have access to on your device? Is it your address book, location, mic, camera, etc.?
  • What is stored on the server? What does the platform’s owner have access to?
  • Does the platform have features needed for the specific task/s you require?
  • Is the platform affordable? This needs to include potential subscription fees, the cost of learning and implementing it, possible IT support needed, hosting costs, etc.

The document then proceeds to give more detailed information related to each tool/service listed in this guide:

Signal – https://signal.org/

Delta Chat – https://delta.chat/

Wire – https://wire.com/

Jitsi Meet – https://jitsi.org/jitsi-meet/

BigBlueButton – https://bigbluebutton.org/

Whereby – https://whereby.com

Blue Jeans – https://www.bluejeans.com/

GoToMeeting – https://www.gotomeeting.com/

Facetime / iMessage – https://www.apple.com/ios/facetime

Google Meet – https://meet.google.com/

Duo – https://duo.google.com/

WhatsApp – https://www.whatsapp.com/

Video call, webinar or online training recommendations

Video call recommendations: In the current situation you will undoubtedly find yourself organizing or participating in many more video calls than before. It may not be obvious to everyone how to do this securely, without exposing yourself and your data to too much risk:

  • Assume that when you connect to talk your camera and microphone may be turned on by default. Consider covering your camera with a sticker (making sure it doesn’t leave any sticky residue on the camera lens) and only remove it when you use the camera.
  • You may not want to give away too much information about your house, family pictures, notes on the walls or boards, etc. Be mindful of the background: who and what is in the frame aside from yourself? Test before the call by, for example, opening meet.jit.si and clicking on the GO button to get to a random empty room with your camera switched on, to see what is in the picture. Consider clearing your background of clutter.
  • Also be mindful who can be heard in the background. Maybe close the door and windows, or alert those sharing your space about your meeting.
  • Video call services may collect information on your location and activity, consider using a VPN (see Physical, emotional and digital protection while using home as office in times of COVID-19 guide).
  • It is best to position your face so your eyes are more or less at the upper third of the picture without cutting off your head. Unless you do not want to reveal your face, do not sit with your back to a light or a window. Daylight or a lamp from the front is the best. Stay within the camera frame. You may want to look into the lens from time to time to make “eye contact” with others. If you are using your cellphone, rest it against a steady object (e.g. a pile of books) so that the video picture remains stable.
  • You may want to mute your microphone to prevent others hearing you typing notes or any background noise as it can be very distracting to others on the call.
  • If the internet connection is slow, you may want to switch off your camera, pause other programs, mute the microphone and ask others to do the same. You may also want to try sitting closer to the router, or connecting your computer directly to the router with an ethernet cable. If you share an internet connection with others, you may ask them to reduce their heavy use of the internet for the duration of your call.
  • It is very tempting to multitask, especially during group calls. But you may very soon realise that you are lost in the meeting, and others may realize this too.
  • If this is a new situation for you or you are using a new calling tool, you may want to give yourself a few extra minutes to learn and test it prior to the scheduled meeting to get familiar with options like turning on/off the camera and the microphone, etc.
  • If possible, prepare and test a backup communication plan in case you have trouble connecting with others. For example, add them to a Signal group so you can still text chat or troubleshoot problems on the call. Sometimes it helps to have an alternate browser installed on your computer, or an alternative app on your phone, to try connecting with.

If you would like to organise a webinar or online training, you can use the tools outlined above for group communication. Some of the best practices include:

  • Make sure that you know who is connected. If needed, check the identities of all people participating by asking them to speak. Do not assume you know who is connected only by reading the assigned names.
  • Agree on ground-rules, like keeping cameras on/off, keeping microphone on/off when one is not speaking, flagging when participants would like to speak, who will be chairing the meeting, who will take notes – where and how will those notes be written and then distributed, is it ok to take screenshots of a video call, is it ok to record the call, etc.
  • Agree on clear agendas and time schedules. If your webinar is longer than one hour, it is probably best to divide it into clear one-hour sessions separated by some time agreed with participants, so they have time to have a short break. Plan for the possibility that not all participants will return after a break. Have alternative methods to reach out to them to remind them to return, like Signal/Wire/DeltaChat contacts for them.
  • It is easiest to use a meeting service that participants connect to using a browser without a need to register or install a special program, one that also gives the webinar organiser the ability to mute microphones and close cameras of participants.
  • Prior to the call, check with all participants whether they have particular needs, such as if they are deaf or hard of hearing, if they are visually impaired or blind, or any other conditions which would affect their participation in the call. With this in mind, ensure that the selected platform will accommodate these needs and to be sure, test the platform beforehand. Simple measures can also improve inclusion and participation in your calls, such as turning on cameras when possible, as it can allow for lip-reading.
  • Encourage all participants to speak slowly and to avoid jargon where possible, as the working language of the call is most likely not everyone’s mother tongue. Naturally, there will be moments of silence and pauses; embrace them. They can help to support understanding, can be helpful for participants who are hard of hearing and for interpreters, and will also aid assistive technology in picking up words correctly.

https://www.frontlinedefenders.org/en/resource-publication/guide-secure-group-chat-and-conferencing-tools

The ups and downs in suing the NSO Group

July 20, 2020

Written By Shubham Bose


While Amnesty International was thwarted in its effort in Israel [https://humanrightsdefenders.blog/2020/07/15/amnesty-internationals-bid-to-block-spyware-company-nso-fails-in-israeli-court/ ], a federal US court has passed an order allowing WhatsApp to move forward with its case against the Israeli company for allegedly targeting 1,400 users with malware in 2019. According to reports, it is believed that spyware produced by the Israeli firm NSO Group was used to target various groups of people around the world, such as journalists, human rights defenders, and even politicians. [see: https://humanrightsdefenders.blog/2019/10/30/nso-accused-of-largest-attack-on-civil-society-through-its-spyware/]

Judge Phyllis Hamilton, in her ruling on the cases, stated that she was not convinced by NSO Group’s claims and arguments that it had no hand in targeting WhatsApp users. Moving forward in the trial, the NSO Group might be forced to reveal its clients and make the list public.

The judge also added that even if NSO was operating at the direction of its customer, it still appeared to have a hand in targeting WhatsApp users. As per reports, a WhatsApp spokesperson said the Facebook-owned venture was pleased with the court’s decision and will now be able to uncover the practices of NSO Group.

Even in the face of criticism from privacy advocates, the company has claimed that law enforcement agencies are facing difficulties due to the proliferation of encrypted messaging apps like WhatsApp.

The law firm King & Spalding has reportedly been hired by the NSO Group to represent it. Among the company’s legal team is Rod Rosenstein, the Trump administration’s former deputy attorney general. The NSO Group has reportedly had multiple government clients, such as Saudi Arabia, Mexico, and the United Arab Emirates, who have used its spyware to target political opponents and human rights campaigners.

https://www.republicworld.com/world-news/us-news/whatsapp-lawsuit-against-israeli-firm-nso-group-given-green-light-by-u.html

Anti-Censorship initiative with free VPN accounts for human rights defenders

July 15, 2020

On 14 July, Business Wire reported that the VPN company TunnelBear has partnered with NGOs to give away 20,000 accounts (these NGOs include Access Now, Front Line Defenders, Internews, and one other undisclosed participant).

This program aims to empower individuals and organizations with the tools they need to browse a safe and open internet environment, regardless of where they live. The VPN provider is encouraging other NGOs or media organizations across the world to reach out if they too are in need of support. “At TunnelBear, we strongly believe in an open and uncensored internet. Whenever we can use our technology to help people towards that end, we will,” said TunnelBear Cofounder Ryan Dochuk.

TunnelBear’s VPN encrypts its user’s internet traffic to enable a private and censor-free browsing experience.

“By undergoing and releasing independent audits of their systems, adopting open source tools, and collaborating with the open source community, TunnelBear has proven itself to be an industry leader in the VPN space and a valuable private sector partner within the internet freedom movement. Internews is happy to support TunnelBear in extending its VPN service to the media organizations, journalists, activists, and human rights defenders around the globe who can benefit from it,” said Jon Camfield, Director of Global Technology Strategy at Internews.

Contact: Shames Abdelwahab press@tunnelbear.com

See also: https://humanrightsdefenders.blog/2020/06/23/trump-now-starts-dismanteling-the-open-technology-fund/

https://www.businesswire.com/news/home/20200714005302/en/TunnelBear-Kicks-Anti-Censorship-Initiative-Free-Accounts-Activists

Amnesty International’s bid to block spyware company NSO fails in Israeli court

July 15, 2020

Amnesty International’s bid to block spyware company NSO Group’s international export licence has been shut down in a Tel Aviv court, apparently due to a lack of evidence, as reported by several media, including the New Statesman of 14 July 2020. [see: https://humanrightsdefenders.blog/2019/09/17/has-nso-really-changed-its-attitude-with-regard-to-spyware/ ]

The case argued that the Israeli defence ministry should revoke the group’s export licence in light of numerous allegations that its phone-hacking Pegasus spyware has been used by governments (including Mexico, Saudi Arabia, Morocco and the UAE) to spy on civilians, including an Amnesty International employee, human rights activists, lawyers and journalists.

The district court judge Rachel Barkai wrote in a statement that there was not enough evidence to “substantiate the claim that an attempt was made to monitor a human rights activist”. She wrote that in reviewing materials provided by the Ministry of Defence and Ministry of Foreign Affairs, she was persuaded that export licences were granted as part of a “sensitive and rigorous process”, and closely monitored and revoked if conditions were violated, “in particular in cases of human rights violations.”

Amnesty International decried the court’s decision. Danna Ingleton, acting co-director of Amnesty Tech, said in a statement: “Today’s disgraceful ruling is a cruel blow to people put at risk around the world by NSO Group selling its products to notorious human rights abusers. […] The ruling of the court flies in the face of the mountains of evidence of NSO Group’s spyware being used to target human rights defenders from Saudi Arabia to Mexico, including the basis of this case – the targeting of one of our own Amnesty employees.”

NSO said: “Our detractors, who have made baseless accusations to fit their own agendas, have no answer to the security challenges of the 21st century. Now that the court’s decision has shown that our industry is sufficiently regulated, the focus should turn to what answer those who seek to criticise NSO have to the abuse of encryption by nefarious groups.”

The NSO Group is currently embroiled in another lawsuit brought by WhatsApp, which alleges that Pegasus spyware was used to hack more than a thousand of the messaging platform’s users. [see: https://humanrightsdefenders.blog/2019/10/30/nso-accused-of-largest-attack-on-civil-society-through-its-spyware/]

https://tech.newstatesman.com/security/amnesty-international-nso-group-export-licence

Trump now starts dismantling the Open Technology Fund

June 23, 2020

Raphael Mimoun wrote in Newsweek of 22 June 2020 an opinion piece “Dictators are Besieging Internet Freedom—and Trump Just Opened the Gates”. It is a detailed piece but worth reading:


Last week, the Trump administration started dismantling one of the US government’s most impactful agencies, the Open Technology Fund, which supports projects to counteract repressive censorship and surveillance around the world.

The Open Technology Fund, or OTF, is relatively new, founded in 2012 as a program of the government-backed Radio Free Asia. In 2019, it became an independent non-profit reporting to the US Agency for Global Media (USAGM). Since its founding, the organization has funded dozens of projects now part of the toolkit of millions of rights advocates and journalists around the world. But OTF is now under attack: the new leadership of USAGM, appointed just weeks ago, fired the leadership of all USAGM entities, including OTF, dismissed OTF’s independent and bipartisan board of directors, and is threatening to hollow out OTF altogether….

Many of those tools help those who need them most, where surveillance, censorship, and repression are most acute. Just last month, Delta Chat declined a request for user data from Russia’s communication regulator — because the security architecture developed with OTF support meant it did not have any data to hand over. FreeWechat, which publishes posts censored by the Chinese government on the app WeChat, has been visited over 7 million times by Chinese speakers. Dozens more OTF-funded tools enable millions to evade surveillance by autocratic governments and access the open internet, from Cuba to Hong Kong and Iran.

OTF’s work is critical to human rights defenders and journalists, but it brings privacy and security far beyond those groups. OTF only supports open-source projects, meaning that the code used must be available for anyone to view and reuse……….

But OTF’s work on internet freedom isn’t limited to funding technology development. The organization takes a holistic approach to internet freedom, providing life-saving training and capacity-building to groups directly targeted by cyberattacks, harassment, and violence: LGBTQI advocates in Indonesia, journalists in Mexico, civic activists in Belarus, or exiled Tibetan organizations. OTF also funds events bringing together researchers, technologists, policy-makers, and advocates. Those gatherings—whether global like the Internet Freedom Festival or focused on specific countries or regions like the Iran Cyber Dialogue, the Vietnam Cyber Dialogue, or the Forum on Internet Freedom in Africa—have been transformative. They have helped build a tight community in a space where trust is hard to achieve. Without such events, many of the projects, tools, and collaborations to circumvent censorship and counter surveillance would not exist.

See also: https://www.theverge.com/2020/6/23/21300424/open-technology-fund-usagm-circumvention-tools-china-censorship-michael-pack

https://www.newsweek.com/open-technology-fund-trump-dismantling-1512614