Posts Tagged ‘information technology companies’

Beyond WhatsApp and NSO – how human rights defenders are targeted by cyberattacks

May 14, 2019

Several reports have shown Israeli technology being used by Gulf states against their own citizens (AFP/File photo)

NSO Group has been under increased scrutiny after a series of reports about the ways in which its spyware programme has been used against prominent human rights activists. Last year, a report by the Citizen Lab, a research group at the University of Toronto, showed that human rights defenders in Saudi Arabia, the United Arab Emirates and Bahrain were targeted with the software.

In October, US whistleblower Edward Snowden said Pegasus had been used by the Saudi authorities to surveil journalist Jamal Khashoggi before his death. “They are the worst of the worst,” Snowden said of the firm. Amnesty International said in August that a staffer’s phone was infected with the Pegasus software via a WhatsApp message.

——-

Friedhelm Weinberg‘s piece of 1 May is almost prescient and contains good, broader advice:

When activists open their inboxes, they find more than the standard spam messages telling them they’ve finally won the lottery. Instead, they receive highly sophisticated emails that look like they are real, purport to be from friends and invite them to meetings that are actually happening. The catch is: at one point the emails will attempt to trick them.

1. Phishing for accounts, not compliments

In 2017, the Citizen Lab at the University of Toronto and the Egyptian Initiative for Personal Rights documented what they called the “Nile Phish” campaign, a set of emails luring activists into giving access to their most sensitive accounts – email and file-sharing tools in the cloud. The Seoul-based Transitional Justice Working Group recently warned on its Facebook page about a very similar campaign. As attacks like these have mounted in recent years, civil society activists have come together to defend themselves, support each other and document what is happening. The Rarenet is a global group of individuals and organizations that provides emergency support for activists – but together it also works to educate civil society actors to dodge attacks before damage is done. The Internet Freedom Festival is a gathering dedicated to supporting people at risk online, bringing together more than 1,000 people from across the globe.

The emails from campaigns like Nile Phish may be cunning and carefully crafted to target individual activists – but they are not cutting-edge technology. Protection is stunningly simple: do nothing. Simply don’t click the link and don’t enter information – as hard as that is when you are promised something in return.

Often digital security is about being calm and controlled as much as it is about being savvy in the digital sphere. And that is precisely what makes it difficult for passionate and stressed activists!
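
For readers who want a sense of how little technology these lures actually involve, here is a minimal, hypothetical sketch (Python, standard library only, not a vetted security tool) of the check the advice boils down to: list where the links in a suspicious message really point, so they can be compared with the sender the message claims to come from.

# phish_links.py: a minimal sketch (not a vetted tool) that lists the real
# destinations of the links inside a saved email, so the recipient can compare
# them with the organisation the message claims to come from before clicking.
import re
import sys
from email import policy
from email.parser import BytesParser
from urllib.parse import urlparse

def link_domains(eml_path):
    """Return the claimed sender and the set of domains the body links to."""
    with open(eml_path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)
    body = msg.get_body(preferencelist=("html", "plain"))
    text = body.get_content() if body else ""
    # Crude URL extraction; real phishing kits use many more obfuscation tricks.
    urls = re.findall(r'https?://[^\s"\'<>]+', text)
    return msg.get("From", "unknown sender"), {urlparse(u).netloc.lower() for u in urls}

if __name__ == "__main__":
    sender, domains = link_domains(sys.argv[1])   # path to a saved .eml file
    print("Message claims to be from:", sender)
    for domain in sorted(domains):
        print("Link points to:", domain)

If the linked domains do not match the organisation the message pretends to be, that is exactly the moment to do nothing.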

2. The million-dollar virus

Unfortunately, calm is not always enough. Activists have also been targeted with sophisticated spyware that is incredibly expensive to procure and difficult to spot. Ahmed Mansoor, a human rights defender from the United Arab Emirates, received messages with malware (commonly known as computer viruses) that cost one million dollars on the grey market, where unethical hackers and spyware firms meet. [See also: https://humanrightsdefenders.blog/2016/08/29/apple-tackles-iphone-one-tap-spyware-flaws-after-mea-laureate-discovers-hacking-attempt/]

Rights defender Ahmed Mansoor in Dubai in 2011, a day after he was pardoned following a conviction for insulting UAE leaders. He is now in prison once more. Image: Reuters/Nikhil Monteiro

3. Shutting down real news with fake readers

Both phishing and malware are attacks directed against the messengers, but there are also attacks against the message itself. This is typically achieved by directing hordes of fake readers to the real news – that is, by sending so many requests through bot visitors to websites that the servers break down under the load. Commonly referred to as “denial of service” attacks, these bot armies have also earned their own response from civil society. Specialised packages from Virtual Road or Deflect sort fake visitors from real ones to make sure the message stays up.
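
Services such as Deflect and Virtual Road rely on far more sophisticated filtering than this, but the underlying idea can be shown in a small, hypothetical sketch (Python, with invented thresholds): count each client's requests over a short window and treat anything far beyond human reading speed as part of the flood.

# rate_limit.py: a minimal sketch of the idea behind sorting fake visitors
# from real ones. A human reader requests a page a handful of times a minute;
# a bot in a flood sends far more, so a per-client request counter over a
# short window filters most of the junk. Thresholds here are invented.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10    # look only at the most recent 10 seconds of traffic
MAX_REQUESTS = 20      # more than this per window is treated as bot-like

_history = defaultdict(deque)   # client address -> timestamps of recent requests

def allow_request(client_ip, now=None):
    """Return True if the request looks human-paced, False if it should be dropped."""
    now = time.monotonic() if now is None else now
    recent = _history[client_ip]
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()            # forget requests older than the window
    if len(recent) >= MAX_REQUESTS:
        return False                # too many recent requests: likely a bot
    recent.append(now)
    return True

if __name__ == "__main__":
    # A reader making 5 requests gets through; a bot firing 500 is cut off at 20.
    print(all(allow_request("203.0.113.5", now=i * 0.5) for i in range(5)))
    print(sum(allow_request("198.51.100.9", now=i * 0.01) for i in range(500)))

In practice attackers rotate addresses and mimic real browsers, which is why the commercial mitigation services combine many signals rather than a single counter.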

 

A chart showing how distributed denial of service (DDoS) attacks have grown over time. Image: Kinsta.com; data from EasyDNS

Recently, these companies also started investigating who is behind these attacks – a notoriously difficult task, because it is so easy to hide traces online. Interestingly, whenever Virtual Road were so confident in their findings that they publicly named attackers, the attacks stopped. Immediately. Online, as offline, one of the most effective ways to ensure that attacks end is to name the offenders, whether they are cocky kids or governments seeking to stifle dissent. But more important than shaming attackers is supporting civil society’s resilience and capacity to weather the storms. For this, digital leadership, trusted networks and creative collaborations between technologists and governments will pave the way to an internet where the vulnerable are protected and spaces for activism are thriving.

——–

Microsoft cites human rights concerns in turning down facial-recognition sales

April 30, 2019

FILE PHOTO: The Microsoft sign is shown on top of the Microsoft Theatre in Los Angeles, California, U.S. October 19, 2018. REUTERS/Mike Blake

Joseph Menn reported on 16 April 2019 on kfgo.com that Microsoft had rejected a California law enforcement agency’s request to install facial recognition technology in officers’ cars and body cameras because of human rights concerns. Microsoft concluded it would lead to innocent women and minorities being disproportionately held for questioning, because the artificial intelligence has been trained mostly on images of white and male faces. AI has more cases of mistaken identity with women and minorities, multiple research projects have found.
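
The disparity those projects describe is usually reported as a gap in error rates between demographic groups. The sketch below is purely illustrative (Python, with invented numbers rather than any real system's output), but it shows the kind of per-group false match rate comparison such research relies on.

# error_rates.py: a hypothetical illustration of how researchers quantify the
# bias described above. The same face-matching system is scored separately per
# demographic group, and the false match rate (people who are not in the
# suspect database but get flagged anyway) is compared across groups.
# Every record below is invented for the sake of the example.
from collections import Counter

# Each record: (group, system_flagged_a_match, person_is_actually_in_database)
results = [
    ("white men",            True,  True),
    ("white men",            False, False),
    ("white men",            False, False),
    ("white men",            True,  True),
    ("women and minorities", True,  False),
    ("women and minorities", True,  True),
    ("women and minorities", True,  False),
    ("women and minorities", False, False),
]

false_matches = Counter()
innocent_scans = Counter()
for group, flagged, in_database in results:
    if not in_database:             # the person scanned is not a suspect
        innocent_scans[group] += 1
        if flagged:                 # ...but the system matched them anyway
            false_matches[group] += 1

for group in sorted(innocent_scans):
    rate = false_matches[group] / innocent_scans[group]
    print(f"{group}: false match rate {rate:.0%}")

A gap between those two rates is what translates, on the street, into one group being stopped and questioned more often than the other.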

“Anytime they pulled anyone over, they wanted to run a face scan” against a database of suspects, company President Brad Smith said, without naming the agency. After thinking through the uneven impact, “we said this technology is not your answer.” Speaking at a Stanford University conference on “human-centered artificial intelligence,” Smith said Microsoft had also declined a deal to install facial recognition on cameras blanketing the capital city of an unnamed country that the nonprofit Freedom House had deemed not free. Smith said it would have suppressed freedom of assembly there.

On the other hand, Microsoft did agree to provide the technology to an American prison, after the company concluded that the environment would be limited and that it would improve safety inside the unnamed institution. Smith explained the decisions as part of a commitment to human rights that he said was increasingly critical as rapid technological advances empower governments to conduct blanket surveillance, deploy autonomous weapons and take other steps that might prove impossible to reverse….

Smith has called for greater regulation of facial recognition and other uses of artificial intelligence, and he warned Tuesday that without that, companies amassing the most data might win the race to develop the best AI in a “race to the bottom.”

He shared the stage with the United Nations High Commissioner for Human Rights, Michelle Bachelet, who urged tech companies to refrain from building new tools without weighing their impact. “Please embody the human rights approach when you are developing technology,” said Bachelet, a former president of Chile.

[see also my older: https://humanrightsdefenders.blog/2015/11/19/contrasting-views-of-human-rights-in-business-world-bank-and-it-companies/]

https://kfgo.com/news/articles/2019/apr/16/microsoft-turned-down-facial-recognition-sales-on-human-rights-concerns/

Big Brother Awards try to identify risks for human rights defenders

February 24, 2019

Novalpina urged to come clean about targeting human rights defenders

February 19, 2019

In an open letter released today, 18 February 2019, Amnesty International, Human Rights Watch and five other NGOs urged Novalpina to publicly commit to accountability for NSO Group’s past spyware abuses, including the targeting of an Amnesty International employee and the alleged targeting of Jamal Khashoggi. [see also: https://humanrightsdefenders.blog/2016/08/29/apple-tackles-iphone-one-tap-spyware-flaws-after-mea-laureate-discovers-hacking-attempt/]

Danna Ingleton, Deputy Director of Amnesty Tech, said: “Novalpina’s executives have serious questions to answer about their involvement with a company which has become the go-to surveillance tool for abusive governments. This sale comes in the wake of reports that NSO paid private operatives to physically intimidate individuals trying to investigate its role in attacks on human rights defenders – further proof that NSO is an extremely dangerous entity.

“We are calling on Novalpina to confirm an immediate end to the sale or further maintenance of NSO products to governments which have been accused of using surveillance to violate human rights. It must also be completely transparent about its plans to prevent further abuses.

“This could be an opportunity to finally hold NSO Group to account. Novalpina must commit to fully engaging with investigations into past abuses of NSO’s spyware, and ensure that neither NSO Group nor its previous owners, Francisco Partners, are let off the hook.”

The signatories to the letter are:

  • Amnesty International
  • R3D: Red en Defensa de los Derechos Digitales
  • Privacy International
  • Access Now
  • Human Rights Watch
  • Reporters Without Borders
  • Robert L. Bernstein Institute for Human Rights, NYU School of Law and Global Justice Clinic, NYU School of Law

https://www.amnesty.org/en/latest/news/2019/02/spyware-firm-buyout-reaffirms-urgent-need-for-justice-for-targeted-activists/

https://www.amnesty.org/en/latest/research/2019/02/open-letter-to-novalpina-capital-nso-group-and-francisco-partners/

NGOs express fear that new EU ‘terrorist content’ draft will make things worse for human rights defenders

January 31, 2019

On Wednesday 30 January 2019 Mike Masnick published a piece in TechDirt entitled “Human Rights Groups Plead With The EU Not To Pass Its Awful ‘Terrorist Content’ Regulation”. The key argument is that machine-learning algorithms are not able to distinguish between terrorist propaganda and investigations of, say, war crimes. It points out, as an example, that Germany’s anti-“hate speech” law has proven to be misused by authoritarian regimes.

LinkedIn reverses censorship position re Zhou Fengsuo’s profile

January 7, 2019

Zhou Fengsuo –  Getty Images

On 3 January, LinkedIn sent Zhou a message saying his profile and activities would not be viewable to users in China because of “specific content on your profile” (without saying which content!). Hours later, Microsoft-owned LinkedIn reversed its decision, apparently after South China Morning Post reporter Owen Churchill brought attention to the case. See the exchange below:

————-

About the CLOUD Act and lists of ‘safe countries’

March 17, 2018

For the weekend, two long pieces (copied below in full) about a seemingly technical issue, but one that could have big consequences for human rights defenders. The key issue is that a foreign government that wants to obtain information on a social media user from a US tech company (such as Microsoft, Google, Apple or Facebook) currently has to go through a cumbersome diplomatic procedure (MLATs – mutual legal assistance treaties). The draft CLOUD Act (Clarifying Lawful Overseas Use of Data) proposes to make it easier for governments to get these data directly from the companies – and here is the tricky part – as long as those foreign governments are on a kind of ‘safe list’ with regard to human rights. That is where the questions come in, according to the specialists below. And there are quite a few other worries.

Human Rights Groups Denounce Proposed Global Data Sharing

(CN) – With a pleasant-sounding name and acronym, the CLOUD Act stands for Clarifying Lawful Overseas Use of Data, but human rights groups take a far less sunny view of the bill than the tech giants pushing for its passage through Congress.

With the bill possibly heading to Capitol Hill next week, Microsoft, Google, Apple and Facebook have lined up behind the legislation, which overhauls how tech companies share data with foreign governments without notification or oversight.

Amnesty International’s U.S. director Naureen Shah depicted the legislation as a dystopic threat to human rights and press freedom globally while explaining her “grave misgivings” with the bill.

“The CLOUD Act jeopardizes the lives and safety of thousands of human rights defenders around the world at a time when they face unprecedented threats, intimidation and persecution, as we have documented in recent years,” Shah told reporters at a press conference on Thursday.

The CLOUD Act’s proponents and critics agree that the bill arose from the need to plug a gap in domestic and international law.

For decades, foreign governments requesting information from a U.S. company would have to work through diplomatic procedures known as MLATs, short for mutual legal assistance treaties.

“This process – from a privacy and human rights standpoint – is fairly rights-respecting,” the American Civil Liberties Union’s counsel Neema Singh Guliani said at a press conference from Washington.

For U.S. and foreign prosecutors, the MLAT process is cumbersome and gives the targets of criminal investigations cover to hide incriminating data in servers abroad.

This controversy came to a head in 2013, when New York federal prosecutors sought to circumvent the process to obtain emails of a target of a drug-trafficking investigation held on Microsoft’s servers in Dublin, Ireland. Microsoft went to court to protect the privacy of its users, waging a protracted legal battle currently pending before the U.S. Supreme Court.

Perhaps unwilling to gamble on Supreme Court victory, Microsoft and other companies have backed the CLOUD Act as an alternative.

“One of the things the bill would do is that it would moot the Microsoft Ireland case,” the ACLU’s Guliani noted.

For rights groups, however, Congress’ solution would be worse than the problem. The CLOUD Act lets countries that pass an unspecified human rights vetting bypass case-by-case government review and work directly with tech companies on information requests.

“We’re essentially relying on tech companies to be a kind of failsafe,” Shah told reporters.

Once a foreign government is safe-listed, Shah said, that nation can freely request information held by tech companies without congressional oversight for any particular request for five years.

That remains true even if a foreign government’s human rights record undergoes a dramatic decline during those years, as happened in Turkey over the last half decade.

“That’s a problem because we see governments around the world in a human rights freefall,” Shah noted.

Amnesty International has unique insight into that danger: The Turkish government jailed its Turkey chair Taner Kilic in an ongoing crackdown on journalists, human rights workers, and other critical voices that country has targeted in the wake of a coup attempt against its President Recep Tayyip Erdogan.

“If you had looked at Turkey in 2012 or 2013, and matched it against the criteria in this bill, Turkey might have passed muster,” Shah said. “Of course, we know that especially since the coup in mid-2016, Turkey has become the world’s largest jailer of journalists.”

“More than 50,000 people at this point in Turkey have been swept up in their crackdown, including the chair and the director of Amnesty International, who were held, one of whom remains in prison, both of whom are being charged with terrorism offenses,” she added.

Under the CLOUD Act, Shah said, Congress would not be able to intervene if a safe-listed nation followed Turkey’s path.

Should that system fail, it is unclear that either the target of a foreign government’s investigation or the U.S. government would even know it.

The CLOUD Act offers the promise of subjecting governments to compliance reviews, but Guliani, the ACLU’s counsel, called this measure meaningless without individualized notice to users or the federal government.

“How can there be real compliance reviews if the U.S. government isn’t getting notice of individual requests?” she asked.

Guliani added that the CLOUD Act would also enable other governments to circumvent Wiretap Act restrictions against real-time interception.

Opposition from civil society groups has kicked into high gear out of fears that the CLOUD Act may get attached to an omnibus budget bill heading next week to Congress.

Joining the ACLU and Amnesty International, a coalition of 22 other groups signed a letter to elected representatives last week stating: “We urge you to oppose the CLOUD Act, and efforts to attach it to other pieces of legislation.”

As the omnibus budget has not yet been released, it is unclear whether that fear will come to pass.

—–

The CLOUD Act Doesn’t Help Privacy and Human Rights: It Hurts Them

By Neema Singh Guliani, Naureen Shah

Friday, March 16, 2018

At a time when human rights activists, dissidents and journalists around the world face unprecedented attacks, we cannot afford to weaken our commitment to human rights. But the recently introduced CLOUD Act would do just that.

The bill purports to address complaints that current mechanisms for foreign governments to obtain data from U.S. technology companies are slow, requiring review by the Justice Department and a warrant issued by a U.S. judge pursuant to the mutual legal assistance (MLA) process. The solution it proposes, however, is a dangerous abdication of responsibility by the U.S. government and technology companies.

Writing on Lawfare, Peter Swire and Jennifer Daskal have penned a defense of the CLOUD Act, arguing that things don’t work well now, that they could get worse and that this is the best option on the table. But even if we accept Daskal and Swire’s dire view of the state of current affairs, their argument leaves a lot unexplained—such as why an alternative framework or improved version of the CLOUD Act is not tenable, why efforts to pass the bill without any public markups of the legislation or the opportunity for amendments are advisable, and why no major international human rights organizations support it. Two of the largest human rights organizations, Amnesty International and Human Rights Watch, oppose the bill, along with over twenty other privacy and civil liberties organizations. (Swire and Daskal do note that some of these groups participated in a working group on this issue, though they don’t describe the strenuous objections made during that process.)

Most importantly, however, Daskal and Swire do not address how this bill could fail human rights activists and people around the world.

The very premise of the current CLOUD Act—the idea that countries can effectively be safe-listed as human-rights compliant, such that their individual data requests need no further human rights vetting—is wrong. The CLOUD Act requires the executive branch to certify each of these foreign governments as having “robust substantive and procedural protections for privacy and civil liberties” written into their domestic law. But many of the factors that must be considered provide merely a formalistic and even naïve measure of a government’s behavior. Flip through Amnesty International or Human Rights Watch’s recent annual reports, and you can find a dizzying array of countries that have ratified major human rights treaties and reflect those obligations in their domestic laws but, in fact, have arrested, tortured and killed people in retaliation for their activism or due to their identity.

In the case of countries certified by the executive branch, the CLOUD Act would not require the U.S. government to scrutinize data requests by the foreign governments—indeed, the bill would not even require notifying the U.S. government or a user regarding a request. The only line of defense would be technology companies, which hypothetically could refuse the request and refer it to the MLA process, but which may not have the resources, expertise, or even financial incentive to deny a foreign government request. Likewise, the bill requires that countries permit “periodic” reviews for compliance with civil liberties and privacy protections, but does not specify what these reviews will entail. It also doesn’t require even a cursory individual review of all orders or explain how the U.S. government can effectively ensure compliance in a timely fashion without being aware of requests in real time. For this reason, the periodic U.S. government reviews contemplated in the bill are an insufficient substitute for case-by-case consideration.

Daskal and Swire point to other safeguards: Judges or independent authorities in the foreign country would review their government’s requests for data, they argue. But what about when courts greenlight, rather than check, police and intelligence services going after human rights activists? This is not a problem confined to a small set of countries. In 2016, Amnesty International recorded numerous countries in which human rights defenders were detained or arrested based solely on their work.

Similarly, the CLOUD Act would not prevent harm to human rights activists and minorities in cases where a country experiences a rapid deterioration in human rights. Under the CLOUD Act, once a foreign government gets an international agreement, it is safe-listed for five years—with no built-in mechanism to ensure that the U.S. government acts quickly when there is a rapid change in circumstances.

For example, in early 2014, Turkey may have met the CLOUD Act’s vague human rights criteria; Freedom House even gave it a three and a four on its indices for political and civil rights. But since the attempted coup in mid-2016, the Turkish government has arrested tens of thousands of people—including journalists and activists such as the chair and director of Amnesty International’s Turkey section—many on bogus terrorism charges. According to one report: “Most of these accusations of terrorism are based solely on actions such as downloading data protection software, including the ByLock application, publishing opinions disagreeing with the Government’s anti-terrorism policies, organizing demonstrations, or providing legal representation for other activists.”

Under the CLOUD Act, neither Congress nor U.S. courts would be able to prompt a review or a temporary moratorium for a case like Turkey. Users, without notice, would have little practical ability to lodge complaints with the U.S. government or providers. Even if the U.S. government were to take action, the CLOUD Act fails to ensure a sufficiently quick response to protect activists and others whose safety could be threatened.

In such a situation, the only real fail-safe to prevent a technology company from inadvertently acceding to a harmful data request is the technology company itself. But would even a well-intentioned technology company, particularly a small one, have the expertise and resources to competently assess the risk that a foreign order may pose to a particular human rights activist? Would it know, as in the example above, when to view Turkey’s terrorism charges in a particular case as baseless? In many cases, companies would likely rely on the biased assessments by foreign courts and fulfill requests.

Daskal and Swire argue that without the CLOUD Act, foreign governments with poor privacy standards will turn to data localization, which would pose greater human rights risks. But if the bill’s criteria are as strong as needed to protect privacy and human rights, those same foreign governments will not qualify for an international agreement—and so they may still push for data localization. The bill also does nothing to prevent a foreign government with an international agreement from pursuing data localization. If a technology company refused a government’s requests, the government could threaten to retaliate with localization and pressure the company to comply.

Finally, Swire and Daskal fail to address the CLOUD Act’s numerous ambiguities as to what human rights standards are a predicate to inclusion in the new data club the bill purports to create. Indeed, many of the criteria listed are merely factors that must be considered, not mandatory requirements. To highlight just a handful of the deficiencies in the bill:

  • The bill states that the Justice Department must consider whether a country respects free expression, without stating whether free expression is defined under U.S. law, international law, or a country’s own domestic law;
  • The bill states the Justice Department must consider whether a country respects “international universal human rights” without definition or clarity regarding how to assess this (indeed, this is not a recognized term in U.S. or international law);
  • The bill requires that requests be based on “articulable and credible facts, particularity, legality, and severity regarding the conduct under investigation”—a standard that is, at best, vague and subject to different interpretations, and is likely lower than the current probable cause standard applied to requests;
  • The bill fails to prohibit agreements in cases in which a country has a pattern or practice of engaging in human rights abuses, nor does it require an assessment as to whether there is effective central control of law enforcement or intelligence units;
  • The bill fails to require that countries meet any standards for metadata requests—leaving companies free to provide this data to human rights abusing countries without restriction;
  • For the first time, the bill allows foreign governments to wiretap and intercept communications in real-time, without even requiring governments to adhere to critical privacy protections in the Wiretap Act (such as notice, probable cause, or a set duration); and
  • The bill permits broad information sharing between governments, allowing countries (including the U.S.) to obtain information from foreign partners under standards that may be lower than their own domestic law.

These ambiguities provide the Justice Department with significant flexibility regarding the human rights standards a country must meet. What’s more, there’s no way for Congress or the judicial branch to practically act as a check in cases in which the executive branch makes the wrong decision. Country determinations are not subject to U.S. judicial review, and Congress would need to pass legislation within 90 days, likely with a veto-proof majority, to stop an agreement from going into effect—an extremely high hurdle that will be difficult to overcome.

In light of this, it’s far from clear that, as Daskal and Swire write, the bill “will raise privacy protections on a global scale.” If members of Congress and technology companies want to address concerns with the MLA process while protecting privacy and human rights, they should abandon the CLOUD Act and craft a rights-respecting solution. 

https://www.courthousenews.com/privacy-groups-denounce-proposed-global-data-sharing/

http://www.lawfareblog.com/cloud-act-doesnt-help-privacy-and-human-rights-it-hurts-them

see also related:

https://humanrightsdefenders.blog/2014/11/27/united-nations-declares-again-that-mass-surveillance-threatens-the-right-to-privacy/

https://humanrightsdefenders.blog/2014/12/02/ngos-concerned-about-alarming-proliferation-of-surveillance-technologies-to-repressive-countries-the-wassenaar-arrangement/

https://humanrightsdefenders.blog/2013/05/23/facebook-joins-the-global-network-initiative-for-human-rights/

European Parliament votes to restrict exports of surveillance equipment

January 22, 2018

Members of the European Parliament have voted to curb export of surveillance equipment to states with poor human rights records, following mounting evidence that equipment supplied by companies in Europe has been used by oppressive regimes to suppress political opponents, journalists and campaigners. MEPs in Strasbourg agreed on 17 January to extend EU export controls to include new restrictions on the export of surveillance equipment, including devices for intercepting mobile phones, hacking computers, circumventing passwords and identifying internet users. The proposals also seek to remove encryption technologies from the list of technologies covered by EU export controls, in a move which aims to make it easier for people living in oppressive regimes to gain access to secure communications which can circumvent state surveillance.

“Dictators spy on their citizens using EU cyber-surveillance. This must stop. The EU cannot contribute to the suffering of courageous activists, who often risk their lives for freedom and democracy,” said MEP Klaus Buchner, European Parliament rapporteur. “We are determined to close dangerous gaps in the export of dual-use goods and call on member states to follow suit.”

The proposed changes to the EU dual-use export control regime are likely to face opposition from the defence industry and governments, as the European Parliament and the European Commission prepare to negotiate their implementation with Europe’s 28 member states.

European technology companies, including UK firms, have supplied equipment that  has been used for arresting, torturing, and killing people in Iran, Egypt, Ethiopia, and Morocco, according to the European Parliament. An investigation by Computer Weekly revealed that the UK government had approved export licences to Gamma International (UK) to supply mobile phone interception equipment, known as IMSI catchers, to Macedonia, when the regime was engaged in a massive illegal surveillance operation against the public and political opponents.

And the UK’s largest arms manufacturer, BAE Systems, has exported equipment capable of mass internet surveillance to countries that campaigners say regularly commit human rights abuses, including Saudi Arabia, Qatar, Oman, Morocco and Algeria. An overwhelming majority of MEPs supported reforms to the EU’s export control regime, which will require member states to deny export licences if the export of surveillance technology is likely to lead to a serious impact on human rights in the destination country. The proposed changes, backed by 571 votes to 29 against, with 29 abstentions, will impose tough requirements on EU governments.

Member states will be required to assess the likely impact of surveillance technology on citizens’ right to privacy, freedom of speech, and freedom of association, in the destination country before they grant  export licences – a significant step up from current levels of scrutiny.

The proposed rules contain safeguards, however, that will allow legitimate cyber-security research to continue. Companies exporting products that are not specifically listed will be expected to follow the OECD’s “due diligence” guidelines, if there is a risk they could support human-rights violations.

Improved transparency measures will require member states to record and make data on approved and declined export licences publicly available, opening up the secretive global trade in surveillance technologies to greater public scrutiny.

http://www.computerweekly.com/news/252433519/European-Parliament-votes-to-restrict-exports-of-surveillance-equipment

Security Without Borders offers free security help to human rights defenders

January 10, 2017

Network World of 3 January 2017 carried an interesting piece on Claudio Guarnieri, who launched Security Without Borders, which offers free cybersecurity help to journalists, activists and human rights defenders.

For all the wonderful things that the internet has given us, the internet has also been turned into a tool for repression. Nation states have deep pockets and use the imbalance to their own advantage. Technology has been used “to curb dissent, to censor information, to identify and monitor people.” Billions of dollars have been poured into surveillance—both passive and active. Sadly, electronic surveillance and censorship have become so commonplace that nowadays people can get arrested for a tweet. There are places where dissidents are hunted down, where using crypto is illegal, where sites are blocked and where even internet access can be cut off. “Those who face imprisonment and violence in the pursuit of justice and democracy cannot succeed if they don’t communicate securely as well as remain safe online.”

Security “is a precondition for privacy, which is the key enabler for freedom of expression.” He was not implying that the security should come from big firms, either, since big security businesses often need contracts with the government and are dependent on the national security sector. So, Guarnieri turned to the hacker community and launched Security Without Borders, which “is an open collective of hackers and cybersecurity professionals who volunteer with assisting journalists, human rights defenders, and non-profit organizations with cyber security issues.”


The Security Without Borders website has a big red button labeled “Request Assistance.” Activists, journalists and human rights defenders are encouraged to reach out for help. The group of “penetration testers, malware analysts, developers, engineers, system administrators and hackers” from all walks of life offers cybersecurity help: “We can assist with web security assessments, conduct breach investigations and analysis, and generally act as an advisor in questions pertaining to cybersecurity.” As security services are often expensive to come by, SWB offers these services free to organizations and people fighting against human rights abuse, racism, and other injustices.

When requesting help, you are asked to give your name or organization’s name, an email address, a description of the work you do and what kind of help you need. Hackers and computer security geeks who support freedom of speech are also encouraged to reach out and volunteer their skills.

There are still ongoing discussions on the mailing list about issues such as trust and where to draw the line in extending free help to specific groups. Security Without Borders is just getting off the ground, and will have to deal with some of the same problems that earlier efforts in this area have faced; see e.g.: https://humanrightsdefenders.blog/2016/08/25/datnav-new-guide-to-navigate-and-integrate-digital-data-in-human-rights-research/ and https://humanrightsdefenders.blog/2016/10/31/protecting-human-rights-defenders-from-hackers-and-improving-digital-security/

Sources:

Security Without Borders: Free security help for dissidents | Network World

http://motherboard.vice.com/read/hacker-claudio-guarnieri-security-without-borders-political-dissidents

Contrasting views of human rights in business: World Bank and IT companies

November 19, 2015

Here are two contrasting statements on the theme of business and human rights. One describes the hesitation of the World Bank to apply human rights criteria and even to use the words ‘human rights’ (posted in the Huffington Post of 18 November 2015 by Nezir Sinani [www.twitter.com/NezirSinani] and Julia Radomski); the other is a piece written by Owen Larter and Nicolas Patrick entitled “Microsoft & DLA Piper – Why Human Rights and Human Rights Defenders are Right for our Business” [published in the ISHR Monitor on 27 October 2015].