Posts Tagged ‘information technology companies’

How can the human rights defenders use new information technologies better?

November 28, 2019

Mads Gottlieb (twitter: @mads_gottlieb) wrote in Impakter about Human Rights, Technology and Partnerships, and stated that these technologies have the potential to tremendously facilitate human rights defenders in their work, whether they are used to document facts about investigations or as preventive measures to avoid violations. His main message in this short article is an appeal to the human rights sector at large: use technology more creatively, make technology upgrades a top priority, and engage with the technology sector in this difficult endeavor. The human rights sector will never be able to develop the newest technologies itself, but the opportunities that technology provides are something it needs to make use of now, in collaboration with the technology sector.

…Several cases show that human rights are under threat, and that it is difficult to investigate and gather the necessary facts in time to protect them. Duterte in the Philippines ordered the police to shoot activists who demonstrated against extra-judicial killings. He later tried to reduce the funding of the Philippines National Human Rights Commission to 1 USD a year. This threat followed a period of 15 months of investigating the killings, to which Duterte responded with the claim that the Commission was “useless and defended criminals’ rights.”

Zimbabwe is another country with a difficult environment for human rights defenders. It is not surprising that few people speak out, since the few who dare to demonstrate or voice opposing political views disappear. A famous example is an activist and journalist from Occupy Africa Unity Square. He was allegedly beaten in 2014, and in 2015 he went missing and was never found. His disappearance occurred after a period of public demonstrations against Mugabe’s regime. Adding to the challenging conditions that call for better tools to defend human rights is the fact that many European countries are digitalising their public services. The newly introduced data platforms store and process sensitive information about the population, such as gender, ethnicity, sexual orientation, past health records, etc. Such information can easily be used for discriminatory purposes, whether intentionally or not.

Human rights defenders typically struggle to find adequate resources for their daily operations, and as a result investments in technology often come second. It is rare for human rights defenders to have anything beyond the minimum requirements, such as maintaining an operational and secure internet connection, a case system, or a website. At the same time, global technology companies develop new technologies such as blockchain, artificial intelligence, and advanced data and surveillance techniques. These technologies have the potential to tremendously facilitate human rights defenders in their work, whether they are used to document facts about investigations or as preventive measures to avoid violations. It is also important to facilitate and empower rights-holders in setting up and using networks and platforms that can help notify and verify violations quickly.

Collaboration is an excellent problem-solving approach and human rights organizations are well aware of it. They engage in multiple partnerships with important actors. The concern is therefore not a lack of collaboration, but whether they adequately prioritize what is now the world’s leading sector — technology (the top 5 on the Forbes list of most valuable brands are all technology companies: Apple, Google, Microsoft, Amazon, and Facebook). It is not up to the technology sector to engage with the human rights sector (whether they want to or not), but it should be a top priority for the human rights sector to try to reduce its technology gap, in the interest of human rights.

There are several partnership opportunities, and many are easy to get started with and do not require monetary investments. One opportunity is to partner with tech universities, which have the expertise to develop new types of secure, rapid monitoring systems. Blockchain embraces many of the same principles that human rights work embraces, such as transparency, equality and accountability, and it makes rapid response times possible. So why not collaborate with universities? Another opportunity is collaborating with institutions that manage satellite images. Images provide very solid proof of changes in the landscape; examples include deforestation that threatens indigenous people, and the removal or burning of villages over a short period of time. A third opportunity is to get into dialogue with the technology giants that develop these new technologies and, rather than asking for monetary donations, ask for input on how the human rights sector can effectively leverage technology.

 

NSO accused of largest attack on civil society through its spyware

October 30, 2019

I blogged about the spyware firm NSO before [see e.g. https://humanrightsdefenders.blog/2019/09/17/has-nso-really-changed-its-attitude-with-regard-to-spyware/], but now WhatsApp has joined the critics with a lawsuit.

On May 13th, WhatsApp announced that it had discovered the vulnerability. In a statement, the company said that the spyware appeared to be the work of a commercial entity, but it did not identify the perpetrator by name. WhatsApp patched the vulnerability and, as part of its investigation, identified more than fourteen hundred phone numbers that the malware had targeted. In most cases, WhatsApp had no idea whom the numbers belonged to, because of the company’s privacy and data-retention rules. So WhatsApp gave the list of phone numbers to the Citizen Lab, a research laboratory at the University of Toronto’s Munk School of Global Affairs, where a team of cyber experts tried to determine whether any of the numbers belonged to civil-society members.

On Tuesday 29 October 2019, WhatsApp took the extraordinary step of announcing that it had traced the malware back to NSO Group, a spyware-maker based in Israel, and filed a lawsuit against the company—and also its parent, Q Cyber Technologies—in a Northern California court, accusing it of “unlawful access and use” of WhatsApp computers. According to the lawsuit, NSO Group developed the malware in order to access messages and other communications after they were decrypted on targeted devices, allowing intruders to bypass WhatsApp’s encryption.

NSO Group said in a statement in response to the lawsuit, “In the strongest possible terms, we dispute today’s allegations and will vigorously fight them. The sole purpose of NSO is to provide technology to licensed government intelligence and law enforcement agencies to help them fight terrorism and serious crime. Our technology is not designed or licensed for use against human rights activists and journalists.” In September, NSO Group announced the appointment of new, high-profile advisers, including Tom Ridge, the first U.S. Secretary of Homeland Security, in an effort to improve its global image.

In a statement to its users on Tuesday, WhatsApp said, “There must be strong legal oversight of cyber weapons like the one used in this attack to ensure they are not used to violate individual rights and freedoms people deserve wherever they are in the world. Human rights groups have documented a disturbing trend that such tools have been used to attack journalists and human rights defenders.”

John Scott-Railton, a senior researcher at the Citizen Lab, said, “It is the largest attack on civil society that we know of using this kind of vulnerability.”

https://www.newyorker.com/news/news-desk/whatsapp-sues-an-israeli-tech-firm-whose-spyware-targeted-human-rights-activists-and-journalists

https://uk.finance.yahoo.com/news/whatsapp-blames-sues-mobile-spyware-192135400.html

How social media companies can identify and respond to threats against human rights defenders

October 15, 2019

Image from Shutterstock.

Ginna Anderson writes in ABA Abroad:

..Unfortunately, social media platforms are now a primary tool for coordinated, state-aligned actors to harass, threaten and undermine advocates. Although public shaming, death threats, defamation and disinformation are not unique to the online sphere, the nature of the internet has given them unprecedented potency. Bad actors are able to rapidly deploy their poisoned content on a vast scale. Social media companies have only just begun to recognize, let alone respond to, the problem. Meanwhile, individuals targeted through such coordinated campaigns must painstakingly flag individual pieces of content, navigate opaque corporate structures and attempt to survive the fallout. To address this crisis, companies such as Facebook, Twitter and YouTube must dramatically increase their capacity and will to engage in transparent, context-driven content moderation.

For human rights defenders, the need is urgent… Since 2011, the ABA Center for Human Rights (CHR) has noted with concern the coordination of “traditional” judicial harassment of defenders by governments, such as frivolous criminal charges or arbitrary detention, with online campaigns of intimidation. State-aligned online disinformation campaigns against individual defenders often precede or coincide with official investigations and criminal charges.

……

While social media companies generally prohibit incitement of violence and hate speech on their platforms, CHR has had to engage in additional advocacy with social media companies requesting the removal of specific pieces of content or accounts that target defenders. This extra advocacy has been required even where the content clearly violates a social media company’s terms of service and despite initial flagging by a defender. The situation is even more difficult where the threatening content is only recognizable with sufficient local and political context. The various platforms all rely on artificial intelligence, to varying degrees, to identify speech that violates their respective community standards. Yet current iterations of artificial intelligence are often unable to adequately evaluate context and intent.
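The gap between lexical matching and contextual judgment can be made concrete with a deliberately naive sketch. This is hypothetical illustrative code, not any platform's actual moderation system; the blocklist and function name are invented. A filter like this catches explicit threat words but passes exactly the kind of coded language CHR documented in Guatemala:

```python
# A deliberately naive keyword filter: hypothetical illustration only,
# not any platform's actual moderation system.
EXPLICIT_TERMS = {"kill", "attack"}  # invented, minimal blocklist

def naive_flag(post: str) -> bool:
    """Flag a post only when it contains an explicit blocklisted term."""
    words = post.lower().split()
    return any(term in words for term in EXPLICIT_TERMS)

# An explicit threat trips the filter:
print(naive_flag("we will attack them"))                         # True
# Coded incitement ("guerrilla", "communist") sails through,
# because nothing on the blocklist appears:
print(naive_flag("that leader is a guerrilla and a communist"))  # False
```

Only a moderator (human or model) who knows the local history can recognize the second post as threatening, which is the point of the paragraph above.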

Online intimidation and smear campaigns against defenders often rely on existing societal fault lines to demean and discredit advocates. In Guatemala, CHR recently documented a coordinated social media campaign to defame, harass, intimidate and incite violence against human rights defenders. Several were linked with so-called “net centers,” where users were reportedly paid to amplify hateful content across platforms. Often, the campaigns relied on “coded” language that harks back to Guatemala’s civil war and the genocide of Mayan communities by calling indigenous leaders communists, terrorists and guerrillas.

These terms appear to have largely escaped social media company scrutiny, perhaps because none is a racist slur per se. And yet, the proliferation of these online attacks, as well as the status of those putting out the content, is contributing to a worsening climate of violence and impunity for violence against defenders by specifically alluding to terms used to justify violence against indigenous communities. In 2018 alone, NPR reports that 26 indigenous defenders were murdered in Guatemala. In such a climate, the fear and intimidation felt by those targeted in such campaigns is not hyperbolic but based on their understanding of how violence can be sparked in Guatemala.

In order to address such attacks, social media companies must adopt policies that allow them to designate defenders as temporarily protected groups in countries that are characterized by state-coordinated or state-condoned persecution of activists. This is in line with international law that prohibits states from targeting individuals for serious harm based on their political opinion. To increase their ability to recognize and respond to persecution and online violence against human rights defenders, companies must continue to invest in their context-driven content moderation capacity, including complementing algorithmic monitoring with human content moderators well-versed in local dialects and historical and political context.

Context-driven content moderation should also take into account factors that increase the risk that online behavior will contribute to offline violence by identifying high-risk countries. These factors include a history of intergroup conflict and an overall increase in the number of instances of intergroup violence in the past 12 months; a major national political election in the next 12 months; and significant polarization of political parties along religious, ethnic or racial lines. Countries where these and other risk factors are present call for proactive approaches to identify problematic accounts and coded threats against defenders and marginalized communities, such as those shown in Equality Labs’ “Facebook India” report.

Companies should identify, monitor and be prepared to deplatform key accounts that are consistently putting out denigrating language and targeting human rights defenders. This must go hand in hand with the greater efforts that companies are finally beginning to take to identify coordinated, state-aligned misinformation campaigns. Focusing on the networks of users who abuse the platform, instead of looking solely at how the online abuse affects defenders’ rights online, will also enable companies to more quickly evaluate whether the status of the speaker increases the likelihood that others will take up any implicit call to violence or will be unduly influenced by disinformation.

This abuser-focused approach will also help to decrease the burden on defenders to find and flag individual pieces of content and accounts as problematic. Many of the human rights defenders with whom CHR works are giving up on flagging, a phenomenon we refer to as flagging fatigue. Many have become fatalistic about the level of online harassment they face. This is particularly alarming as advocates targeted online may develop skins so thick that they are no longer able to assess when their actual risk of physical violence has increased.

Finally, it is vital that social media companies pursue, and civil society demand, transparency in content moderation policy and decision-making, in line with the Santa Clara Principles. Put forward in 2018 by a group of academic experts, organizations and advocates committed to freedom of expression online, the principles are meant to guide companies engaged in content moderation and ensure that the enforcement of their policies is “fair, unbiased, proportional and respectful of users’ rights.” In particular, the principles call upon companies to publicly report on the number of posts and accounts taken down or suspended on a regular basis, as well as to provide adequate notice and meaningful appeal to affected users.

CHR routinely supports human rights defenders facing frivolous criminal charges related to their human rights advocacy online or whose accounts and documentation have been taken down absent any clear justification. This contributes to a growing distrust of the companies among the human rights community as apparently arbitrary decisions about content moderation are leaving advocates both over- and under-protected online.

As the U.N. special rapporteur on freedom of expression explained in his 2018 report, content moderation processes must include the ability to appeal the removal, or refusal to remove, content or accounts. Lack of transparency heightens the risk that calls to address the persecution of human rights defenders online will be subverted into justifications for censorship and restrictions on speech that is protected under international human rights law.

A common response when discussing the feasibility of context-driven content moderation is to compare it to reviewing all the grains of sand on a beach. But human rights defenders are not asking for the impossible. We are merely pointing out that some of that sand is radioactive—it glows in the dark, it is lethal, and there is a moral and legal obligation upon those that profit from the beach to deal with it.

Ginna Anderson, senior counsel, joined ABA CHR in 2012. She is responsible for supporting the center’s work to advance the rights of human rights defenders and marginalized communities, including lawyers and journalists at risk. She is an expert in health and human rights, media freedom, freedom of expression and fair trial rights. As deputy director of the Justice Defenders Program since 2013, she has managed strategic litigation, fact-finding missions and advocacy campaigns on behalf of human rights defenders facing retaliation for their work in every region of the world.

http://www.abajournal.com/news/article/how-can-social-media-companies-identify-and-respond-to-threats-against-human-rights-defenders

Has NSO really changed its attitude with regard to spyware?

September 17, 2019

Cyber-intelligence firm NSO Group has introduced a new Human Rights Policy and a supporting governance framework in an apparent attempt to boost its reputation and comply with the United Nations’ Guiding Principles on Business and Human Rights. This follows recent criticism that its technology was being used to violate the rights of journalists and human rights defenders. A recent investigation found the company’s Pegasus spyware was used against a member of the non-profit Amnesty International. [see: https://humanrightsdefenders.blog/2019/02/19/novalpina-urged-to-come-clean-about-targeting-human-rights-defenders/]

NSO’s new human rights policy aims to identify, prevent and mitigate the risks of adverse human rights impacts. It also includes a thorough evaluation of the company’s sales process for the potential of adverse human rights impacts arising from the misuse of NSO products. In addition, it introduces contractual agreements requiring NSO customers to limit the use of the company’s products to the prevention and investigation of serious crimes. There will be specific attention to protecting individuals or groups who could be at risk of arbitrary digital surveillance and communication interception due to race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status, or their exercise or defence of human rights. Rules have been set out to protect whistle-blowers who wish to report concerns about misuse of NSO technology.

Amnesty International is supporting current legal actions being taken against the Israeli Ministry of Defence, demanding that it revoke NSO Group’s export licence.

Danna Ingleton, Deputy Program Director for Amnesty Tech, said: “While on the surface it appears a step forward, NSO has a track record of refusing to take responsibility. The firm has sold invasive digital surveillance to governments who have used these products to track, intimidate and silence activists, journalists and critics.”

CEO and co-founder Shalev Hulio counters: “NSO has always taken governance and its ethical responsibilities seriously as demonstrated by our existing best-in-class customer vetting and business decision process. With this new Human Rights Policy and governance framework, we are proud to further enhance our compliance system to such a degree that we will become the first company in the cyber industry to be aligned with the Guiding Principles.”

https://www.verdict.co.uk/nso-group-new-human-rights-policy/

How Twitter moved from Arab spring to Arab control

July 29, 2019

“Social media platforms were essential in the Arab Spring, but governments soon learned how to counter dissent online”, writes Al Jazeera.
Twitter played an essential role during the Egyptian Revolution and was used to get info to an international audience [File: Steve Crisp/Reuters]

In a series of articles, Al Jazeera examines how Twitter in the Middle East has changed since the Arab Spring. Government talking points are being magnified through thousands of accounts during politically fraught times and silencing people on Twitter is only part of a large-scale effort by governments to stop human rights activists and opponents of the state from being heard. In the next part of this series, Al Jazeera will look at how Twitter bots influenced online conversation during the GCC crisis on both sides of the issue.

https://www.aljazeera.com/news/2019/07/exists-demobilise-opposition-twitter-fails-arabs-190716080010123.html

Controversial spyware company promises to respect human rights…in the future

June 19, 2019

This photo from August 25, 2016, shows the logo of the Israeli NSO Group company on a building in Herzliya, Israel. (AP Photo/Daniella Cheslow)

Newspapers report that controversial Israeli spyware developer NSO Group will in the coming months move towards greater transparency and align itself fully with the UN Guiding Principles on Business and Human Rights, the company’s owners said over the weekend. [see also: https://humanrightsdefenders.blog/2019/02/19/novalpina-urged-to-come-clean-about-targeting-human-rights-defenders/]

Private equity firm Novalpina, which acquired a majority stake in NSO Group in February, said that within 90 days it would “establish at NSO a new benchmark for transparency and respect for human rights.” It said it sought “a significant enhancement of respect for human rights to be built into NSO’s governance policies and operating procedures and into the products sold under licence to intelligence and law enforcement agencies.”

The company has always stated that it provides its software to governments for the sole purpose of fighting terrorism and crime, but human rights defenders and NGOs have claimed the company’s technology has been used by repressive governments to spy on them. Most notably, the spyware was allegedly used in connection with the gruesome killing of Saudi journalist Jamal Khashoggi, who was dismembered in the Saudi consulate in Istanbul last year and whose body has never been found.

Last month London-based Amnesty International, together with other human rights activists, filed a petition to the District Court in Tel Aviv to compel Israel’s Defense Ministry to revoke the export license it granted to the company that Amnesty said has been used “in chilling attacks on human rights defenders around the world.”

On Friday the Guardian reported that Yana Peel, a well-known campaigner for human rights and a prominent figure in London’s art scene, is a co-owner of NSO, as she has a stake in Novalpina, co-founded by her husband Stephen Peel. Peel told the Guardian that she has no involvement in the operations or decisions of Novalpina, which is managed by her husband and his partners, and added that the Guardian’s view of NSO was “quite misinformed.”

And Citizen Lab is far from reassured: https://citizenlab.ca/2019/06/letter-to-novalpina-regarding-statement-on-un-guiding-principles/

https://www.timesofisrael.com/controversial-nso-group-to-adopt-policy-of-closer-respect-for-human-rights/

https://www.theguardian.com/world/2019/jun/18/whatsapp-spyware-israel-cyber-weapons-company-novalpina-capital-statement

Speech by Commissioner Dunja Mijatović at RightsCon 2019, Tunis, about digital security

June 17, 2019

Council of Europe Commissioner for human rights, Dunja Mijatović, gave a speech at the world’s leading summit on human rights in the digital age, RightsCon 2019, in Tunis, on 11 June 2019:

…A recent article in the New York Times from the city of Kashgar showed the extent to which the Chinese authorities are using facial recognition and snooping technologies to keep tight control of the Muslim community. If you think that this does not concern you because it is happening far away, you would be terribly wrong. The Chinese experiment bears great significance for all of us. It shows to what extent the cozy relations between technology companies and state security agencies can harm us. This has become particularly acute as part of states’ response to terrorist threats and attacks. States around the world have increased their surveillance arsenal, not always to the benefit of our safety. On the contrary, on several occasions they have used it to silence criticism, restrict free assembly, snoop into our private lives, or control individuals or minorities.

An illustration of this comes from human rights defenders. If in the past human rights defenders were ahead of states in using technological progress to expose human rights abuses, they are now facing a backlash. As we speak, states and non-state actors are intercepting their communications, intruding into their personal data, and tracing their digital footprints. States are using technologies to learn about human rights defenders’ plans or upcoming campaigns; to find or fabricate information that can help intimidate, incriminate or destroy their reputation; or to learn about their networks and sources.

This concerns us all. At stake here is the society we want to live in and bequeath to the next generations. Technology should maximise our freedoms and rights – and keep those in power accountable.

To get there we need to strengthen the connections among us and crowdsource human rights protection, promotion and engagement. An important step in that direction would be to provide more support, funding and digital literacy training to human rights defenders. It is also crucial that the private sector and state authorities uphold human rights standards in the designing and implementation of all technological tools.

Living in an increasingly digital world does not mean living artificial lives with artificial liberties. Our rights must be real, all the time.

We all must resist the current backlash and persist in demanding more human rights protection, more transparency and more accountability in the digital world.

https://www.coe.int/en/web/commissioner/-/2019-speech-by-dunja-mijatovic-council-of-europe-commissioner-for-human-rights-at-the-world-s-leading-summit-on-human-rights-in-the-digital-age-rights

Beyond WhatsApp and NSO – how human rights defenders are targeted by cyberattacks

May 14, 2019

Several reports have shown Israeli technology being used by Gulf states against their own citizens (AFP/File photo)

NSO Group has been under increased scrutiny after a series of reports about the ways in which its spyware programme has been used against prominent human rights activists. Last year, a report by Citizen Lab, a research group at the University of Toronto, showed that human rights defenders in Saudi Arabia, the United Arab Emirates and Bahrain were targeted with the software.

In October, US whistleblower Edward Snowden said Pegasus had been used by the Saudi authorities to surveil journalist Jamal Khashoggi before his death. “They are the worst of the worst,” Snowden said of the firm. Amnesty International said in August that a staffer’s phone was infected with the Pegasus software via a WhatsApp message.

——-

Friedhelm Weinberg‘s piece of 1 May is almost prescient and contains good, broader advice:

When activists open their inboxes, they find more than the standard spam messages telling them they’ve finally won the lottery. Instead, they receive highly sophisticated emails that look like they are real, purport to be from friends and invite them to meetings that are actually happening. The catch is: at one point the emails will attempt to trick them.

1. Phishing for accounts, not compliments

In 2017, the Citizen Lab at the University of Toronto and the Egyptian Initiative for Personal Rights documented what they called the “Nile Phish” campaign, a set of emails luring activists into giving access to their most sensitive accounts – email and file-sharing tools in the cloud. The Seoul-based Transitional Justice Working Group recently warned on its Facebook page about a very similar campaign. As attacks like these have mounted in recent years, civil society activists have come together to defend themselves, support each other and document what is happening. The Rarenet is a global group of individuals and organizations that provides emergency support for activists – but together it also works to educate civil society actors to dodge attacks before damage is done. The Internet Freedom Festival is a gathering dedicated to supporting people at risk online, bringing together more than 1,000 people from across the globe. The emails from campaigns like Nile Phish may be cunning and carefully crafted to target individual activists – but they are not cutting-edge technology. Protection is stunningly simple: do nothing. Simply don’t click the link and enter information – as hard as that is when you are promised something in return.

Often digital security is about being calm and controlled as much as it is about being savvy in the digital sphere. And that is precisely what makes it difficult for passionate and stressed activists!
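One technical complement to the “do nothing” rule is a check many digital-security trainings teach: compare the domain a link displays with the domain it actually points to. The sketch below is a hypothetical illustration (the function name and example URLs are invented, not drawn from the Nile Phish analyses):

```python
# Hypothetical phishing heuristic: flag a link whose visible text shows
# one domain while the underlying href points to another.
from urllib.parse import urlparse

def looks_like_phish(display_text: str, href: str) -> bool:
    """True when the displayed domain differs from the link's real target."""
    # Scheme-less display text ("drive.google.com") needs a scheme
    # prepended before urlparse will populate .hostname:
    shown = display_text if "//" in display_text else "https://" + display_text
    shown_host = urlparse(shown).hostname
    real_host = urlparse(href).hostname
    return shown_host is not None and real_host is not None and shown_host != real_host

# Honest link: displayed and actual domains match.
print(looks_like_phish("drive.google.com", "https://drive.google.com/open"))         # False
# Lure: the text promises Google Drive, the href goes elsewhere.
print(looks_like_phish("drive.google.com", "https://files.example-lure.net/login"))  # True
```

A mismatch alone does not prove an email is malicious, but it is one of the cheap tells that lets a stressed activist pause before clicking.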

2. The million-dollar virus

Unfortunately, calm is not always enough. Activists have also been targeted with sophisticated spyware that is incredibly expensive to procure and difficult to spot. Ahmed Mansoor, a human-rights defender from the United Arab Emirates, received messages with malware (commonly known as computer viruses) that cost one million dollars on the grey market, where unethical hackers and spyware firms meet. [See also: https://humanrightsdefenders.blog/2016/08/29/apple-tackles-iphone-one-tap-spyware-flaws-after-mea-laureate-discovers-hacking-attempt/]

Rights defender Ahmed Mansoor in Dubai in 2011, a day after he was pardoned following a conviction for insulting UAE leaders. He is now in prison once more. Image: Reuters/Nikhil Monteiro

3. Shutting down real news with fake readers

Both phishing and malware are attacks directed against the messengers, but there are also attacks against the message itself. This is typically achieved by directing hordes of fake readers to the real news – that is, by sending so many requests through bot visitors to websites that the servers break down under the load. Commonly referred to as “denial of service” attacks, these bot armies have also earned their own response from civil society. Specialised packages from Virtual Road or Deflect sort fake visitors from real ones to make sure the message stays up.
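The sorting of fake visitors from real ones that services like Deflect or Virtual Road perform can be hinted at with a minimal sketch. Real DDoS mitigation is far more elaborate; this hypothetical fixed-window rate limiter (all names and numbers invented) just shows the core idea that a flood of requests from one client trips a budget that an ordinary reader never approaches:

```python
# Minimal fixed-window rate limiter: a hypothetical sketch, not how
# Deflect or Virtual Road actually work.
from collections import defaultdict

WINDOW_SECONDS = 10.0
MAX_REQUESTS = 5  # invented per-window budget

class RateLimiter:
    def __init__(self):
        # client id -> timestamps of requests inside the current window
        self.hits = defaultdict(list)

    def allow(self, client: str, now: float) -> bool:
        # Keep only timestamps still inside the window.
        recent = [t for t in self.hits[client] if now - t < WINDOW_SECONDS]
        if len(recent) >= MAX_REQUESTS:
            self.hits[client] = recent
            return False  # over budget: treated as part of a flood
        recent.append(now)
        self.hits[client] = recent
        return True

limiter = RateLimiter()
# A bot firing 20 requests within one second gets only 5 through:
print(sum(limiter.allow("bot", i * 0.05) for i in range(20)))    # 5
# A reader requesting once every 30 seconds is never blocked:
print(all(limiter.allow("reader", i * 30.0) for i in range(4)))  # True
```

Production mitigation layers many more signals on top (request patterns, browser challenges, reputation), but the principle of separating flood traffic from human traffic is the same.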

 

How distributed denial of service (DDoS) attacks have grown. Image: Kinsta.com; data from EasyDNS

Recently, these companies also started investigating who is behind these attacks – a notoriously difficult task, because it is so easy to hide traces online. Interestingly, whenever Virtual Road were so confident in their findings that they publicly named attackers, the attacks stopped. Immediately. Online, as offline, one of the most effective ways to ensure that attacks end is to name the offenders, whether they are cocky kids or governments seeking to stifle dissent. But more important than shaming attackers is supporting civil society’s resilience and capacity to weather the storms. For this, digital leadership, trusted networks and creative collaborations between technologists and governments will pave the way to an internet where the vulnerable are protected and spaces for activism are thriving.

——–

Microsoft exercising human rights concerns to turn down facial-recognition sales

April 30, 2019

FILE PHOTO: The Microsoft sign is shown on top of the Microsoft Theatre in Los Angeles, California, U.S. October 19, 2018. REUTERS/Mike Blake

Joseph Menn reported on 16 April 2019 on kfgo.com about Microsoft rejecting a California law enforcement agency’s request to install facial recognition technology in officers’ cars and body cameras due to human rights concerns. Microsoft concluded it would lead to innocent women and minorities being disproportionately held for questioning, because the artificial intelligence has been trained on mostly white and male pictures. AI has more cases of mistaken identity with women and minorities, multiple research projects have found.
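The disparity those research projects measure is usually quantified by computing error rates separately per demographic group. The sketch below is hypothetical, with fabricated numbers chosen only for illustration; it is not Microsoft's or anyone's actual evaluation:

```python
# Hypothetical disparity audit: compute a face matcher's false-positive
# rate per demographic group. All numbers are fabricated for illustration.
def false_positive_rate(results):
    """results: list of (predicted_match, truly_same_person) pairs."""
    false_pos = sum(1 for pred, truth in results if pred and not truth)
    negatives = sum(1 for _, truth in results if not truth)
    return false_pos / negatives if negatives else 0.0

# Fabricated evaluation outcomes, 100 non-matching pairs per group:
group_a = [(False, False)] * 98 + [(True, False)] * 2   # 2 false alarms
group_b = [(False, False)] * 90 + [(True, False)] * 10  # 10 false alarms

fpr_a = false_positive_rate(group_a)  # 0.02
fpr_b = false_positive_rate(group_b)  # 0.10
# On mismatches alone, group B would be wrongly flagged five times as often:
print(round(fpr_b / fpr_a))  # 5
```

A gap like this, multiplied over every traffic stop, is exactly the "uneven impact" Smith describes below.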

“Anytime they pulled anyone over, they wanted to run a face scan” against a database of suspects, company President Brad Smith said, without naming the agency. After thinking through the uneven impact, “we said this technology is not your answer.” Speaking at a Stanford University conference on “human-centered artificial intelligence,” Smith said Microsoft had also declined a deal to install facial recognition on cameras blanketing the capital city of an unnamed country that the nonprofit Freedom House had deemed not free. Smith said it would have suppressed freedom of assembly there.

On the other hand, Microsoft did agree to provide the technology to an American prison, after the company concluded that the environment would be limited and that it would improve safety inside the unnamed institution. Smith explained the decisions as part of a commitment to human rights that he said was increasingly critical as rapid technological advances empower governments to conduct blanket surveillance, deploy autonomous weapons and take other steps that might prove impossible to reverse….

Smith has called for greater regulation of facial recognition and other uses of artificial intelligence, and he warned Tuesday that without that, companies amassing the most data might win the race to develop the best AI in a “race to the bottom.”

He shared the stage with the United Nations High Commissioner for Human Rights, Michelle Bachelet, who urged tech companies to refrain from building new tools without weighing their impact. “Please embody the human rights approach when you are developing technology,” said Bachelet, a former president of Chile.

[see also my older: https://humanrightsdefenders.blog/2015/11/19/contrasting-views-of-human-rights-in-business-world-bank-and-it-companies/]

https://kfgo.com/news/articles/2019/apr/16/microsoft-turned-down-facial-recognition-sales-on-human-rights-concerns/

Big Brother Awards try to identify risks for human rights defenders

February 24, 2019