Posts Tagged ‘Artificial intelligence’

Social assistance fraud detection system violates human rights, says Dutch court

February 12, 2020

An algorithmic risk-scoring system implemented by the Dutch state to predict the likelihood that social security claimants will commit benefits or tax fraud violates human rights law, a court in the Netherlands has ruled. The legislation behind the Dutch Risk Indication System (SyRI) uses an undisclosed algorithmic risk model to profile citizens, and the system has been directed exclusively at neighborhoods with mostly low-income and minority residents. Human rights defenders have called it a “welfare surveillance state.”

Several civil society organizations in the Netherlands and two citizens brought legal action against SyRI, seeking to block its use. The court today ordered an immediate halt to the use of the system. The ruling is being hailed as historic by human rights defenders. Notably, the court based its reasoning on European human rights law, specifically the right to privacy established by Article 8 of the European Convention on Human Rights (ECHR), rather than on the provision of the EU data protection framework (GDPR) that relates to automated processing.

Article 22 of the GDPR gives individuals the right not to be subject to solely automated decision-making where it produces legal or similarly significant effects concerning them. But there is some uncertainty about whether this applies when a human is somewhere in the loop, for example reviewing an objection to a decision. In this case the court avoided such questions by finding that SyRI directly interferes with the rights established in the ECHR. Specifically, the court determined that the SyRI legislation fails the balancing test under Article 8 of the ECHR, which requires that any interference with private life be weighed against the social interest pursued, and that a fair and reasonable balance be struck.

In the court’s view, the automated risk assessment system in its current form did not pass this test. Legal experts suggest that the decision sets some clear limits on how the public sector in the United Kingdom and elsewhere in Europe can make use of AI tools, and note that the court was particularly critical of the lack of transparency about how the algorithmic rating system worked…

The UN special rapporteur on extreme poverty and human rights, Philip Alston, who intervened in the case by providing the court with a human rights analysis, welcomed the ruling, describing it as “a clear victory for all those who are justifiably concerned about the serious threats that digital welfare systems pose to human rights.” “This decision sets a strong legal precedent for other courts to follow. This is one of the first times that a court has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds,” he added in a press release.

In 2018, Alston warned that the UK government’s rush to apply digital technologies and data tools to redesign the delivery of public services at scale risked having a huge impact on the human rights of the most vulnerable. The Dutch court’s decision could therefore have some near-term implications for UK policy in this area.

The ruling does not close the door on states’ use of automated profiling systems, but it does make clear that, in Europe, human rights law must be central to the design and implementation of risk-scoring tools.

It remains to be seen whether the Commission will push for pan-European limits on specific uses of AI in the public sector, such as social security assessments. A recently leaked draft of a white paper on AI regulation suggests that it is leaning towards risk assessments and a patchwork of risk-based rules.

via Blackbox welfare fraud detection system breaches human rights, Dutch court rules – Newsdio

Excellent news: HURIDOCS to receive US$1 million from Google for AI work

May 8, 2019

Google announced on 7 May 2019 that the Geneva-based NGO HURIDOCS is one of 20 organizations that will share US$25 million in grants from the Google AI Impact Challenge, an open call to nonprofits, social enterprises, and research institutions to submit ideas for using artificial intelligence (AI) to help address societal challenges. Over 2,600 organizations from around the world applied.

HURIDOCS will receive a grant of US$1 million to develop and use machine learning methods to extract, explore and connect relevant information in laws, jurisprudence, victim testimonies, and resolutions. With these methods, the NGO will work with partners to make such documents more easily and freely accessible. This will benefit anyone interested in drawing on human rights precedents and laws, from lawyers representing victims of human rights violations to students researching non-discrimination.

The machine learning work to liberate information from documents is grounded in more than a decade of work that HURIDOCS has done to provide free access to information. Through pioneering partnerships with the Institute for Human Rights and Development in Africa (IHRDA) and the Center for Justice and International Law (CEJIL), HURIDOCS has co-created some of the most widely used public human rights databases. A key challenge in building these databases has been the time-consuming and error-prone manual entry of information – a challenge the machine learning techniques will be used to overcome.
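To make the idea concrete, here is a minimal sketch of one way machine learning can reduce manual data entry: a classifier that suggests metadata labels for judgment excerpts, which a human reviewer then confirms. This is purely illustrative and not HURIDOCS’s actual code; the example documents, labels, and model choice are all assumptions.

```python
# Illustrative sketch only (not HURIDOCS's system): suggest metadata labels
# for human rights judgment excerpts so reviewers confirm rather than type.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: excerpts paired with the convention article
# they mainly concern.
documents = [
    "The applicant complained about interference with his private life.",
    "The court examined whether the detention was lawful.",
    "The processing of personal data lacked a legal basis.",
    "The applicant was held without judicial review for months.",
]
labels = ["Article 8", "Article 5", "Article 8", "Article 5"]

# TF-IDF features plus a linear classifier: a simple, common baseline
# for assigning categories to legal text.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(documents, labels)

# Suggest a label for a new excerpt; a human reviewer would still verify
# the suggestion before it enters the database.
print(model.predict(["Surveillance of the applicant's home violated his privacy."]))
```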

“We have been experimenting with machine learning techniques for more than two years”, said Natalie Widmann, Artificial Intelligence Specialist at HURIDOCS. “We have changed our approach countless times, but we see a clear path to how they can be leveraged in groundbreaking ways to democratise access to information.” HURIDOCS will use the grant from Google to work with partners to co-create the solutions, carefully weighing ethical concerns of automation and focusing on social impact. All the work will be done in the open, including all code being released publicly.

“We are truly excited by the opportunity to use these technologies to address a problem that has been holding the human rights movement back”, said Friedhelm Weinberg, Executive Director of HURIDOCS. “We are thankful to Google for the support and look forward to working with their experts and what will be a fantastic cohort of co-grantees.”

“We received thousands of applications to the Google AI Impact Challenge and are excited that HURIDOCS was selected to receive funding and expertise from Google. AI is at a nascent stage when it comes to the value it can have for the social impact sector, and we look forward to seeing the outcomes of this work and considering where there is potential for us to do even more.” – Jacquelline Fuller, President of Google.org

Next week, the HURIDOCS team will travel to San Francisco to work with the other grantees, Google AI experts, project managers and startup specialists from Google’s Launchpad Accelerator in a program that will run for six months, from May to November 2019. Each organization will be paired with a Google expert who will meet with them regularly for coaching sessions, and will also have access to other Google resources and expert mentorship.

Download the press release in English or Spanish. Learn more about the other Google AI Impact grantees at Google’s blog.

For more on HURIDOCS’s history: https://www.huridocs.org/tag/history-of-huridocs/ and for some of my other posts: https://humanrightsdefenders.blog/tag/huridocs/

HURIDOCS NEWS

Microsoft cites human rights concerns in turning down facial-recognition sales

April 30, 2019

FILE PHOTO: The Microsoft sign is shown on top of the Microsoft Theatre in Los Angeles, California, U.S. October 19, 2018. REUTERS/Mike Blake

Joseph Menn reported on 16 April 2019 on kfgo.com that Microsoft had rejected a California law enforcement agency’s request to install facial recognition technology in officers’ cars and body cameras, citing human rights concerns. Microsoft concluded that doing so would lead to innocent women and minorities being disproportionately held for questioning, because the artificial intelligence had been trained mostly on pictures of white men. Multiple research projects have found that facial recognition misidentifies women and minorities at higher rates.
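As a rough illustration of what those research projects measure, the sketch below compares a face matcher’s false match rate across two demographic groups. It is not Microsoft’s evaluation or any real benchmark; the data and group names are invented for demonstration.

```python
# Illustrative sketch: compare a face matcher's false match rate by group.
# All data here is made up for demonstration purposes.
from collections import defaultdict

# Hypothetical match attempts: (group, ground_truth_same_person, predicted_match).
results = [
    ("group_a", False, False), ("group_a", False, False), ("group_a", False, True),
    ("group_b", False, True),  ("group_b", False, True),  ("group_b", False, False),
]

false_matches = defaultdict(int)   # matcher wrongly declared "same person"
non_mated = defaultdict(int)       # trials where the pair is two different people

for group, same_person, predicted in results:
    if not same_person:            # only non-mated pairs can yield false matches
        non_mated[group] += 1
        if predicted:
            false_matches[group] += 1

# A matcher trained mostly on one group tends to show a higher false match
# rate on underrepresented groups -- the disparity the research describes.
for group in sorted(non_mated):
    print(f"{group}: false match rate = {false_matches[group] / non_mated[group]:.2f}")
```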

“Anytime they pulled anyone over, they wanted to run a face scan” against a database of suspects, company President Brad Smith said, without naming the agency. After thinking through the uneven impact, “we said this technology is not your answer.” Speaking at a Stanford University conference on “human-centered artificial intelligence,” Smith said Microsoft had also declined a deal to install facial recognition on cameras blanketing the capital city of an unnamed country that the nonprofit Freedom House had deemed not free. Smith said it would have suppressed freedom of assembly there.

On the other hand, Microsoft did agree to provide the technology to an American prison, after the company concluded that the environment would be limited and that it would improve safety inside the unnamed institution. Smith explained the decisions as part of a commitment to human rights that he said was increasingly critical as rapid technological advances empower governments to conduct blanket surveillance, deploy autonomous weapons and take other steps that might prove impossible to reverse…

Smith has called for greater regulation of facial recognition and other uses of artificial intelligence, and he warned Tuesday that without it, companies amassing the most data might win the race to develop the best AI in a “race to the bottom.”

He shared the stage with the United Nations High Commissioner for Human Rights, Michelle Bachelet, who urged tech companies to refrain from building new tools without weighing their impact. “Please embody the human rights approach when you are developing technology,” said Bachelet, a former president of Chile.

[see also my older: https://humanrightsdefenders.blog/2015/11/19/contrasting-views-of-human-rights-in-business-world-bank-and-it-companies/]

https://kfgo.com/news/articles/2019/apr/16/microsoft-turned-down-facial-recognition-sales-on-human-rights-concerns/

Development of Amnesty’s Panic Button App

September 11, 2013

Having referred last week to three different (and competing?) tech initiatives to increase the security of HRDs, I would be remiss not to note the post of 11 September 2013 by Tanya O’Carroll on the Amnesty International blog concerning the development of the Panic Button. Over the next couple of months, she will be keeping you posted about the Panic Button. If you want to join the community of people working on Panic Button, please leave a comment on the site mentioned below or email panicbutton@amnesty.org.

via Inside the development of Amnesty’s new Panic Button App | Amnesty’s global human rights blog.