Posts Tagged ‘Artificial intelligence’

Two young human rights defenders, Raphael Mimoun and Nikole Yanez, on tech for human rights

May 16, 2024

Each year, through its Rise 25 Awards, Mozilla highlights the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally. On 13 May 2024 it was the turn of Raphael Mimoun, a builder dedicated to making tools that empower journalists and human rights defenders. Aron Yohannes talked with Raphael about the launch of his app, Tella, combating misinformation online, the future of social media platforms, and more.

Raphael Mimoun: So I never worked in tech per se and only developed a passion for technology as I was working in human rights. It was a time when the power of technology to support and spread movements around the world was just becoming fully understood. You had the Arab Spring, you had Occupy Wall Street, you had all of these movements for social justice, for democracy, for human rights, that spread very much through technology, right? Technology played a very, very important role. But just after that came a kind of hangover, where we all realized, “OK, it’s not just all good and fine.” You also have the flip side, which is governments spying on citizens, identifying citizens through social media, through hacking, and so on and so forth — harassing them, repressing them online, with that translating into offline violence and repression. And so I think that was the moment where I thought, “OK, there is something that needs to be done around technology,” specifically for those people who are on the front lines, because if we just treat it as a tool — one of those neutral tools — we end up very vulnerable to violence, whether from the state, from online mobs, from armed groups, all sorts of things.

There’s so much misinformation out there now that it’s much harder to tell the difference between real news and fake news. Twitter used to be such a reliable source of information, but that’s changed. Do you think any of these other platforms can help make up for so much of the misinformation that is out there?

I think we all feel the weight of losing Twitter. Twitter was always a large corporation, partially owned by a billionaire. It was never really a community tool, but there was still an ethos, right? Like a philosophy: the values of the platform were still very much community-oriented, right? It was that place for activists and human rights defenders and journalists and communities in general to voice their opinions. So I think that loss was very hard on all of us.

I see a lot of misinformation on Instagram as well. There is very little moderation there. It’s also all visual, so if you want traction, you’re going to try to post something spectacular and eye-catching, and I think that leads to even more misinformation.

I am pretty optimistic about some of the alternatives that have popped up since Twitter’s downfall. Mastodon actually blew up after Twitter, but it’s much older — I think it’s 10 years old by now. And there’s Bluesky. So I think those two are building up, and they offer spaces that are much more decentralized, with much more autonomy and agency for users. You are more likely to be able to customize your feeds. You are more likely to have tools for your own safety online, right? All of those different things that I feel you could never get on Threads, on Instagram, on Twitter, or anything like that. I’m hoping it’s actually going to be able to recreate the community that Twitter very much was. It’s never going to be exactly the same thing, but I’m hoping we will get there. And I think the fact that it is decentralized, open source and built around a philosophy of agency and autonomy is going to lead us to a place where these social networks can’t be taken over by a power-hungry billionaire.

What do you think is the biggest challenge that we face in the world this year on and offline, and then how do you think we can combat it?

I don’t know if it’s the biggest challenge, but one of the really big challenges we’re seeing is how the digital is meeting real life: how people who are active online, on the phone, or on the computer are getting repressed for that work in real life. So we developed an app called Tella, which encrypts and hides files on your phone. You take a photo or a video of a demonstration or police violence, or whatever it is, and if the police try to grab your phone to delete it, they won’t be able to find it, or at least it will be much more difficult to find, or it will already have been uploaded. It’s becoming completely normalized to grab someone’s phone or check someone’s computer at the airport, at the border, or in the street and go through it without any form of accountability. People have no idea what the regulations are, what the rules are, what’s allowed, what’s not allowed. And when those powers are abused, is there any recourse? In most places in the world, at least where we are working, there is definitely no recourse. So there is that connection between thinking you’re just taking a photo for social media and repercussions that are very real: someone may take your phone, and maybe they delete the photo, maybe they detain you, maybe they beat you up. I think this is one of the big challenges we’re seeing at the moment, and it isn’t traditionally thought of as an internet issue or an online digital rights issue, because it’s someone taking a physical device and looking through it. It often gets overlooked, and so we don’t have much advocacy around it.
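
For readers curious about the mechanics: the core idea of encrypting files at rest is simple to sketch. The Python fragment below is a minimal illustration of the general technique, not Tella’s actual implementation; the passphrase-derived key is a simplifying assumption for the example.

```python
# Minimal sketch of encrypting a photo at rest, the general technique
# behind an app like Tella. This is NOT Tella's implementation; the
# passphrase-derived key is a simplifying assumption for the example.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    """Derive a Fernet-compatible key from a passphrase with scrypt."""
    kdf = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

def encrypt_file(path: str, passphrase: bytes) -> str:
    """Encrypt `path`, remove the plaintext, return the new filename."""
    salt = os.urandom(16)
    token = Fernet(derive_key(passphrase, salt)).encrypt(
        open(path, "rb").read()
    )
    out = path + ".enc"
    with open(out, "wb") as f:
        f.write(salt + token)  # keep the salt next to the ciphertext
    os.remove(path)            # the readable original is gone
    return out
```

Decryption simply reverses the steps: split off the 16-byte salt, re-derive the key, and call Fernet(...).decrypt() on the rest.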

What do you think is one action everybody can take to make the world and our lives online a little bit better?

I think social media has a lot of negative consequences for everyone’s mental health and many other things, but for people who are active and who want to be active, consider social networks that are open source, privacy-friendly and decentralized. Bluesky and the Fediverse, including Mastodon, are examples. I think it’s our responsibility to build up a community there, so we can move away from social media platforms owned by billionaires or massive corporations, who only want to extract value from us, who spy on us and who censor us. If everyone committed to being active on those platforms — one way of doing that is just having an account, and whatever you post on one, you also post on the other — I feel that’s one thing that can make a big difference in the long run.
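
That “post on one, post on the other” habit is easy to automate. A hedged sketch in Python, using Mastodon’s documented REST endpoint and the `atproto` SDK for Bluesky; the instance URL, access token and handle below are placeholders:

```python
# Hedged sketch of cross-posting one status to Mastodon and Bluesky.
# The Mastodon endpoint is the documented public REST API; Bluesky is
# reached via the `atproto` Python SDK. Instance URL, access token and
# handle below are placeholders.
import requests
from atproto import Client

def crosspost(text: str) -> None:
    # Mastodon: every instance exposes POST /api/v1/statuses.
    resp = requests.post(
        "https://mastodon.social/api/v1/statuses",  # your instance here
        headers={"Authorization": "Bearer YOUR_ACCESS_TOKEN"},
        data={"status": text},
        timeout=10,
    )
    resp.raise_for_status()

    # Bluesky: log in with an app password, then create a post record.
    bsky = Client()
    bsky.login("you.bsky.social", "your-app-password")
    bsky.send_post(text=text)

crosspost("Same message, two open networks.")
```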

We started Rise25 to celebrate Mozilla’s 25th anniversary. What do you hope that people are celebrating in the next 25 years?

I was talking a little earlier about how we are building a culture that is more privacy-centric: people are becoming aware, and wary, of everything happening to their data, their identity, and so on. And I do think we are at a turning point in terms of the technology available to us, the practices, and what we need as users to maintain our privacy and our security. Honestly, I think in 10 years rather than 25, if things go well (which is hard to know in this field) and if we keep building what we are already building, I can see us having an internet that is a lot more privacy-centric, where communications are private by default. Where end-to-end encryption is ubiquitous in our communication and our email. Where social media isn’t extractive and people have actual ownership and agency in the social networks they use. Where data mining is no longer a thing. Overall, I can see how the infrastructure is now getting built, and in 10, 15 or 25 years we will be in a place where we can use the internet without constantly watching over our shoulder to see if someone is spying on us or who has access.

Lastly, what gives you hope about the future of our world?

That people are not getting complacent, and that it is always people who stand up and fight back. We saw it at Google, with people standing up as part of the No Tech for Apartheid coalition and losing their jobs. We’re seeing it on university campuses around the country. We’re seeing it on the streets. People fight back. That’s where any change has ever come from: the bottom up. I think now, more than ever, people are willing to put something on the line to make sure they defend their rights. So that really gives me hope.

—————

The second story comes from Amnesty International, 14 May 2024 [https://www.amnesty.org/en/latest/campaigns/2024/05/i-come-from-the-world-of-technology-where-there-are-very-few-women/]

Nikole Yanez is a computer scientist by training, and a human rights defender from Honduras. She is passionate about feminism, the impact of the internet and protecting activists. She was first drawn to human rights through her work as a reporter with a local community radio station. After surviving the coup d’état in Honduras in 2009, Nikole broadened her approach to focus her activism on technology. When she applied for the Digital Forensics Fellowship with the Amnesty Tech Security Lab in 2022, she was looking to learn more about cybersecurity and apply what she learnt with the organizations and collectives she works with regularly.  

She highlighted her commitment to fostering a network of tech-savvy communities across Latin America in an interview with Elina Castillo, Amnesty Tech’s Advocacy and Policy Advisor:

I grew up in Honduras, where I lived through the 2009 coup d’état. It was a difficult time when rights were non-existent and people were constantly afraid. I thought it was something you only read about in history books, but it was happening in front of my eyes. At first I felt I was just trying to survive, but as time went by it made me stronger and made me want to fight for justice. Despite the difficulties, people in my community remained hopeful, and we created a community radio station, which broadcast stories about everyday people and their lives with the aim of informing people about their human rights. I was a reporter, developing stories about individual people and their fight for their rights. From there, I found a passion for working with technology, and it inspired me to train as a computer scientist.

I am always looking for ways to connect technology with activism, and specifically to support women and Indigenous people in their struggles. As much as technology presents risks for human rights defenders, it also offers opportunities for us to better protect ourselves and strengthen our movements. Technology can bring more visibility to our movements, and it can empower our work by allowing us to connect with other people and learn new strategies.

Is there one moment where you realized how to connect what you’ve been doing with feminism with technology?

In my work, my perspective as a feminist helps me centre the experiences and needs of marginalised people for trainings and outreach. It is important for me to publicly identify as an Afrofeminist in a society where there is impunity for gendered and racist violence that occurs every day. In Honduras we need to put our energy into supporting these communities whose rights are most violated, and whose stories are invisible.

For example, in 2006, I was working with a union to install the Ubuntu operating system (an open-source operating system) on their computers. We realized that the unionists didn’t know how to use a computer, so we created a space for digital literacy, learning how to use a computer at the same time. It became not just a teaching exercise but an exercise for me in figuring out how to connect these tools to what people are interested in. Something clicked for me in that moment, and the experience helped solidify my approach to working on technology and human rights.

There are not many women working in technology and human rights. I don’t want to be one of the only women, so my goal is to see more women colleagues working on technical issues. I want to make it possible for women to work in this field. I also want to motivate more women to create change at the intersection of technology and human rights. Using a feminist perspective and approach, we ask big questions about how we are doing the work, what our approach needs to be, and who we need to work with.

For me, building a feminist internet means building an internet for everyone. This means creating a space where we do not reproduce sexist violence, where we find a community that responds to the people, to the groups, and to the organizations that fight for human rights. This includes involving women and marginalised people in building the infrastructure, in the configuration of servers, and in the development of protocols for how we use all these tools.

In Honduras, there aren’t many people trained in digital forensics analysis, yet there are organizations that are always seeking me out to help check their phones. The fellowship helped me learn about forensic analysis on phones and computers and tied the learning to what I’m actually doing in my area with different organizations and women’s rights defenders. The fellowship was practical and rooted in the experience of civil society organizations.

Nikole Yanez running a technology and human rights session in Honduras

How do you explain the importance of digital forensics? Well, first, it’s incredibly relevant for women’s rights defenders. Everyone wants to know if their phone has been hacked. That’s the first thing they ask: “Can you actually know whether your phone has been hacked?” and “How do I know? Can you do it for me? How?” Those are the things that come up in my trainings and conversations.
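
A concrete, hedged example of how that question gets answered in practice: Amnesty Tech’s Security Lab maintains the open-source Mobile Verification Toolkit (MVT), which scans a device backup against known indicators of compromise. The invocation below (wrapped in Python) is illustrative; the paths are placeholders and flags may differ between MVT versions.

```python
# Illustrative invocation of Amnesty Tech's open-source Mobile
# Verification Toolkit (MVT). Paths are placeholders; consult MVT's
# documentation for the exact flags in your installed version.
import subprocess

subprocess.run(
    [
        "mvt-ios", "check-backup",
        "--output", "results/",       # findings are written here as JSON
        "path/to/itunes-backup/",     # a local backup of the device
    ],
    check=True,
)
# Any matches against known indicators of compromise end up in the
# output folder for an analyst to review.
```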

I like to help people to think about protection as a process, something ongoing, because we use technology all day long. There are organizations and people that take years to understand that. So, it’s not something that can be achieved in a single conversation. Sometimes a lot of things need to happen, including bad things, before people really take this topic seriously…

I try to use very basic tools when I’m doing digital security support, to show that you can do this on whatever device you’re on and that this is a prevention tool. It’s not just applying technical knowledge; it’s also a process of explaining, training, and showing that this work is not just for hackers or people who know a lot about computers.

One of the challenges is to spread awareness about cybersecurity among Indigenous and grassroots organizations, which aren’t hyper-connected and don’t think that digital forensics work is relevant to them. Sometimes what we do is completely disconnected from their lives, and they ask us: “But what are you doing?” So, our job is to understand their questions and where they are coming from and ground our knowledge-sharing in what people are actually doing.

To someone reading this piece and saying, oh, this kind of resonates with me, where do I start, what would your recommendation be?

If you are a human rights defender, I would recommend that you share your knowledge with your collective. You can teach them the importance of knowing about preventive security practices and applying them, as well as encourage training to prevent digital attacks, because, in the end, forensic analysis is a reaction to something that has already happened.

We can take a lot of preventive measures to ensure the smallest possible impact. That’s the best way to start. And it’s crucial to stay informed, to keep reading, to stay up to date with the news and build community.

If there are girls or gender non-conforming people reading this who are interested in technical issues, it doesn’t matter if you don’t have a degree or a formal education, as long as you like it. Most hackers I’ve met became hackers because they dove into a subject, they liked it and they were passionate about it.

See also: https://www.amnesty.org/en/what-we-do/technology/online-violence/

blog.mozilla.org/en/internet-culture/raphael-mimoun-mozilla-rise-25-human-rights-justice-journalists/

In the deepfake era, we need to hear the Human Rights Defenders

December 19, 2023

In a blog post for the Council on Foreign Relations of 18 December 2023, Raquel Vazquez Llorente argues that “artificial intelligence is increasingly used to alter and generate content online. As development of AI continues, societies and policymakers need to ensure that it incorporates fundamental human rights.” Raquel is the Head of Law and Policy, Technology Threats and Opportunities at WITNESS.

The urgency of integrating human rights into the DNA of emerging technologies has never been more pressing. Through my role at WITNESS, I’ve observed first-hand the profound impact of generative AI across societies, and most importantly, on those defending democracy at the frontlines.

The recent elections in Argentina were marked by the widespread use of AI in campaigning material. Generative AI has also been used to target candidates with embarrassing content (increasingly of a sexual nature), to generate political ads, and to support candidates’ campaigns and outreach activities in India, the United States, Poland, Zambia, and Bangladesh (to name a few). The overall result of the lack of strong frameworks for the use of synthetic media in political settings has been a climate of mistrust regarding what we see or hear.

Not all digital alteration is harmful, though. Part of my work involves identifying how emerging technologies can foster positive change. For instance, with appropriate disclosure, synthetic media could be used to enhance voter education and engagement. Generative AI could help create informative content about candidates and their platforms, or of wider election processes, in different languages and formats, improving inclusivity or reducing barriers for underdog or outsider candidates. For voters with disabilities, synthetic media could provide accessible formats of election materials, such as sign language avatars or audio descriptions of written content. Satirical deepfakes could engage people who might otherwise be disinterested in politics, bringing attention to issues that might not be covered in mainstream media. We need to celebrate and protect these uses.

As two billion people around the world go to voting stations next year in fifty countries, there is a crucial question: how can we build resilience into our democracy in an era of audiovisual manipulation? When AI can blur the lines between reality and fiction with increasing credibility and ease, discerning truth from falsehood becomes not just a technological battle, but a fight to uphold democracy.

From conversations with journalists, activists, technologists and other communities impacted by generative AI and deepfakes, I have learnt that the effects of synthetic media on democracy are a mix of new, old, and borrowed challenges.

Generative AI introduces a daunting new reality: inconvenient truths can be denied as deep faked, or at least facilitate claims of plausible deniability to evade accountability. The burden of proof, or perhaps more accurately, the “burden of truth” has shifted onto those circulating authentic content and holding the powerful to account. This is not just a crisis of identifying what is fake. It is also a crisis of protecting what is true. When anything and everything can be dismissed as AI-generated or manipulated, how do we elevate the real stories of those defending our democracy at the frontlines?

But AI’s impact doesn’t stop at new challenges; it exacerbates old inequalities. Those who are already marginalized and disenfranchised—due to their gender, ethnicity, race or belonging to a particular group—face amplified risks. AI is like a magnifying glass for exclusion, and its harms are cumulative. AI deepens existing vulnerabilities, posing a serious threat to the principles of inclusivity and fairness that lie at the heart of democratic values. Similarly, sexual deepfakes can have an additional chilling effect, discouraging women, LGBTQ+ people and individuals from minoritized communities from participating in public life, thus eroding the diversity and representativeness that are essential for a healthy democracy.

Lastly, much as with social media, where we failed to incorporate the voices of the global majority, we have borrowed previous mistakes. The shortcomings in moderating content, combating misinformation, and protecting user privacy have had profound implications on democracy and social discourse. Similarly, in the context of AI, we are yet to see meaningful policies and regulation that not only consult globally those that are being impacted by AI but, more importantly, center the solutions that affected communities beyond the United States and Europe prioritize. This highlights a crucial gap: the urgent need for a global perspective in AI governance, one that learns from the failures of social media in addressing cultural and political nuances across different societies.

As we navigate AI’s impact on democracy and human rights, our approach to these challenges should be multifaceted. We must draw on a blend of strategies—ones that address the immediate ‘new’ realities of AI, respond to the ‘old’ but persistent challenges of inequality, and incorporate ‘borrowed’ wisdom from our past experiences.

First, we must ensure that new AI regulations and companies’ policies are steeped in human rights law and principles, such as those enshrined in the Universal Declaration of Human Rights. In the coming years, one of the most important areas in socio-technical expertise will be the ability to translate human rights protections into AI policies and legislation.

While anchoring new policies in human rights is crucial, we should not lose sight of the historical context of these technological advancements. We must look back as we move forward. As with technological advancements of the past, we should remind ourselves that progress is not how far you go, but how many people you bring along. We should really ask, is it technological progress if it is not inclusive, if it reproduces a disadvantage? Technological advancement that leaves people behind is not true progress; it is an illusion of progress that perpetuates inequality and systems of oppression. This past weekend marked twenty-five years since the adoption of the UN Declaration on Human Rights Defenders, which recognizes the key role of human rights defenders in realizing the Universal Declaration of Human Rights and other legally binding treaties. In the current wave of excitement around generative AI, the voices of those protecting human rights at the frontlines have rarely been more vital.

Our journey towards a future shaped by AI is also about learning from the routes we have already travelled, especially those from the social media era. Synthetic media has to be understood in the context of the broader information ecosystem. We are monetizing the spread of falsehoods while keeping local content moderators and third-party fact-checkers on precarious salaries, and putting the blame on platform users for not being educated enough to spot the fakery. The only way to align democratic values with technology goals is by both placing responsibility and establishing accountability across the whole information and AI ecosystem, from foundation model researchers, to those commercializing AI tools, to those creating and distributing content.

In weaving together these new, old, and borrowed strands of thought, we create a powerful blueprint for steering the course of AI. This is not just about countering a wave of digital manipulation—it is about championing technology advancement that amplifies our democratic values, deepens our global engagement, and preserves the core of our common humanity in an increasingly AI-powered and image-driven world. By centering people’s rights in AI development, we not only protect our individual freedoms, but also fortify our shared democratic future.

https://www.cfr.org/blog/protect-democracy-deepfake-era-we-need-bring-voices-those-defending-it-frontlines

Should HRDs worry about Artificial Intelligence?

April 12, 2023

Towards Life 3.0: Ethics and Technology in the 21st Century is a talk series organized and facilitated by Dr. Mathias Risse, Director of the Carr Center for Human Rights Policy, and Berthold Beitz Professor in Human Rights, Global Affairs, and Philosophy. Drawing inspiration from the title of Max Tegmark’s book, Life 3.0: Being Human in the Age of Artificial Intelligence, the series draws upon a range of scholars, technology leaders, and public interest technologists to address the ethical aspects of the long-term impact of artificial intelligence on society and human life.

On 20 April you can join a 45-minute session with WITNESS’ new Executive Director Sam Gregory [see: https://humanrightsdefenders.blog/2023/04/05/sam-gregory-finally-in-the-lead-at-witness/] on how AI is changing the media and information landscape; the creative opportunities for activists and the threats to truth created by synthetic image, video, and audio; and the people and places being impacted but left out of the current conversation.

Sam says: “Don’t let the hype cycle around ChatGPT and Midjourney pull you into panic. WITNESS has been preparing for this moment for the past decade with foundational research and global advocacy on synthetic and manipulated media. Through structured work with human rights defenders, journalists, and technologists on four continents, we’ve identified the most pressing concerns posed by these emerging technologies and concrete recommendations on what we must do now.

“We have been listening to critical voices around the globe to anticipate and design thoughtful responses to the impact of deepfakes and generative AI on our ability to discern the truth. WITNESS has proactively worked on responsible practices for synthetic media as part of the Partnership on AI and helped develop technical standards for understanding media origins and edits with the C2PA. We have directly influenced standards for authenticity infrastructure and continue to forcefully advocate for centering equity and human rights concerns in the development of detection technologies. We are convening the people in our communities who have the most to gain and lose from these technologies to hear what they want and need, most recently in Kenya at the #GenAIAfrica convening.”

Register here: wit.to/AI-webinar

To Counter Domestic Extremism, Human Rights First Launches Pyrra

December 26, 2021

New enterprise uses machine learning to detect extremism across online platforms

On 7 December 2021, Human Rights First announced a new enterprise, originally conceived in its Innovation Lab as Extremist Explorer, that will help to track online extremism as the threats of domestic terrorism continue to grow.

Human Rights First originally developed Extremist Explorer to monitor and challenge violent domestic actors who threaten all our human rights. To generate the level of investment needed to quickly scale up this tool, the organization launched it as a venture-backed enterprise called Pyrra Technologies.

“There is an extremist epidemic online that leads to radical violence,” said Human Rights First CEO Michael Breen. “In the 21st century, the misuse of technology by extremists is one of the greatest threats to human rights. We set up our Innovation Lab to discover, develop, and deploy new technology to both protect and promote human rights.  Pyrra is the first tool the lab has launched.”

Pyrra’s custom AI sweeps sites to detect potentially dangerous content, extremist language, violent threats, and harmful disinformation across social media sites, chatrooms, and forums.
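
Pyrra’s models are proprietary, but the underlying pattern, training a text classifier and then sweeping new posts for likely threats to surface for human review, can be sketched in a few lines of Python. The training examples below are toy stand-ins for real labeled data.

```python
# Toy sketch of the sweep-and-flag pattern a tool like Pyrra automates
# (this is not Pyrra's code): train a text classifier, then score new
# posts and queue likely violent threats for human review. The training
# set here is an invented stand-in for real labeled data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "we will make them pay in blood",
    "they don't deserve to live",
    "great turnout at the farmers market",
    "the game starts at seven tonight",
]
train_labels = ["threat", "threat", "benign", "benign"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

new_posts = ["someone should make them pay", "pie recipe inside"]
for post, label in zip(new_posts, model.predict(new_posts)):
    if label == "threat":
        print("queue for analyst review:", post)
```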

“We’re in the early innings of threats and disinformation emerging from a proliferating number of smaller social media platforms with neither the resources nor the will to remove violative content,” Welton Chang, founding CEO of Pyrra and former CTO at Human Rights First, said at the launch announcement. “Pyrra takes the machine learning suite we started building at Human Rights First, greatly expands its capabilities and combines it with a sophisticated user interface and workflow to make the work of detecting violent threats and hate speech more efficient and effective.”

The Anti-Defamation League’s Center on Extremism has been an early user of the technology. 
“To have a real impact, it’s not enough to react after an event happens; it’s not enough to know how extremists operate in online spaces. We must be able to see what’s next, to get ahead of extremism,” said Oren Segal, Vice President of the Center on Extremism at the ADL. “That’s why it’s been so exciting for me and my team to see how this tool has evolved over time. We’ve seen the insights and how they can lead to real-world impact in the fight against hate.”

 “It really is about protecting communities and our inclusive democracy,” said Heidi Beirich, PhD, Chief Strategy Officer and Co-Founder, Global Project Against Hate and Extremism.  “The amount of information has exploded, now we’re talking about massive networks and whole ecosystems – and the threats that are embedded in those places. The Holy Grail for people who work against extremism is to have an AI system that’s intuitive, easy to work with, that can help researchers track movements that are hiding out in the dark reaches of the internet. And that’s what Pyrra does.”

Moving forward, Human Rights First will continue to partner with Pyrra to monitor extremism while building more tools to confront human rights abuses. 

Kristofer Goldsmith, Advisor on Veterans Affairs and Extremism at Human Rights First and the CEO of Sparverius, researches extremism. “We have to spend days and days and days of our lives in the worst places on the internet to get extremists’ context. But we’re at a point now where we cannot monitor all of these platforms at once. The AI powering Pyrra can,” he said.

Pyrra’s users, including human rights defenders, journalists, and pro-democracy organizations, can benefit from the tool as well as from additional tools to monitor extremism coming out of Human Rights First’s Innovation Lab.

“This is a great step for the Innovation Lab,” said Goldsmith. “We’ve got many other projects like Pyrra that we hope to be launching that we expect to have real-world impact in stopping real-world violent extremism.”   

https://www.humanrightsfirst.org/press-release/counter-domestic-extremism-human-rights-first-launches-pyrra

Social assistance fraud detection system violates human rights, says Dutch court

February 12, 2020

An algorithmic risk-rating system implemented by the Dutch state to predict the likelihood that social security claimants will commit benefits or tax fraud violates human rights law, a court in the Netherlands has ruled. The Dutch Risk Indication System (SyRI) legislation uses an undisclosed algorithmic risk model to profile citizens and has been directed exclusively at neighborhoods with mostly low-income and minority residents. Human rights defenders have called it a “welfare surveillance state.”
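
SyRI’s actual model was never disclosed, which is central to the case. Purely for illustration, a generic “risk indication” system of this kind can be sketched as linked administrative records going in and an opaque score coming out; the features, weights and threshold below are invented.

```python
# SyRI's actual model was undisclosed; this generic sketch only shows
# the shape of a "risk indication" system: linked administrative data
# in, an opaque fraud-risk score out. Features, weights and threshold
# are invented for illustration.
import numpy as np

def risk_score(features: np.ndarray, weights: np.ndarray) -> float:
    """Logistic score in [0, 1]; above a threshold, a citizen is flagged."""
    return float(1.0 / (1.0 + np.exp(-features @ weights)))

# e.g. indicators drawn from linked benefits, housing and employment records
citizen = np.array([1.0, 0.0, 3.2, 1.0])
weights = np.array([0.8, -0.2, 0.15, 0.5])  # never published for SyRI
if risk_score(citizen, weights) > 0.7:
    print("flagged for fraud investigation")  # with no explanation given
```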

Several civil society organizations in the Netherlands and two citizens brought legal action against SyRI, seeking to block its use, and the court ordered an immediate halt to the system. The ruling is being hailed as historic by human rights defenders. The court based its reasoning on European human rights law, specifically the right to privacy established by Article 8 of the European Convention on Human Rights (ECHR), rather than on the provision in the EU data protection framework (GDPR) that relates to automated processing.

Article 22 of the GDPR gives individuals the right not to be subject to solely automated decision-making where it produces legal or similarly significant effects. But there may be some uncertainty about whether this applies when there is a human somewhere in the loop, such as reviewing an objection to a decision. In this case, the court avoided such questions by finding that SyRI directly interferes with the rights established in the ECHR. Specifically, the court determined that the SyRI legislation does not pass the balancing test required by Article 8 of the ECHR, under which any social interest must be weighed against the interference with people’s private lives, and a fair and reasonable balance must be struck.

In the court’s opinion, the automated risk assessment system in its current form did not pass this test. Legal experts suggest that the decision sets some clear limits on how the public sector can make use of AI tools, with the court particularly critical of the lack of transparency about how the algorithmic rating system worked….

The UN Special Rapporteur on extreme poverty and human rights, Philip Alston, who intervened in the case by providing the court with a human rights analysis, welcomed the ruling, describing it as “a clear victory for all those who are justifiably concerned about the serious threats that digital welfare systems represent for human rights.” “This decision sets a strong legal precedent for other courts to follow. This is one of the first times that a court has stopped the use of digital technologies and abundant digital information by welfare authorities for human rights reasons,” he added in a press release.

In 2018, Alston warned that the UK government’s rush to apply digital technologies and data tools to re-engineer the delivery of large-scale public services risked having a huge impact on the human rights of the most vulnerable. The decision of the Dutch court could therefore have some short-term implications for UK policy in this area.

The ruling does not close the door to the use by states of automated profiling systems, but it does make it clear that in Europe human rights laws must be fundamental for the design and implementation of risk tools.

It remains to be seen whether the European Commission will push for pan-European limits on specific uses of AI in the public sector, such as for social security assessments. A recently leaked draft of a white paper on AI regulation suggests that it is leaning towards risk assessments and a patchwork of risk-based rules.

https://newsdio.com/blackboxs-social-assistance-fraud-detection-system-violates-dutch-human-rights-and-judicial-rules-newsdio/44625/

Excellent news: HURIDOCS to receive $1 million from Google for AI work

May 8, 2019

Google announced on 7 May 2019 that the Geneva-based NGO HURIDOCS is one of 20 organizations that will share $25 million in grants from the Google AI Impact Challenge, an open call to nonprofits, social enterprises, and research institutions to submit ideas for using artificial intelligence (AI) to help address societal challenges. Over 2,600 organizations from around the world applied.

HURIDOCS will receive a grant of $1 million to develop and use machine learning methods to extract, explore and connect relevant information in laws, jurisprudence, victim testimonies, and resolutions. With these methods, the NGO will work with partners to make such documents more easily and freely accessible. This will benefit anyone interested in using human rights precedents and laws, for example lawyers representing victims of human rights violations or students researching non-discrimination.

The machine learning work to liberate information from documents is grounded in more than a decade of work that HURIDOCS has done to provide free access to information. Through pioneering partnerships with the Institute for Human Rights and Development in Africa (IHRDA) and the Center for Justice and International Law (CEJIL), HURIDOCS has co-created some of the most used public human rights databases. A key challenge in creating these databases has been the time-consuming and error-prone manual entry of information, a challenge the machine learning techniques will be used to overcome.
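
As an illustration of what “extracting and connecting” can mean here (this is not HURIDOCS’ actual pipeline), even a small script can pull candidate metadata, such as treaty-article citations, out of judgment text automatically rather than relying on manual entry:

```python
# Illustrative sketch, not HURIDOCS' actual pipeline: pull treaty-
# article citations out of case-law text as candidate metadata labels,
# instead of keying them in by hand.
import re

JUDGMENT = (
    "The Court finds a violation of Article 3 of the Convention and "
    "no violation of Article 8. Compare Article 3 case law cited above."
)

def extract_articles(text: str) -> dict[str, int]:
    """Count references to 'Article N' as candidate metadata labels."""
    counts: dict[str, int] = {}
    for match in re.finditer(r"Article\s+(\d+)", text):
        label = f"Article {match.group(1)}"
        counts[label] = counts.get(label, 0) + 1
    return counts

print(extract_articles(JUDGMENT))  # {'Article 3': 2, 'Article 8': 1}
```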

“We have been experimenting with machine learning techniques for more than two years”, said Natalie Widmann, Artificial Intelligence Specialist at HURIDOCS. “We have changed our approach countless times, but we see a clear path to how they can be leveraged in groundbreaking ways to democratise access to information.” HURIDOCS will use the grant from Google to work with partners to co-create the solutions, carefully weighing ethical concerns of automation and focusing on social impact. All the work will be done in the open, including all code being released publicly.

“We are truly excited by the opportunity to use these technologies to address a problem that has been holding the human rights movement back,” said Friedhelm Weinberg, Executive Director of HURIDOCS. “We are thankful to Google for the support and look forward to working with their experts and what will be a fantastic cohort of co-grantees.”

“We received thousands of applications to the Google AI Impact Challenge and are excited that HURIDOCS was selected to receive funding and expertise from Google. AI is at a nascent stage when it comes to the value it can have for the social impact sector, and we look forward to seeing the outcomes of this work and considering where there is potential to do even more,” said Jacquelline Fuller, President of Google.org.

Next week, the HURIDOCS team will travel to San Francisco to work with the other grantees, Google AI experts, project managers and startup specialists from Google’s Launchpad Accelerator in a program that will last six months, from May to November 2019. Each organization will be paired with a Google expert who will meet with them regularly for coaching sessions, and will also have access to other Google resources and expert mentorship.

Download the press release in English or Spanish. Learn more about the other Google AI Impact Challenge grantees on Google’s blog.

For more on HURIDOCS’ history: https://www.huridocs.org/tag/history-of-huridocs/ and for some of my other posts: https://humanrightsdefenders.blog/tag/huridocs/


Microsoft cites human rights concerns in turning down facial-recognition sales

April 30, 2019
FILE PHOTO: The Microsoft sign is shown on top of the Microsoft Theatre in Los Angeles, California, U.S., October 19, 2018. REUTERS/Mike Blake

Joseph Menn reported on 16 April 2019 on kfgo.com that Microsoft had rejected a California law enforcement agency’s request to install facial recognition technology in officers’ cars and body cameras due to human rights concerns. Microsoft concluded it would lead to innocent women and minorities being disproportionately held for questioning, because the artificial intelligence had been trained mostly on pictures of white men. Multiple research projects have found that such AI makes more cases of mistaken identity with women and minorities.
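
The disparity those research projects describe is measurable: compare error rates per demographic group on the same test set. A toy Python sketch with invented numbers:

```python
# Comparing facial-recognition error rates across demographic groups;
# the data below are invented purely to show the measurement.
from collections import defaultdict

# (group, predicted_match, actual_match) for each face comparison
results = [
    ("lighter-skinned men", True, True),
    ("lighter-skinned men", False, False),
    ("darker-skinned women", True, False),   # a false match
    ("darker-skinned women", False, False),
]

errors = defaultdict(lambda: [0, 0])         # group -> [mistakes, total]
for group, predicted, actual in results:
    errors[group][0] += predicted != actual
    errors[group][1] += 1

for group, (wrong, total) in errors.items():
    print(f"{group}: {wrong}/{total} comparisons wrong")
```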

“Anytime they pulled anyone over, they wanted to run a face scan” against a database of suspects, company President Brad Smith said, without naming the agency. After thinking through the uneven impact, “we said this technology is not your answer.” Speaking at a Stanford University conference on “human-centered artificial intelligence,” Smith said Microsoft had also declined a deal to install facial recognition on cameras blanketing the capital city of an unnamed country that the nonprofit Freedom House had deemed not free. Smith said it would have suppressed freedom of assembly there.

On the other hand, Microsoft did agree to provide the technology to an American prison, after the company concluded that the environment would be limited and that it would improve safety inside the unnamed institution. Smith explained the decisions as part of a commitment to human rights that he said was increasingly critical as rapid technological advances empower governments to conduct blanket surveillance, deploy autonomous weapons and take other steps that might prove impossible to reverse….

Smith has called for greater regulation of facial recognition and other uses of artificial intelligence, and he warned Tuesday that without it, companies amassing the most data might win the race to develop the best AI in a “race to the bottom.”

He shared the stage with the United Nations High Commissioner for Human Rights, Michelle Bachelet, who urged tech companies to refrain from building new tools without weighing their impact. “Please embody the human rights approach when you are developing technology,” said Bachelet, a former president of Chile.

[see also my older: https://humanrightsdefenders.blog/2015/11/19/contrasting-views-of-human-rights-in-business-world-bank-and-it-companies/]

https://kfgo.com/news/articles/2019/apr/16/microsoft-turned-down-facial-recognition-sales-on-human-rights-concerns/

Development of Amnesty’s Panic Button App

September 11, 2013

Having referred last week to three different (and competing?) techno initiatives to increase the security of HRDs, I would be remiss not to note the post of 11 September 2013 by Tanya O’Carroll on the Amnesty International blog concerning the development of the Panic Button. Over the next couple of months, she will be keeping you posted about the Panic Button. If you want to join the community of people working on Panic Button, please leave a comment on the site mentioned below or email panicbutton@amnesty.org.

via Inside the development of Amnesty’s new Panic Button App | Amnesty’s global human rights blog.