Posts Tagged ‘Artificial intelligence’

New study shows that on-line attacks against women human rights defenders doubled

December 16, 2025

On 15 December 2025, Emma Woollacott reported in Forbes on a new study showing that 7 in 10 women human rights defenders, activists and journalists have experienced online violence in the course of their work. Produced through the UN Women’s ACT to End Violence against Women program and supported by the European Commission, “Tipping point: The chilling escalation of violence against women in the public sphere” draws on a global survey of women from 119 countries.

Along with online threats and harassment, more than 4 in 10 have experienced offline harm linked to online abuse — more than twice as many as in 2020, the researchers found. This can range from verbal harassment right up to physical assault, stalking and swatting.

“These figures confirm that digital violence is not virtual — it’s real violence with real-world consequences,” said Sarah Hendricks, director of policy, programme and intergovernmental division at UN Women.

“Women who speak up for our human rights, report the news or lead social movements are being targeted with abuse designed to shame, silence and push them out of public debate. Increasingly, those attacks do not stop at the screen — they end at women’s front doors. We cannot allow online spaces to become platforms for intimidation that silence women and undermine democracy.”

And AI is only making things worse, with almost 1 in 4 women human rights defenders, activists and journalists having experienced AI-assisted online violence, such as deepfake imagery and manipulated content. This is most often the case for writers and public communicators who focus on human rights issues, such as social media content creators and influencers, for whom the figure reaches 30%.

“Gender-based online violence is not a new phenomenon, but its scale certainly is,” said report co-author Lea Hellmueller, associate professor in journalism at City St George’s and associate dean for research in the School of Communication and Creativity.

“AI tools enable the production of cheaper and faster abusive content, which is detrimental to women in public life — and beyond,” Hellmueller added.

Tech firms are partly responsible, the researchers said, with the report calling for better tools to identify, monitor, report and fend off AI-assisted online violence. The researchers also want to see more legal and regulatory mechanisms to force tech firms to prevent their technologies being deployed against women in the public sphere.

“Our next steps include publishing data from the survey about the opportunities for, and barriers to, law enforcement and legal redress for survivors of online violence,” said Julie Posetti, chair of the Centre for Journalism and Democracy at City St George’s, University of London, one of the authors of the report. “We will also focus on creative efforts to counter gender-based online violence and policy recommendations to help hold the Big Tech facilitators of this dangerous phenomenon accountable.”

https://www.forbes.com/sites/emmawoollacott/2025/12/15/online-attacks-against-women-human-rights-workers-double-in-five-years/

https://www.globalissues.org/news/2025/12/15/41907

https://theconversation.com/ai-tools-are-being-used-to-subject-women-in-public-life-to-online-violence-271703

New report: a retrospective on business frameworks and actions to support defenders

August 10, 2025

ISHR launched a new report that summarises and assesses progress and challenges over the past decade in initiatives to protect human rights defenders in the context of business, covering the frameworks, guidance, initiatives and tools that have emerged at local, national and regional levels. The protection of human rights defenders in relation to business activities is vital.

Defenders play a crucial role in safeguarding human rights and environmental standards against adverse impacts of business operations globally. Despite their essential work, defenders frequently face severe risks, including threats, surveillance, legal and judicial harassment, and violence.  

According to the Business and Human Rights Resource Centre (BHRRC), more than 6,400 attacks on defenders linked to business activities have been documented over the past decade, underlining the urgency of addressing these challenges. While this situation is not new, and civil society organisations have consistently pushed for accountability for and prevention of these attacks, public awareness of the issue increased with early efforts to raise the visibility of defenders at the Human Rights Council, the adoption of key thematic resolutions, and the raising of defenders’ voices at other fora such as the UN Forum on Business and Human Rights.

The report ‘Business Frameworks and Actions to Support Human Rights Defenders: a Retrospective and Recommendations’ takes stock of the frameworks, tools, and advocacy developed over the last decade to protect and support human rights defenders in the context of business activities and operations.

The report examines how various standards have been operationalised through company policies, investor guidance, multi-stakeholder initiatives, legal reforms, and sector-specific commitments. At the same time, it highlights that, despite these advancements, actual implementation by businesses remains inadequate: effective corporate action is still insufficient, a critical gap that must be urgently addressed so that defenders can safely carry out their vital work protecting human rights and environmental justice. To address this, and drawing on case studies, civil society tracking tools, and policy analysis, the report identifies key barriers to effective protection and proposes targeted recommendations.
Download the report

WITNESS’ Sam Gregory gave the Gruber Lecture on artificial intelligence and human rights advocacy

June 23, 2025

Sam Gregory delivered the Spring 2025 Gruber Distinguished Lecture on Global Justice on March 24, 2025, at 4:30 pm at Yale Law School. The lecture was co-moderated by his faculty hosts, Binger Clinical Professor Emeritus of Human Rights Jim Silk ’89 and David Simon, assistant dean for Graduate Education, senior lecturer in Global Affairs and director of the Genocide Studies Program at Yale University. Gregory is the executive director of WITNESS, a human rights nonprofit organization that empowers individuals and communities to use technology to document human rights abuses and advocate for justice. He is an internationally recognized expert on using digital media and smartphone witnessing to defend and protect human rights. With over two decades of experience in the intersection of technology, media, and human rights, Gregory has become a leading figure in the field of digital advocacy. He previously launched the “Prepare, Don’t Panic” initiative in 2018 to prompt concerted, effective, and context-sensitive policy responses to deepfakes and deceptive AI issues worldwide. He focuses on leveraging emerging solutions like authenticity infrastructure, trustworthy audiovisual witnessing, and livestreamed/co-present storytelling to address misinformation, media manipulation, and rising authoritarianism.

Gregory’s lecture, entitled “Fortifying Truth, Trust and Evidence in the Face of Artificial Intelligence and Emerging Technology,” focused on the challenges that artificial intelligence poses to truth, trust, and human rights advocacy. Generative AI’s rapid development and impact on how media is made, edited, and distributed affects how digital technology can be used to expose human rights violations and defend human rights. Gregory considered how photos and videos – essential tools for human rights documentation, evidence, and storytelling – are increasingly distrusted in an era of widespread skepticism and technological advancements that enable deepfakes and AI-generated content. AI can not only create false memories, but also “acts as a powerful conduit for plausible deniability.” Gregory discussed AI’s impact on the ability to believe and trust human rights voices and its role in restructuring the information ecosystem. The escalating burden of proof for human rights activists and the overwhelming volume of digital content underscore how AI can both aid and hinder accountability efforts.

In the face of these concerns, Gregory emphasized the need for human rights defenders to work proactively to shape AI systems. He stressed that AI requires a foundational, systemic architecture that ensures information systems serve, rather than undermine, human rights work. Gregory reflected that “at the fundamental (level), this is work enabled by technology, but it’s not about technology.” Digital technologies provide new mechanisms for exposing violence and human rights abuse; the abuse itself has not changed. He also pointed to the need to invest in robust community archives to protect the integrity of human rights evidence against false memories. Stressing the importance of epistemic justice, digital media literacy, and equitable access to technology and technological knowledge, Gregory discussed WITNESS’ work in organizing for digital media literacy and access in human rights digital witnessing, particularly in response to generative AI. One example he highlighted was training individuals how to film audiovisual witnessing videos in ways that are difficult for AI to replicate.

As the floor opened to questions, Gregory pointed to “authenticity infrastructure” as one building block to verify content and maintain truth. Instead of treating information as a binary between AI and not AI, it is necessary to understand the entire “recipe” of how information is created, locating it along the continuum of how AI permeates modern communication. AI must be understood, not disregarded. This new digital territory will only become more relevant in human rights work, Gregory maintained. The discussion also covered regulatory challenges, courts’ struggles with AI-generated and audiovisual evidence at large, the importance of AI-infused media literacy, and the necessity of strong civil society institutions in the face of corporate media control. A recording of the lecture is available here.

https://law.yale.edu/centers-workshops/gruber-program-global-justice-and-womens-rights/gruber-lectures/samuel-gregory

International conference on ‘AI and Human Rights’ in Doha

May 27, 2025

Chairperson of the NHRC Maryam bint Abdullah Al Attiyah

The international conference ‘Artificial Intelligence and Human Rights: Opportunities, Risks, and Visions for a Better Future,’ gets under way in Doha today. Organised by the National Human Rights Committee (NHRC), the two-day event is being held in collaboration with the UN Development Programme (UNDP), the Office of the High Commissioner for Human Rights (OHCHR), the Global Alliance of National Human Rights Institutions (GANHRI), and Qatar’s Ministry of Communications and Information Technology (MCIT) and National Cyber Security Agency, along with other international entities active in the fields of digital tools and technology.

Chairperson of the NHRC Maryam bint Abdullah Al Attiyah said in a statement Monday that the conference discusses one of the most prominent human rights issues of our time, one that is becoming increasingly important given the tremendous and growing progress in the field of artificial intelligence, which many human rights activists fear will impact the rights of individuals worldwide.

She added that the developments in AI observed every day require the establishment of a legal framework governing the rights of every individual, whether related to privacy or other rights. The framework must also regulate and control the technologies developed by companies, ensuring that rights are not infringed upon and that the development of AI technologies does not become synonymous with the pursuit of financial gain while neglecting potential infringements on the rights of individuals and communities.

She emphasised that the conference aims to discuss the impact of AI on human rights, not limiting itself to the challenges it poses to individuals’ lives but also identifying the opportunities it presents to human rights specialists around the world. She noted that the coming period must see a deep focus on this area, which is evolving by the hour.

The conference is expected to bring together around 800 partners from around the world to discuss the future of globalisation. Target attendees include government officials, policymakers, AI and technology experts, human rights defenders and activists, legal professionals, AI ethics specialists, civil society representatives, academics and researchers, international organisations, private sector companies, and technology developers.

…The conference is built around 12 core themes and key topics. It focuses on the foundations of artificial intelligence, including fundamental concepts such as machine learning and natural language processing. It also addresses AI and privacy: its impact on personal data, surveillance, and privacy rights. Other themes include bias and discrimination, with an emphasis on addressing algorithmic bias and ensuring fairness, as well as freedom of expression and the role of AI in content moderation, censorship, and the protection of free speech.

The international conference aims to explore the impact of AI on human rights and fundamental freedoms, analyse the opportunities and risks associated with AI from a human rights perspective, present best practices and standards for the ethical use of AI, and engage with policymakers, technology experts, civil society, and the private sector to foster multi-stakeholder dialogue. It also seeks to propose actionable policy and legal framework recommendations to ensure that AI development aligns with human rights principles.

Participating experts will address the legal and ethical frameworks, laws, policies, and ethical standards for the responsible use of artificial intelligence. They will also explore the theme of “AI and Security,” including issues related to militarisation, armed conflicts, and the protection of human rights. Additionally, the conference will examine AI and democracy, focusing on the role of AI in shaping democratic institutions and promoting inclusive participation.

Conference participants will also discuss artificial intelligence and the future of media from a human rights-based perspective, with a focus on both risks and innovation. The conference will further examine the transformations brought about by AI in employment and job opportunities, its impact on labor rights and economic inequality, as well as the associated challenges and prospects.

As part of its ongoing commitment to employing technology in service of humanity and supporting the ethical use of emerging technologies, the Ministry of Communications and Information Technology (MCIT) is also partnering in organising the conference.

For some other posts on Qatar, see: https://humanrightsdefenders.blog/tag/qatar/

https://www.gulf-times.com/article/705199/qatar/international-conference-on-ai-and-human-rights-opens-in-doha-tuesday

McGovern Foundation awards $73.5 million for human-centered Artificial Intelligence

January 6, 2025

On 23 December 2024, the Boston-based Patrick J. McGovern Foundation announced grants totaling $73.5 million awarded in 2024 in support of human-centered AI.

Awarded to 144 nonprofit, academic, and governmental organizations in 11 countries, the grants will support the development and delivery of AI solutions built for long-term societal benefit and the creation of institutions designed to address the opportunities and challenges this emerging era presents. Grants will support organizations leveraging data science and AI to drive tangible change in a variety of areas with urgency, including climate change, human rights, media and journalism, crisis response, digital literacy, and health equity.

Gifts include $200,000 to MIT Solve to support the 2025 AI for Humanity Prize; $364,000 to Clear Global to enable scalable, multilingual, voice-powered communication and information channels for crisis-affected communities; $1.25 million to the Aspen Institute to enhance public understanding and policy discourse around AI; and $1.5 million to the United Nations Educational, Scientific and Cultural Organization (UNESCO) to advance ethical AI governance through civil society networks, policy frameworks, and knowledge resources.

Amnesty International to support Amnesty’s Algorithmic Accountability Lab to mobilize and empower civil society to evaluate AI systems and pursue accountability for AI-driven harms ($750,000)

HURIDOCS to use machine learning to enhance human rights data management and advocacy ($400,000)

“This is not a moment to react; it’s a moment to lead,” said McGovern Foundation president Vilas Dhar. “We believe that by investing in AI solutions grounded in human values, we can harness technology’s immense potential to benefit communities and individuals alike. AI can amplify human dignity, protect the vulnerable, drive global prosperity, and become a force for good.”

https://philanthropynewsdigest.org/news/mcgovern-foundation-awards-73.5-million-for-human-centered-ai

Two young human rights defenders, Raphael Mimoun and Nikole Yanez, on tech for human rights

May 16, 2024

Each year, Mozilla highlights the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through its Rise 25 Awards. On 13 May 2024 it was the turn of Raphael Mimoun, a builder dedicated to making tools that empower journalists and human rights defenders. Aron Yohannes talked with Raphael about the launch of his app, Tella, combatting misinformation online, the future of social media platforms and more.

Raphael Mimoun: So I never worked in tech per se and only developed a passion for technology as I was working in human rights. It was really a time when, basically, the power of technology to support movements and to head movements around the world was kind of getting fully understood. You had the Arab Spring, you had Occupy Wall Street, you had all of these movements for social justice, for democracy, for human rights, that were very much kind of spread through technology, right? Technology played a very, very important role. But just after that, it was kind of like a hangover where we all realized, “OK, it’s not just all good and fine.” You also have the flip side, which is government spying on the citizens, identifying citizens through social media, through hacking, and so on and so forth — harassing them, repressing them online, but translating into offline violence, repression, and so on. And so I think that was the moment where I was like, “OK, there is something that needs to be done around technology,” specifically for those people who are on the front lines because if we just treat it as a tool — one of those neutral tools — we end up getting very vulnerable to violence, and it can be from the state, it can also be from online mobs, armed groups, all sort of things.

There’s so much misinformation out there now that it’s so much harder to tell the difference between what’s real and fake news. Twitter was such a reliable tool of information before, but that’s changed. Do you think that any of these other platforms will be able to help make up for so much of the misinformation that is out there?

I think we all feel the weight of that loss of losing Twitter. Twitter was always a large corporation, partially owned by a billionaire. It was never kind of a community tool, but there was still an ethos, right? Like a philosophy, or the values of the platform were still very much like community-oriented, right? It was that place for activists and human rights defenders and journalists and communities in general to voice their opinions. So I think that loss was very hard on all of us.

I see a lot of misinformation on Instagram as well. There is very little moderation there. It’s also all visual, so if you want traction, you’re going to try to put something that is very spectacular that is very eye catchy, and so I think that leads to even more misinformation.

I am pretty optimistic about some of the alternatives that have popped up since Twitter’s downfall. Mastodon actually blew up after Twitter, but it’s much older — I think it’s 10 years old by now. And there’s Bluesky. So I think those two are building up, and they offer spaces that are much more decentralized with much more autonomy and agency to users. You are more likely to be able to customize your feeds. You are more likely to have tools for your own safety online, right? All of those different things that I feel like you could never get on Threads, on Instagram or on Twitter, or anything like that. I’m hoping it’s actually going to be able to recreate the community that is very much what Twitter was. It’s never going to be exactly the same thing, but I’m hoping we will get there. And I think the fact that it is decentralized, open source and with very much a philosophy of agency and autonomy is going to lead us to a place where these social networks can’t actually be taken over by a power hungry billionaire.

What do you think is the biggest challenge that we face in the world this year on and offline, and then how do you think we can combat it?

I don’t know if that’s the biggest challenge, but one of the really big challenges that we’re seeing is how the digital is meeting real life and how people who are active online or on the phone or the computer are getting repressed for that work in real life. So we developed an app called Tella, which encrypts and hides files on your phone, right? So you take a photo or a video of a demonstration or police violence, or whatever it is, and then if the police tries to catch you and grab your phone to delete it, they won’t be able to find it, or at least it will be much more difficult to find it. Or it would be uploaded already. And things like that, I think, is one of the big things that we’re seeing again. I don’t know if that’s the biggest challenge online at the moment, but one of the big things we’re seeing is just that it’s becoming completely normalized to grab someone’s phone or check someone’s computer at the airport, or at the border, in the street and go through it without any form of accountability. People have no idea what the regulations are, what the rules are, what’s allowed, what’s not allowed. And when they abuse those powers, is there any recourse? Most places in the world, at least, where we are working, there is definitely no recourse. And so I think that connection between thinking you’re just taking a photo for social media but actually the repercussion is so real because you’re going to have someone take your phone, and maybe they’re going to delete the photo, or maybe they’re going to detain you. Or maybe they’re going to beat you up — like all of those different things. I think this is one of the big challenges that we’re seeing at the moment, and something that isn’t traditionally thought of as an internet issue or an online digital rights issue because it’s someone taking a physical device and looking through it. It often gets overlooked, and then we don’t have much kind of advocacy around it, or anything like that.

What do you think is one action everybody can take to make the world and our lives online a little bit better?

I think social media has a lot of negative consequences for everyone’s mental health and many other things, but for people who are active and who want to be active, consider social networks that are open source, privacy-friendly and decentralized. Bluesky, the Fediverse —including Mastodon — are examples because I think it’s our responsibility to kind of build up a community there, so we can move away from those social media platforms that are owned by either billionaires or massive corporations, who only want to extract value from us and who spy on us and who censor us. And I feel like if everyone committed to being active on those social media platforms — one way of doing that is just having an account, and whatever you post on one, you just post on the other — I feel like that’s one thing that can make a big difference in the long run.

We started Rise25 to celebrate Mozilla’s 25th anniversary. What do you hope that people are celebrating in the next 25 years?

I was talking a little bit earlier about how we are building a culture that is more privacy-centric, like people are becoming aware, becoming wary about all these things happening to the data, the identity, and so on. And I do think we are at a turning point in terms of the technology that’s available to us, the practices and what we need as users to maintain our privacy and our security. I feel like in honestly not even 25, I think in 10 years, if things go well — which it’s hard to know in this field — and if we keep on building what we already are building, I can see how we will have an internet that is a lot more privacy-centric, where communications are private by default. Where end-to-end encryption is ubiquitous in our communication, in our emailing. Where social media isn’t extractive and people have actual ownership and agency in the social networks they use. Where data mining is no longer a thing. I feel like overall, I can see how the infrastructure is now getting built, and that in 10, 15 or 25 years, we will be in a place where we can use the internet without having to constantly watch over our shoulder to see if someone is spying on us or seeing who has access and all of those things.

Lastly, what gives you hope about the future of our world?

That people are not getting complacent and that it is always people who are standing up to fight back. We saw it at Google, with people standing up as part of the No Tech for Apartheid coalition and losing their jobs. We’re seeing it on university campuses around the country. We’re seeing it on the streets. People fight back. That’s where any change has ever come from: the bottom up. I think now, more than ever, people are willing to put something on the line to make sure that they defend their rights. So I think that really gives me hope.

—————

The second story comes from Amnesty International, 14 May 2024 [https://www.amnesty.org/en/latest/campaigns/2024/05/i-come-from-the-world-of-technology-where-there-are-very-few-women/]

Nikole Yanez is a computer scientist by training, and a human rights defender from Honduras. She is passionate about feminism, the impact of the internet and protecting activists. She was first drawn to human rights through her work as a reporter with a local community radio station. After surviving the coup d’état in Honduras in 2009, Nikole broadened her approach to focus her activism on technology. When she applied for the Digital Forensics Fellowship with the Amnesty Tech Security Lab in 2022, she was looking to learn more about cybersecurity and apply what she learnt with the organizations and collectives she works with regularly.  

She highlighted her commitment to fostering a network of tech-savvy communities across Latin America in an interview with Elina Castillo, Amnesty Tech’s Advocacy and Policy Advisor:

I grew up in Honduras, where I lived through the coup d’état, which took place in 2009. It was a difficult time where rights were non-existent, and people were constantly afraid. I thought it was something you only read about in history books, but it was happening in front of my eyes. I felt myself just trying to survive, but as time went by it made me stronger and want to fight for justice. Despite the difficulties, people in my community remained hopeful and we created a community radio station, which broadcast stories about everyday people and their lives with the aim of informing people about their human rights. I was a reporter, developing stories about individual people and their fight for their rights. From there, I found a passion for working with technology and it inspired me to train to become a computer scientist.

I am always looking for ways to connect technology with activism, and specifically to support women and Indigenous people in their struggles. As much as technology presents risks for human rights defenders, it also offers opportunities for us to better protect ourselves and strengthen our movements. Technology can bring more visibility to our movements, and it can empower our work by allowing us to connect with other people and learn new strategies.

Is there one moment where you realized how to connect what you’ve been doing with feminism with technology?

In my work, my perspective as a feminist helps me centre the experiences and needs of marginalised people for trainings and outreach. It is important for me to publicly identify as an Afrofeminist in a society where there is impunity for gendered and racist violence that occurs every day. In Honduras we need to put our energy into supporting these communities whose rights are most violated, and whose stories are invisible.

For example, in 2006, I was working with a Union to install the Ubuntu operating system (an open-source operating system) on their computers. We realized that the unionists didn’t know how to use a computer, so we created a space for digital literacy and learning about how to use a computer at the same time. This became not just a teaching exercise, but an exercise for me to figure out how to connect these tools to what people are interested in. Something clicked for me in this moment, and this experience helped solidify my approach to working on technology and human rights.

There are not many women working in technology and human rights. I don’t want to be one of the only women, so my goal is to see more women colleagues working on technical issues. I want to make it possible for women to work in this field. I also want to motivate more women to create change within the intersection of technology and human rights. Using a feminist perspective and approach, we ask big questions about how we are doing the work, what our approach needs to be, and who we need to work with.

For me, building a feminist internet means building an internet for everyone. This means creating a space where we do not reproduce sexist violence, where we find a community that responds to the people, to the groups, and to the organizations that fight for human rights. This includes involving women and marginalised people in building the infrastructure, in the configuration of servers, and in the development of protocols for how we use all these tools.

In Honduras, there aren’t many people trained in digital forensics analysis, yet there are organizations that are always seeking me out to help check their phones. The fellowship helped me learn about forensic analysis on phones and computers and tied the learning to what I’m actually doing in my area with different organizations and women’s rights defenders. The fellowship was practical and rooted in the experience of civil society organizations.

Nikole Yanez running a technology and human rights session in Honduras

How do you explain the importance of digital forensics?

Well, first, it’s incredibly relevant for women’s rights defenders. Everyone wants to know if their phone has been hacked. That’s the first thing they ask: “Can you actually know whether your phone has been hacked?” and “How do I know? Can you do it for me? How?” Those are the things that come up in my trainings and conversations.

I like to help people to think about protection as a process, something ongoing, because we use technology all day long. There are organizations and people that take years to understand that. So, it’s not something that can be achieved in a single conversation. Sometimes a lot of things need to happen, including bad things, before people really take this topic seriously…

I try to use very basic tools when I’m doing digital security support, to say you can do this on whatever device you’re on, this is a prevention tool. It’s not just applying technical knowledge, it’s also a process of explaining, training, showing how this work is not just for hackers or people who know a lot about computers.

One of the challenges is to spread awareness about cybersecurity among Indigenous and grassroots organizations, which aren’t hyper-connected and don’t think that digital forensics work is relevant to them. Sometimes what we do is completely disconnected from their lives, and they ask us: “But what are you doing?” So, our job is to understand their questions and where they are coming from and ground our knowledge-sharing in what people are actually doing.

To someone reading this piece and saying, oh, this kind of resonates with me, where do I start, what would your recommendation be?

If you are a human rights defender, I would recommend that you share your knowledge with your collective. You can teach them the importance of knowing about these practices and putting them to use, as well as encouraging training to prevent digital attacks, because, in the end, forensic analysis is a reaction to something that has already happened.

We can take a lot of preventive measures to ensure the smallest possible impact. That’s the best way to start. And it’s crucial to stay informed, to keep reading, to stay up to date with the news and build community.

If there are girls or gender non-conforming people reading this who are interested in technical issues, it doesn’t matter if you don’t have a degree or a formal education, as long as you like it. Most hackers I’ve met become hackers because they dive into a subject, they like it and they’re passionate about it.

See also: https://www.amnesty.org/en/what-we-do/technology/online-violence/

blog.mozilla.org/en/internet-culture/raphael-mimoun-mozilla-rise-25-human-rights-justice-journalists/

In the deepfake era, we need to hear the Human Rights Defenders

December 19, 2023

In a blog post for the Council on Foreign Relations of 18 December 2023, Raquel Vazquez Llorente argues that “Artificial intelligence is increasingly used to alter and generate content online. As development of AI continues, societies and policymakers need to ensure that it incorporates fundamental human rights.” Raquel is the Head of Law and Policy, Technology Threats and Opportunities at WITNESS.

The urgency of integrating human rights into the DNA of emerging technologies has never been more pressing. Through my role at WITNESS, I’ve observed first-hand the profound impact of generative AI across societies, and most importantly, on those defending democracy at the frontlines.

The recent elections in Argentina were marked by the widespread use of AI in campaigning material. Generative AI has also been used to target candidates with embarrassing content (increasingly of a sexual nature), to generate political ads, and to support candidates’ campaigns and outreach activities in India, the United States, Poland, Zambia, and Bangladesh (to name a few). The overall result of the lack of strong frameworks for the use of synthetic media in political settings has been a climate of mistrust regarding what we see or hear.

Not all digital alteration is harmful, though. Part of my work involves identifying how emerging technologies can foster positive change. For instance, with appropriate disclosure, synthetic media could be used to enhance voter education and engagement. Generative AI could help create informative content about candidates and their platforms, or of wider election processes, in different languages and formats, improving inclusivity or reducing barriers for underdog or outsider candidates. For voters with disabilities, synthetic media could provide accessible formats of election materials, such as sign language avatars or audio descriptions of written content. Satirical deepfakes could engage people who might otherwise be disinterested in politics, bringing attention to issues that might not be covered in mainstream media. We need to celebrate and protect these uses.

As two billion people around the world go to voting stations next year in fifty countries, there is a crucial question: how can we build resilience into our democracy in an era of audiovisual manipulation? When AI can blur the lines between reality and fiction with increasing credibility and ease, discerning truth from falsehood becomes not just a technological battle, but a fight to uphold democracy.

From conversations with journalists, activists, technologists and other communities impacted by generative AI and deepfakes, I have learnt that the effects of synthetic media on democracy are a mix of new, old, and borrowed challenges.

Generative AI introduces a daunting new reality: inconvenient truths can be denied as deep faked, or at least facilitate claims of plausible deniability to evade accountability. The burden of proof, or perhaps more accurately, the “burden of truth” has shifted onto those circulating authentic content and holding the powerful to account. This is not just a crisis of identifying what is fake. It is also a crisis of protecting what is true. When anything and everything can be dismissed as AI-generated or manipulated, how do we elevate the real stories of those defending our democracy at the frontlines?

But AI’s impact doesn’t stop at new challenges; it exacerbates old inequalities. Those who are already marginalized and disenfranchised—due to their gender, ethnicity, race or belonging to a particular group—face amplified risks. AI is like a magnifying glass for exclusion, and its harms are cumulative. AI deepens existing vulnerabilities, bringing a serious threat to principles of inclusivity and fairness that lie at the heart of democratic values. Similarly, sexual deepfakes can have an additional chilling effect, discouraging women, LGBTQ+ people and individuals from minoritized communities from participating in public life, thus eroding the diversity and representativeness that are essential for a healthy democracy.

Lastly, much as with social media, where we failed to incorporate the voices of the global majority, we have borrowed previous mistakes. The shortcomings in moderating content, combating misinformation, and protecting user privacy have had profound implications on democracy and social discourse. Similarly, in the context of AI, we are yet to see meaningful policies and regulation that not only consult globally those that are being impacted by AI but, more importantly, center the solutions that affected communities beyond the United States and Europe prioritize. This highlights a crucial gap: the urgent need for a global perspective in AI governance, one that learns from the failures of social media in addressing cultural and political nuances across different societies.

As we navigate AI’s impact on democracy and human rights, our approach to these challenges should be multifaceted. We must draw on a blend of strategies—ones that address the immediate ‘new’ realities of AI, respond to the ‘old’ but persistent challenges of inequality, and incorporate ‘borrowed’ wisdom from our past experiences.

First, we must ensure that new AI regulations and companies’ policies are steeped in human rights law and principles, such as those enshrined in the Universal Declaration of Human Rights. In the coming years, one of the most important areas in socio-technical expertise will be the ability to translate human rights protections into AI policies and legislation.

While anchoring new policies in human rights is crucial, we should not lose sight of the historical context of these technological advancements. We must look back as we move forward. As with technological advancements of the past, we should remind ourselves that progress is not how far you go, but how many people you bring along. We should really ask, is it technological progress if it is not inclusive, if it reproduces a disadvantage? Technological advancement that leaves people behind is not true progress; it is an illusion of progress that perpetuates inequality and systems of oppression. This past weekend marked twenty-five years since the adoption of the UN Declaration on Human Rights Defenders, which recognizes the key role of human rights defenders in realizing the Universal Declaration of Human Rights and other legally binding treaties. In the current wave of excitement around generative AI, the voices of those protecting human rights at the frontlines have rarely been more vital.

Our journey towards a future shaped by AI is also about learning from the routes we have already travelled, especially those from the social media era. Synthetic media has to be understood in the context of the broader information ecosystem. We are monetizing the spread of falsehoods while keeping local content moderators and third-party fact-checkers on precarious salaries, and putting the blame on platform users for not being educated enough to spot the fakery. The only way to align democratic values with technology goals is by both placing responsibility and establishing accountability across the whole information and AI ecosystem, from foundation model researchers, to those commercializing AI tools, to those creating and distributing content.

In weaving together these new, old, and borrowed strands of thought, we create a powerful blueprint for steering the course of AI. This is not just about countering a wave of digital manipulation—it is about championing technology advancement that amplifies our democratic values, deepens our global engagement, and preserves the core of our common humanity in an increasingly AI-powered and image-driven world. By centering people’s rights in AI development, we not only protect our individual freedoms, but also fortify our shared democratic future.

https://www.cfr.org/blog/protect-democracy-deepfake-era-we-need-bring-voices-those-defending-it-frontlines

Should HRDs worry about Artificial Intelligence?

April 12, 2023

Towards Life 3.0: Ethics and Technology in the 21st Century is a talk series organized and facilitated by Dr. Mathias Risse, Director of the Carr Center for Human Rights Policy, and Berthold Beitz Professor in Human Rights, Global Affairs, and Philosophy. Drawing inspiration from the title of Max Tegmark’s book, Life 3.0: Being Human in the Age of Artificial Intelligence, the series draws upon a range of scholars, technology leaders, and public interest technologists to address the ethical aspects of the long-term impact of artificial intelligence on society and human life.

On 20 April you can join for 45 minutes with WITNESS’ new Executive Director Sam Gregory [see: https://humanrightsdefenders.blog/2023/04/05/sam-gregory-finally-in-the-lead-at-witness/] on how AI is changing the media and information landscape; the creative opportunities for activists and threats to truth created by synthetic image, video, and audio; and the people and places being impacted but left out of the current conversation.

Sam says: “Don’t let the hype-cycle around ChatGPT and Midjourney pull you into panic. WITNESS has been preparing for this moment for the past decade with foundational research and global advocacy on synthetic and manipulated media. Through structured work with human rights defenders, journalists, and technologists on four continents, we’ve identified the most pressing concerns posed by these emerging technologies and concrete recommendations on what we must do now.

“We have been listening to critical voices around the globe to anticipate and design thoughtful responses to the impact of deepfakes and generative AI on our ability to discern the truth. WITNESS has proactively worked on responsible practices for synthetic media as a part of the Partnership on AI and helped develop technical standards to understand media origins and edits with the C2PA. We have directly influenced standards for authenticity infrastructure and continue to forcefully advocate for centering equity and human rights concerns in the development of detection technologies. We are convening with the people in our communities who have most to gain and lose from these technologies to hear what they want and need, most recently in Kenya at the #GenAIAfrica convening.”

 Register here: wit.to/AI-webinar 

To Counter Domestic Extremism, Human Rights First Launches Pyrra

December 26, 2021

New enterprise uses machine learning to detect extremism across online platforms

On 7 December 2021, Human Rights First announced a new enterprise, originally conceived in its Innovation Lab as Extremist Explorer, that will help to track online extremism as the threats of domestic terrorism continue to grow.

Human Rights First originally developed Extremist Explorer to monitor and challenge violent domestic actors who threaten all our human rights. To generate the level of investment needed to quickly scale up this tool, the organization launched it as a venture-backed enterprise called Pyrra Technologies.

“There is an extremist epidemic online that leads to radical violence,” said Human Rights First CEO Michael Breen. “In the 21st century, the misuse of technology by extremists is one of the greatest threats to human rights. We set up our Innovation Lab to discover, develop, and deploy new technology to both protect and promote human rights.  Pyrra is the first tool the lab has launched.”

Pyrra’s custom AI sweeps sites to detect potentially dangerous content, extremist language, violent threats, and harmful disinformation across social media sites, chatrooms, and forums.

“We’re in the early innings of threats and disinformation emerging from a proliferating number of smaller social media platforms with neither the resources nor the will to remove violative content,” Welton Chang, founding CEO of Pyrra and former CTO at Human Rights First, said at the launch announcement. “Pyrra takes the machine learning suite we started building at Human Rights First, greatly expands on its capabilities and combines it with a sophisticated user interface and workflow to make the work of detecting violent threats and hate speech more efficient and effective.”

The Anti-Defamation League’s Center on Extremism has been an early user of the technology. 
“To have a real impact, it’s not enough to react after an event happens, it’s not enough to know how extremists operate in online spaces, we must be able to see what’s next, to get ahead of extremism,” said Oren Segal, Vice President, Center on Extremism at the ADL. “That’s why it’s been so exciting for me and my team to see how this tool has evolved over time.  We’ve seen the insights, and how they can lead to real-world impact in the fight against hate.”   

 “It really is about protecting communities and our inclusive democracy,” said Heidi Beirich, PhD, Chief Strategy Officer and Co-Founder, Global Project Against Hate and Extremism.  “The amount of information has exploded, now we’re talking about massive networks and whole ecosystems – and the threats that are embedded in those places. The Holy Grail for people who work against extremism is to have an AI system that’s intuitive, easy to work with, that can help researchers track movements that are hiding out in the dark reaches of the internet. And that’s what Pyrra does.”

Moving forward, Human Rights First will continue to partner with Pyrra to monitor extremism while building more tools to confront human rights abuses. 

Kristofer Goldsmith, Advisor on Veterans Affairs and Extremism, Human Rights First and the CEO of Sparverius, researches extremism. “We have to spend days and days and days of our lives in the worst places on the internet to get extremists’ context.  But we’re at a point now where we cannot monitor all of these platforms at once. The AI powering Pyrra can,” he said.

Pyrra’s users, including human rights defenders, journalists, and pro-democracy organizations, can benefit from the tool, as well as from additional tools to monitor extremism coming from Human Rights First’s Innovation Lab.

“This is a great step for the Innovation Lab,” said Goldsmith. “We’ve got many other projects like Pyrra that we hope to be launching that we expect to have real-world impact in stopping real-world violent extremism.”   

https://www.humanrightsfirst.org/press-release/counter-domestic-extremism-human-rights-first-launches-pyrra

Social assistance fraud detection system violates human rights, says Dutch court

February 12, 2020

An algorithmic risk rating system implemented by the Dutch state to predict the likelihood that social security claimants will commit benefits or tax fraud violates human rights law, a court in the Netherlands ruled. The Dutch Risk Indication System (SyRI) legislation uses an undisclosed algorithmic risk model to profile citizens and has been directed exclusively at neighborhoods with mostly low-income and minority residents. Human rights defenders have called it a “welfare surveillance state.”

Several civil society organizations in the Netherlands and two citizens brought legal action against SyRI, seeking to block its use. The court ordered an immediate halt to the use of the system. The ruling is being hailed as historic by human rights defenders, and the court based its reasoning on European human rights law, specifically the right to privacy established by Article 8 of the European Convention on Human Rights (ECHR), rather than on the specific provision of the EU data protection framework (GDPR) that relates to automated processing.

Article 22 of the GDPR gives individuals the right not to be subject to decisions based solely on automated processing where those decisions produce legal or similarly significant effects. But there may be some uncertainty about whether this applies if there is a human somewhere in the loop, for example reviewing an objection to a decision. In this case, the court avoided such questions by finding that SyRI directly interferes with the rights established in the ECHR. Specifically, the court determined that the SyRI legislation does not pass the balancing test required by Article 8 of the ECHR, which demands that any societal interest be weighed against the interference with people’s private lives and that a fair and reasonable balance be struck.

In its current form, the automated risk assessment system did not pass this test, in the opinion of the court. Legal experts suggest that the decision sets some clear limits on how the public sector in the United Kingdom can make use of AI tools, and the court was particularly critical of the lack of transparency about how the algorithmic rating system worked…

The UN special rapporteur on extreme poverty and human rights, Philip Alston, who intervened in the case by providing the court with a human rights analysis, welcomed the ruling, describing it as “a clear victory for all those who are justifiably concerned about the serious threats that digital welfare systems represent for human rights.” “This decision sets a strong legal precedent for other courts to follow. This is one of the first times that a court stops the use of digital technologies and abundant digital information by welfare authorities for human rights reasons,” he added in a press release.

In 2018, Alston warned that the UK government’s rush to apply digital technologies and data tools to re-engineer the delivery of large-scale public services risked having a huge impact on the human rights of the most vulnerable. The decision of the Dutch court could therefore have some short-term implications for UK policy in this area.

The ruling does not close the door to the use by states of automated profiling systems, but it does make it clear that in Europe human rights laws must be fundamental for the design and implementation of risk tools.

…It remains to be seen whether the Commission will push for pan-European limits on specific uses of AI in the public sector, such as for social security assessments. A recently leaked draft of a white paper on AI regulation suggests that it is leaning towards risk assessments and a mosaic of risk-based rules.

https://newsdio.com/blackboxs-social-assistance-fraud-detection-system-violates-dutch-human-rights-and-judicial-rules-newsdio/44625/