HURIDOCS is recruiting a Global Repository Coordinator, a fixed-term role focused on a project that's been in the works here for a long time. This Global Repository is where so many threads finally come together: years of work on machine learning, documentation, and human rights data, all coming into one shared "playground" to help unlock judgments, decisions, and human rights information at scale.
We're building this together with partners, including The Patrick J. McGovern Foundation and the Oxford Institute of Technology and Justice (supported by the Clooney Foundation for Justice). This short video from Oxford gives a glimpse of the vision (at min 2:17).
https://lnkd.in/gmQaxxad
Because this is such an exciting and ambitious role, we're looking for someone who's dynamic, independent, and a great problem-solver. Someone who knows how to push things forward, and also when to slow down and consult with communities, partners, donors, and more. Someone comfortable with the international human rights ecosystem and committed to leveraging technology for justice.
Often when we're recruiting for roles like this we talk about looking for a unicorn, but this time that doesn't quite work. I think we're really looking for a tiger. Someone who can help us actually make this thing real.
HURIDOCS is looking for a Project Coordinator to play a key role in building the Global Repository of Human Rights.
Applications sent by email or direct message will not be considered. Please apply via the form provided in the job description.
#NGOJobs #ProjectCoordinator #RemoteJobs
Posts Tagged ‘Artificial intelligence’
HURIDOCS looking for a Global Repository Coordinator
February 9, 2026

New study shows that on-line attacks against women human rights defenders doubled
December 16, 2025
On 15 December 2025 Emma Woollacott, in Forbes, referred to a new study showing that 7 in 10 women human rights defenders, activists and journalists have experienced online violence in the course of their work. Produced through the UN Women's ACT to End Violence against Women program and supported by the European Commission, "Tipping point: The chilling escalation of violence against women in the public sphere" draws on a global survey of women from 119 countries.
Along with online threats and harassment, more than 4 in 10 have experienced offline harm linked to online abuse, more than twice as many as in 2020, the researchers found. This can range from verbal harassment right up to physical assault, stalking and swatting.
"These figures confirm that digital violence is not virtual: it's real violence with real-world consequences," said Sarah Hendricks, director of the policy, programme and intergovernmental division at UN Women.
"Women who speak up for our human rights, report the news or lead social movements are being targeted with abuse designed to shame, silence and push them out of public debate. Increasingly, those attacks do not stop at the screen; they end at women's front doors. We cannot allow online spaces to become platforms for intimidation that silence women and undermine democracy."
And AI is only making things worse, with almost 1 in 4 women human rights defenders, activists and journalists having experienced AI-assisted online violence, such as deepfake imagery and manipulated content. This is most often the case for writers and public communicators who focus on human rights issues, such as social media content creators and influencers, for whom the figure reaches 30%.
"Gender-based online violence is not a new phenomenon, but its scale certainly is," said report co-author Lea Hellmueller, associate professor in journalism at City St George's and associate dean for research in the School of Communication and Creativity.
"AI tools enable the production of cheaper and faster abusive content, which is detrimental to women in public life, and beyond," Hellmueller added.
Tech firms are partly responsible, the researchers said, with the report calling for better tools to identify, monitor, report and fend off AI-assisted online violence. The researchers also want to see more legal and regulatory mechanisms to force tech firms to prevent their technologies being deployed against women in the public sphere.
"Our next steps include publishing data from the survey about the opportunities for, and barriers to, law enforcement and legal redress for survivors of online violence," said Julie Posetti, chair of the Centre for Journalism and Democracy at City St George's, University of London, and one of the report's authors. "We will also focus on creative efforts to counter gender-based online violence and policy recommendations to help hold the Big Tech facilitators of this dangerous phenomenon accountable."
New report: a retrospective on business frameworks and actions to support defenders
August 10, 2025
ISHR launched a new report that summarises and assesses a decade of progress and challenges in initiatives to protect human rights defenders in the context of business: the frameworks, guidance, initiatives and tools that have emerged at local, national and regional levels. The protection of human rights defenders in relation to business activities is vital.
Defenders play a crucial role in safeguarding human rights and environmental standards against the adverse impacts of business operations globally. Despite their essential work, defenders frequently face severe risks, including threats, surveillance, legal and judicial harassment, and violence.
According to the Business and Human Rights Resource Centre (BHRRC), more than 6,400 attacks on defenders linked to business activities have been documented over the past decade, underscoring the urgency of addressing these challenges. While this situation is not new, and civil society organisations have constantly pushed for accountability for and prevention of these attacks, public awareness of the issue increased with early efforts to raise the visibility of defenders at the Human Rights Council and the adoption of key thematic resolutions, as well as the raising of defenders' voices at other fora such as the UN Forum on Business and Human Rights.
The report "Business Frameworks and Actions to Support Human Rights Defenders: a Retrospective and Recommendations" takes stock of the frameworks, tools, and advocacy developed over the last decade to protect and support human rights defenders in the context of business activities and operations.
The report examines how various standards have been operationalised through company policies, investor guidance, multi-stakeholder initiatives, legal reforms, and sector-specific commitments. At the same time, it highlights how, despite these advancements, actual implementation by businesses remains inadequate, a critical gap that must be urgently addressed to ensure defenders can safely carry out their vital work protecting human rights and environmental justice. To address this gap, the report draws on case studies, civil society tracking tools, and policy analysis to identify key barriers to effective protection and propose targeted recommendations.
WITNESS' Sam Gregory gave the Gruber Lecture on artificial intelligence and human rights advocacy
June 23, 2025
Sam Gregory delivered the Spring 2025 Gruber Distinguished Lecture on Global Justice on March 24, 2025, at 4:30 pm at Yale Law School. The lecture was co-moderated by his faculty hosts, Binger Clinical Professor Emeritus of Human Rights Jim Silk '89 and David Simon, assistant dean for Graduate Education, senior lecturer in Global Affairs and director of the Genocide Studies Program at Yale University. Gregory is the executive director of WITNESS, a human rights nonprofit organization that empowers individuals and communities to use technology to document human rights abuses and advocate for justice. He is an internationally recognized expert on using digital media and smartphone witnessing to defend and protect human rights. With over two decades of experience at the intersection of technology, media, and human rights, Gregory has become a leading figure in the field of digital advocacy. He previously launched the "Prepare, Don't Panic" initiative in 2018 to prompt concerted, effective, and context-sensitive policy responses to deepfakes and deceptive AI issues worldwide. He focuses on leveraging emerging solutions like authenticity infrastructure, trustworthy audiovisual witnessing, and livestreamed/co-present storytelling to address misinformation, media manipulation, and rising authoritarianism.
Gregory's lecture, entitled "Fortifying Truth, Trust and Evidence in the Face of Artificial Intelligence and Emerging Technology," focused on the challenges that artificial intelligence poses to truth, trust, and human rights advocacy. Generative AI's rapid development and impact on how media is made, edited, and distributed affects how digital technology can be used to expose human rights violations and defend human rights. Gregory considered how photos and videos, essential tools for human rights documentation, evidence, and storytelling, are increasingly distrusted in an era of widespread skepticism and technological advancements that enable deepfakes and AI-generated content. AI can not only create false memories, but also "acts as a powerful conduit for plausible deniability." Gregory discussed AI's impact on the ability to believe and trust human rights voices and its role in restructuring the information ecosystem. The escalating burden of proof for human rights activists and the overwhelming volume of digital content underscore how AI can both aid and hinder accountability efforts.
In the face of these concerns, Gregory emphasized the need for human rights defenders to work proactively to shape AI systems. He stressed that AI requires a foundational, systemic architecture that ensures information systems serve, rather than undermine, human rights work. Gregory reflected that "at the fundamental (level), this is work enabled by technology, but it's not about technology." Digital technologies provide new mechanisms for exposing violence and human rights abuse; the abuse itself has not changed. He also pointed to the need to invest in robust community archives to protect the integrity of human rights evidence against false memories. Stressing the importance of epistemic justice, digital media literacy, and equitable access to technology and technological knowledge, Gregory discussed WITNESS' work in organizing for digital media literacy and access in human rights digital witnessing, particularly in response to generative AI. One example he highlighted was training individuals to film audiovisual witnessing videos in ways that are difficult for AI to replicate.
As the floor opened to questions, Gregory pointed to "authenticity infrastructure" as one building block to verify content and maintain truth. Instead of treating information as a binary between AI and not AI, it is necessary to understand the entire "recipe" of how information is created, locating it along the continuum of how AI permeates modern communication. AI must be understood, not disregarded. This new digital territory will only become more relevant in human rights work, Gregory maintained. The discussion also covered regulatory challenges, courts' struggles with AI-generated and audiovisual evidence at large, the importance of AI-infused media literacy, and the necessity of strong civil society institutions in the face of corporate media control. A recording of the lecture is available here.
International conference on ‘AI and Human Rights’ in Doha
May 27, 2025
The international conference 'Artificial Intelligence and Human Rights: Opportunities, Risks, and Visions for a Better Future' gets under way in Doha today. Organised by the National Human Rights Committee (NHRC), the two-day event is being held in collaboration with the UN Development Programme (UNDP), the Office of the High Commissioner for Human Rights (OHCHR), the Global Alliance of National Human Rights Institutions (GANHRI), and Qatar's Ministry of Communications and Information Technology (MCIT) and National Cyber Security Agency, along with other international entities active in the fields of digital tools and technology.
Chairperson of the NHRC Maryam bint Abdullah Al Attiyah said in a statement on Monday that the conference discusses one of the most prominent human rights issues of our time, one that is becoming increasingly important given the tremendous and growing progress in the field of artificial intelligence, which many human rights activists around the world fear will affect the rights of individuals worldwide.
She added that the developments in AI observed every day require the establishment of a legal framework governing the rights of every individual, whether related to privacy or other rights. The framework must also regulate and control the technologies developed by companies, ensuring that rights are not infringed upon and that the development of AI technologies does not become a pursuit of financial gain that neglects potential infringements of the rights of individuals and communities.
She emphasised that the conference aims to discuss the impact of AI on human rights, not only limiting itself to the challenges it poses to the lives of individuals, but also extending to identifying the opportunities it presents to human rights specialists around the world. She noted that the coming period must witness a deep focus on this area, which is evolving by the hour.
The conference is expected to bring together around 800 partners from around the world to discuss the future of globalisation. Target attendees include government officials, policymakers, AI and technology experts, human rights defenders and activists, legal professionals, AI ethics specialists, civil society representatives, academics and researchers, international organisations, private sector companies, and technology developers.
The conference is built around 12 core themes and key topics. It focuses on the foundations of artificial intelligence, including fundamental concepts such as machine learning and natural language processing. It also addresses AI and privacy: its impact on personal data, surveillance, and privacy rights. Other themes include bias and discrimination, with an emphasis on addressing algorithmic bias and ensuring fairness, as well as freedom of expression and the role of AI in content moderation, censorship, and the protection of free speech.
The international conference aims to explore the impact of AI on human rights and fundamental freedoms, analyse the opportunities and risks associated with AI from a human rights perspective, present best practices and standards for the ethical use of AI, and engage with policymakers, technology experts, civil society, and the private sector to foster multi-stakeholder dialogue. It also seeks to propose actionable policy and legal framework recommendations to ensure that AI development aligns with human rights principles.
Participating experts will address the legal and ethical frameworks, laws, policies, and ethical standards for the responsible use of artificial intelligence. They will also explore the theme of “AI and Security,” including issues related to militarisation, armed conflicts, and the protection of human rights. Additionally, the conference will examine AI and democracy, focusing on the role of AI in shaping democratic institutions and promoting inclusive participation.
Conference participants will also discuss artificial intelligence and the future of media from a human rights-based perspective, with a focus on both risks and innovation. The conference will further examine the transformations brought about by AI in employment and job opportunities, its impact on labor rights and economic inequality, as well as the associated challenges and prospects.
As part of its ongoing commitment to employing technology in service of humanity and supporting the ethical use of emerging technologies, the Ministry of Communications and Information Technology (MCIT) is also partnering in organising the conference.
For some other posts on Qatar, see: https://humanrightsdefenders.blog/tag/qatar/
McGovern Foundation awards $73.5 million for human-centered Artificial Intelligence
January 6, 2025
The Boston-based Patrick J. McGovern Foundation announced on 23 December 2024 grants totaling $73.5 million in 2024 in support of human-centered AI.
Awarded to 144 nonprofit, academic, and governmental organizations in 11 countries, the grants will support the development and delivery of AI solutions built for long-term societal benefit and the creation of institutions designed to address the opportunities and challenges this emerging era presents. Grants will support organizations leveraging data science and AI to drive tangible change in a variety of areas with urgency, including climate change, human rights, media and journalism, crisis response, digital literacy, and health equity.
Gifts include $200,000 to MIT Solve to support the 2025 AI for Humanity Prize; $364,000 to Clear Global to enable scalable, multilingual, voice-powered communication and information channels for crisis-affected communities; $1.25 million to the Aspen Institute to enhance public understanding and policy discourse around AI; and $1.5 million to the United Nations Educational, Scientific and Cultural Organization (UNESCO) to advance ethical AI governance through civil society networks, policy frameworks, and knowledge resources.
Amnesty International, to support Amnesty's Algorithmic Accountability Lab to mobilize and empower civil society to evaluate AI systems and pursue accountability for AI-driven harms ($750,000)
HURIDOCS, to use machine learning to enhance human rights data management and advocacy ($400,000)
"This is not a moment to react; it's a moment to lead," said McGovern Foundation president Vilas Dhar. "We believe that by investing in AI solutions grounded in human values, we can harness technology's immense potential to benefit communities and individuals alike. AI can amplify human dignity, protect the vulnerable, drive global prosperity, and become a force for good."
Two young human rights defenders, Raphael Mimoun and Nikole Yanez, on tech for human rights
May 16, 2024

Each year, Mozilla highlights the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through its Rise 25 Awards. On 13 May 2024 it was the turn of Raphael Mimoun, a builder dedicated to making tools that empower journalists and human rights defenders. Aron Yohannes talked with Raphael about the launch of his app, Tella, combatting misinformation online, the future of social media platforms and more.
Raphael Mimoun: So I never worked in tech per se and only developed a passion for technology as I was working in human rights. It was really a time when, basically, the power of technology to support movements and to head movements around the world was kind of getting fully understood. You had the Arab Spring, you had Occupy Wall Street, you had all of these movements for social justice, for democracy, for human rights, that were very much spread through technology, right? Technology played a very, very important role. But just after that, it was kind of like a hangover where we all realized, "OK, it's not just all good and fine." You also have the flip side, which is governments spying on their citizens, identifying citizens through social media, through hacking, and so on and so forth: harassing them, repressing them online, but translating into offline violence, repression, and so on. And so I think that was the moment where I was like, "OK, there is something that needs to be done around technology," specifically for those people who are on the front lines, because if we just treat it as a tool, one of those neutral tools, we end up very vulnerable to violence, and it can be from the state, it can also be from online mobs, armed groups, all sorts of things.
There's so much misinformation out there now that it's so much harder to tell the difference between what's real and what's fake news. Twitter was such a reliable tool of information before, but that's changed. Do you think that any of these other platforms can help make up for so much of the misinformation that is out there?
I think we all feel the weight of losing Twitter. Twitter was always a large corporation, partially owned by a billionaire. It was never really a community tool, but there was still an ethos, right? Like a philosophy, or the values of the platform were still very much community-oriented, right? It was that place for activists and human rights defenders and journalists and communities in general to voice their opinions. So I think that loss was very hard on all of us.
I see a lot of misinformation on Instagram as well. There is very little moderation there. It's also all visual, so if you want traction, you're going to try to post something very spectacular, very eye-catching, and I think that leads to even more misinformation.
I am pretty optimistic about some of the alternatives that have popped up since Twitter's downfall. Mastodon actually blew up after Twitter, but it's much older (I think it's 10 years old by now). And there's Bluesky. So I think those two are building up, and they offer spaces that are much more decentralized, with much more autonomy and agency for users. You are more likely to be able to customize your feeds. You are more likely to have tools for your own safety online, right? All of those different things that I feel like you could never get on Threads, on Instagram or on Twitter, or anything like that. I'm hoping it's actually going to be able to recreate the community that Twitter very much was. It's never going to be exactly the same thing, but I'm hoping we will get there. And I think the fact that it is decentralized, open source and with very much a philosophy of agency and autonomy is going to lead us to a place where these social networks can't actually be taken over by a power-hungry billionaire.
What do you think is the biggest challenge that we face in the world this year on and offline, and then how do you think we can combat it?
I don't know if it's the biggest challenge, but one of the really big challenges we're seeing is how the digital is meeting real life, and how people who are active online, on the phone or on the computer, are getting repressed for that work in real life. So we developed an app called Tella, which encrypts and hides files on your phone, right? So you take a photo or a video of a demonstration or police violence, or whatever it is, and then if the police try to catch you and grab your phone to delete it, they won't be able to find it, or at least it will be much more difficult to find. Or it would be uploaded already. One of the big things we're seeing is that it's becoming completely normalized to grab someone's phone or check someone's computer at the airport, or at the border, or in the street, and go through it without any form of accountability. People have no idea what the regulations are, what the rules are, what's allowed, what's not allowed. And when those powers are abused, is there any recourse? In most places in the world, at least where we are working, there is definitely no recourse. And so there's that connection: you think you're just taking a photo for social media, but the repercussion is so real, because you're going to have someone take your phone, and maybe they're going to delete the photo, or maybe they're going to detain you, or maybe they're going to beat you up, all of those different things. I think this is one of the big challenges we're seeing at the moment, and something that isn't traditionally thought of as an internet issue or an online digital rights issue, because it's someone taking a physical device and looking through it. It often gets overlooked, and then we don't have much advocacy around it, or anything like that.
What do you think is one action everybody can take to make the world and our lives online a little bit better?
I think social media has a lot of negative consequences for everyone's mental health and many other things, but for people who are active and who want to be active, consider social networks that are open source, privacy-friendly and decentralized. Bluesky and the Fediverse, including Mastodon, are examples, because I think it's our responsibility to build up a community there, so we can move away from those social media platforms that are owned by either billionaires or massive corporations, who only want to extract value from us and who spy on us and who censor us. And I feel like if everyone committed to being active on those social media platforms (one way of doing that is just having an account, and whatever you post on one, you just post on the other), that's one thing that can make a big difference in the long run.
We started Rise25 to celebrate Mozilla's 25th anniversary. What do you hope people will be celebrating in the next 25 years?
I was talking a little bit earlier about how we are building a culture that is more privacy-centric; people are becoming aware, becoming wary about all these things happening to their data, their identity, and so on. And I do think we are at a turning point in terms of the technology that's available to us, the practices, and what we need as users to maintain our privacy and our security. I feel like, honestly, in not even 25, I think in 10 years, if things go well (which it's hard to know in this field) and if we keep on building what we already are building, I can see how we will have an internet that is a lot more privacy-centric, where communications are private by default. Where end-to-end encryption is ubiquitous in our communication, in our emailing. Where social media isn't extractive and people have actual ownership and agency in the social networks they use. Where data mining is no longer a thing. I feel like overall, I can see how the infrastructure is now getting built, and that in 10, 15 or 25 years, we will be in a place where we can use the internet without having to constantly watch over our shoulder to see if someone is spying on us or seeing who has access and all of those things.
Lastly, what gives you hope about the future of our world?
That people are not getting complacent, and that it is always people who are standing up to fight back. We're seeing it everywhere. We saw it at Google, with people standing up as part of the No Tech for Apartheid coalition and losing their jobs. We're seeing it on university campuses around the country. We're seeing it on the streets. People fight back. That's where any change has ever come from: the bottom up. I think now, more than ever, people are willing to put something on the line to make sure that they defend their rights. So I think that really gives me hope.
—————
The second story comes from Amnesty International, 14 May 2024 [https://www.amnesty.org/en/latest/campaigns/2024/05/i-come-from-the-world-of-technology-where-there-are-very-few-women/]
Nikole Yanez is a computer scientist by training, and a human rights defender from Honduras. She is passionate about feminism, the impact of the internet and protecting activists. She was first drawn to human rights through her work as a reporter with a local community radio station. After surviving the coup d'état in Honduras in 2009, Nikole broadened her approach to focus her activism on technology. When she applied for the Digital Forensics Fellowship with the Amnesty Tech Security Lab in 2022, she was looking to learn more about cybersecurity and apply what she learnt with the organizations and collectives she works with regularly.
She highlighted her commitment to fostering a network of tech-savvy communities across Latin America in an interview with Elina Castillo, Amnesty Tech's Advocacy and Policy Advisor:
I grew up in Honduras, where I lived through the coup d'état, which took place in 2009. It was a difficult time when rights were non-existent, and people were constantly afraid. I thought it was something you only read about in history books, but it was happening in front of my eyes. I felt myself just trying to survive, but as time went by it made me stronger and made me want to fight for justice. Despite the difficulties, people in my community remained hopeful and we created a community radio station, which broadcast stories about everyday people and their lives with the aim of informing people about their human rights. I was a reporter, developing stories about individual people and their fight for their rights. From there, I found a passion for working with technology and it inspired me to train to become a computer scientist.
I am always looking for ways to connect technology with activism, and specifically to support women and Indigenous people in their struggles. As much as technology presents risks for human rights defenders, it also offers opportunities for us to better protect ourselves and strengthen our movements. Technology can bring more visibility to our movements, and it can empower our work by allowing us to connect with other people and learn new strategies.
Is there one moment where you realized how to connect what you've been doing with feminism with technology?
In my work, my perspective as a feminist helps me centre the experiences and needs of marginalised people for trainings and outreach. It is important for me to publicly identify as an Afrofeminist in a society where there is impunity for gendered and racist violence that occurs every day. In Honduras we need to put our energy into supporting these communities whose rights are most violated, and whose stories are invisible.
For example, in 2006, I was working with a union to install the Ubuntu operating system (an open-source operating system) on their computers. We realized that the unionists didn't know how to use a computer, so we created a space for digital literacy and learning how to use a computer at the same time. This became not just a teaching exercise, but an exercise for me in figuring out how to connect these tools to what people are interested in. Something clicked for me in that moment, and this experience helped solidify my approach to working on technology and human rights.
There are not many women working in technology and human rights. I don't want to be one of the only women, so my goal is to see more women colleagues working on technical issues. I want to make it possible for women to work in this field. I also want to motivate more women to create change within the intersection of technology and human rights. Using a feminist perspective and approach, we ask big questions about how we are doing the work, what our approach needs to be, and who we need to work with.

(Nikole Yanez, human rights defender, Honduras)
For me, building a feminist internet means building an internet for everyone. This means creating a space where we do not reproduce sexist violence, where we find a community that responds to the people, to the groups, and to the organizations that fight for human rights. This includes involving women and marginalised people in building the infrastructure, in the configuration of servers, and in the development of protocols for how we use all these tools.
In Honduras, there aren't many people trained in digital forensics analysis, yet there are organizations that are always seeking me out to help check their phones. The fellowship helped me learn about forensic analysis on phones and computers and tied the learning to what I'm actually doing in my area with different organizations and women's rights defenders. The fellowship was practical and rooted in the experience of civil society organizations.

How do you explain the importance of digital forensics?

Well, first, it's incredibly relevant for women's rights defenders. Everyone wants to know if their phone has been hacked. That's the first thing they ask: "Can you actually know whether your phone has been hacked?", "How do I know?", "Can you do it for me? How?" Those are the things that come up in my trainings and conversations.
I like to help people think about protection as a process, something ongoing, because we use technology all day long. There are organizations and people that take years to understand that. So, it's not something that can be achieved in a single conversation. Sometimes a lot of things need to happen, including bad things, before people really take this topic seriously…
I try to use very basic tools when I'm doing digital security support, to show that you can do this on whatever device you're on, and that this is a prevention tool. It's not just about applying technical knowledge; it's also a process of explaining, training, and showing how this work is not just for hackers or people who know a lot about computers.
One of the challenges is to spread awareness about cybersecurity among Indigenous and grassroots organizations, which aren't hyper-connected and don't think that digital forensics work is relevant to them. Sometimes what we do is completely disconnected from their lives, and they ask us: "But what are you doing?" So, our job is to understand their questions and where they are coming from, and to ground our knowledge-sharing in what people are actually doing.
To someone reading this piece and saying, oh, this kind of resonates with me, where do I start, what would your recommendation be?
If you are a human rights defender, I would recommend that you share your knowledge with your collective. You can teach them the importance of knowing about digital security measures and practicing them, and encourage training to prevent digital attacks, because, in the end, forensic analysis is a reaction to something that has already happened.
We can take a lot of preventive measures to ensure the smallest possible impact. That's the best way to start. And it's crucial to stay informed, to keep reading, to stay up to date with the news, and to build community.
If there are girls or gender non-conforming people reading this who are interested in technical issues, it doesn't matter if you don't have a degree or a formal education, as long as you like it. Most hackers I've met became hackers because they dove into a subject, they liked it, and they were passionate about it.

Nikole Yanez, Human Rights Defender, Honduras
See also: https://www.amnesty.org/en/what-we-do/technology/online-violence/
In the deepfake era, we need to hear the Human Rights Defenders
December 19, 2023

In a blog post for the Council on Foreign Relations (18 December 2023), Raquel Vazquez Llorente argues that "Artificial intelligence is increasingly used to alter and generate content online. As development of AI continues, societies and policymakers need to ensure that it incorporates fundamental human rights." Raquel is the Head of Law and Policy, Technology Threats and Opportunities at WITNESS.
The urgency of integrating human rights into the DNA of emerging technologies has never been more pressing. Through my role at WITNESS, I’ve observed first-hand the profound impact of generative AI across societies, and most importantly, on those defending democracy at the frontlines.
The recent elections in Argentina were marked by the widespread use of AI in campaigning material. Generative AI has also been used to target candidates with embarrassing content (increasingly of a sexual nature), to generate political ads, and to support candidates' campaigns and outreach activities in India, the United States, Poland, Zambia, and Bangladesh (to name a few). The overall result of the lack of strong frameworks for the use of synthetic media in political settings has been a climate of mistrust regarding what we see or hear.
Not all digital alteration is harmful, though. Part of my work involves identifying how emerging technologies can foster positive change. For instance, with appropriate disclosure, synthetic media could be used to enhance voter education and engagement. Generative AI could help create informative content about candidates and their platforms, or about wider election processes, in different languages and formats, improving inclusivity and reducing barriers for underdog or outsider candidates. For voters with disabilities, synthetic media could provide accessible formats of election materials, such as sign language avatars or audio descriptions of written content. Satirical deepfakes could engage people who might otherwise be uninterested in politics, bringing attention to issues that might not be covered in mainstream media. We need to celebrate and protect these uses.
As two billion people around the world go to voting stations next year in fifty countries, there is a crucial question: how can we build resilience into our democracy in an era of audiovisual manipulation? When AI can blur the lines between reality and fiction with increasing credibility and ease, discerning truth from falsehood becomes not just a technological battle, but a fight to uphold democracy.
From conversations with journalists, activists, technologists and other communities impacted by generative AI and deepfakes, I have learnt that the effects of synthetic media on democracy are a mix of new, old, and borrowed challenges.
Generative AI introduces a daunting new reality: inconvenient truths can be denied as deepfaked, or at least facilitate claims of plausible deniability to evade accountability. The burden of proof, or perhaps more accurately, the "burden of truth," has shifted onto those circulating authentic content and holding the powerful to account. This is not just a crisis of identifying what is fake. It is also a crisis of protecting what is true. When anything and everything can be dismissed as AI-generated or manipulated, how do we elevate the real stories of those defending our democracy at the frontlines?
But AI's impact doesn't stop at new challenges; it exacerbates old inequalities. Those who are already marginalized and disenfranchised, due to their gender, ethnicity, race, or belonging to a particular group, face amplified risks. AI is like a magnifying glass for exclusion, and its harms are cumulative. AI deepens existing vulnerabilities, posing a serious threat to the principles of inclusivity and fairness that lie at the heart of democratic values. Similarly, sexual deepfakes can have an additional chilling effect, discouraging women, LGBTQ+ people, and individuals from minoritized communities from participating in public life, thus eroding the diversity and representativeness that are essential for a healthy democracy.
Lastly, much as with social media, where we failed to incorporate the voices of the global majority, we have borrowed previous mistakes. The shortcomings in moderating content, combating misinformation, and protecting user privacy have had profound implications on democracy and social discourse. Similarly, in the context of AI, we are yet to see meaningful policies and regulation that not only consult globally those that are being impacted by AI but, more importantly, center the solutions that affected communities beyond the United States and Europe prioritize. This highlights a crucial gap: the urgent need for a global perspective in AI governance, one that learns from the failures of social media in addressing cultural and political nuances across different societies.
As we navigate AI’s impact on democracy and human rights, our approach to these challenges should be multifaceted. We must draw on a blend of strategiesâones that address the immediate ‘new’ realities of AI, respond to the ‘old’ but persistent challenges of inequality, and incorporate ‘borrowed’ wisdom from our past experiences.
First, we must ensure that new AI regulations and companiesâ policies are steeped in human rights law and principles, such as those enshrined in the Universal Declaration of Human Rights. In the coming years, one of the most important areas in socio-technical expertise will be the ability to translate human rights protections into AI policies and legislation.
While anchoring new policies in human rights is crucial, we should not lose sight of the historical context of these technological advancements. We must look back as we move forward. As with technological advancements of the past, we should remind ourselves that progress is not how far you go, but how many people you bring along. We should really ask, is it technological progress if it is not inclusive, if it reproduces a disadvantage? Technological advancement that leaves people behind is not true progress; it is an illusion of progress that perpetuates inequality and systems of oppression. This past weekend marked twenty-five years since the adoption of the UN Declaration on Human Rights Defenders, which recognizes the key role of human rights defenders in realizing the Universal Declaration of Human Rights and other legally binding treaties. In the current wave of excitement around generative AI, the voices of those protecting human rights at the frontlines have rarely been more vital.
Our journey towards a future shaped by AI is also about learning from the routes we have already travelled, especially those from the social media era. Synthetic media has to be understood in the context of the broader information ecosystem. We are monetizing the spread of falsehoods while keeping local content moderators and third-party fact-checkers on precarious salaries, and putting the blame on platform users for not being educated enough to spot the fakery. The only way to align democratic values with technology goals is by both placing responsibility and establishing accountability across the whole information and AI ecosystem, from foundation model researchers, to those commercializing AI tools, to those creating and distributing content.
In weaving together these new, old, and borrowed strands of thought, we create a powerful blueprint for steering the course of AI. This is not just about countering a wave of digital manipulation; it is about championing technological advancement that amplifies our democratic values, deepens our global engagement, and preserves the core of our common humanity in an increasingly AI-powered and image-driven world. By centering people's rights in AI development, we not only protect our individual freedoms, but also fortify our shared democratic future.
Should HRDs worry about Artificial Intelligence?
April 12, 2023

Towards Life 3.0: Ethics and Technology in the 21st Century is a talk series organized and facilitated by Dr. Mathias Risse, Director of the Carr Center for Human Rights Policy and Berthold Beitz Professor in Human Rights, Global Affairs, and Philosophy. Drawing inspiration from the title of Max Tegmark's book, Life 3.0: Being Human in the Age of Artificial Intelligence, the series brings together a range of scholars, technology leaders, and public interest technologists to address the ethical aspects of the long-term impact of artificial intelligence on society and human life.
On 20 April you can join a 45-minute session with WITNESS's new Executive Director Sam Gregory [see: https://humanrightsdefenders.blog/2023/04/05/sam-gregory-finally-in-the-lead-at-witness/] on how AI is changing the media and information landscape; the creative opportunities for activists and the threats to truth created by synthetic image, video, and audio; and the people and places being impacted but left out of the current conversation.
Sam says “Donât let the hype-cycle around ChatGPT and Midjourney pull you into panic, WITNESS has been preparing for this moment for the past decade with foundational research and global advocacy on synthetic and manipulated media. Through structured work with human rights defenders, journalists, and technologists on four continents, weâve identified the most pressing concerns posed by these emerging technologies and concrete recommendations on what we must do now.
We have been listening to critical voices around the globe to anticipate and design thoughtful responses to the impact of deepfakes and generative AI on our ability to discern the truth. WITNESS has proactively worked on responsible practices for synthetic media as a part of the Partnership on AI and helped develop technical standards to understand media origins and edits with the C2PA. We have directly influenced standards for authenticity infrastructure and continue to forcefully advocate for centering equity and human rights concerns in the development of detection technologies. We are convening with the people in our communities who have most to gain and lose from these technologies to hear what they want and need, most recently in Kenya at the #GenAIAfrica convening”.
Register here: wit.to/AI-webinar