ISHR has launched a new report summarising and assessing a decade of progress and challenges in protecting human rights defenders in the context of business, covering the frameworks, guidance, initiatives and tools that have emerged at local, national and regional levels. The protection of human rights defenders in relation to business activities is vital.
Defenders play a crucial role in safeguarding human rights and environmental standards against adverse impacts of business operations globally. Despite their essential work, defenders frequently face severe risks, including threats, surveillance, legal and judicial harassment, and violence.
According to the Business and Human Rights Resource Centre (BHRRC), more than 6,400 attacks on defenders linked to business activities have been documented over the past decade, underscoring the urgency of addressing these challenges. While this situation is not new, and civil society organisations have consistently pushed for accountability for and prevention of these attacks, public awareness of the issue grew with early efforts to raise the visibility of defenders at the Human Rights Council and the adoption of key thematic resolutions, as well as with efforts to raise defenders’ voices in other fora such as the UN Forum on Business and Human Rights.
The report, ‘Business Frameworks and Actions to Support Human Rights Defenders: a Retrospective and Recommendations’, takes stock of the frameworks, tools, and advocacy developed over the last decade to protect and support human rights defenders in the context of business activities and operations.
The report examines how various standards have been operationalised through company policies, investor guidance, multi-stakeholder initiatives, legal reforms, and sector-specific commitments. At the same time, it highlights that, despite these advancements, implementation by businesses remains inadequate: a critical gap that must be urgently addressed to ensure defenders can safely carry out their vital work protecting human rights and environmental justice. To address this gap, the report draws on case studies, civil society tracking tools, and policy analysis to identify key barriers to effective protection and propose targeted recommendations.
Call for applications is now open! Justice & Peace Netherlands is launching a new call for applications for human rights defenders at risk to participate in Shelter City Netherlands. The deadline for applications is 10 August 2025 at 23:59 CEST (Central European Summer Time). Help us reach more human rights defenders at risk and in need of temporary relocation to a safer space by sharing this call with your network.

Shelter City is a global movement of cities, organizations and people who stand side by side with human rights defenders at risk. Shelter City provides temporary safe and inspiring spaces where human rights defenders at risk can re-energize, receive tailor-made support and engage with allies. The term ‘human rights defender’ refers to the broad range of activists, journalists and independent media professionals, scholars, writers, artists, lawyers, civil and political rights defenders, civil society members, and others working peacefully to advance human rights and democracy around the world. From March 2026 onwards, 14 cities in the Netherlands will receive human rights defenders for a period of three months. At the end of their stay in the Netherlands, participants are expected to return home with new tools and energy to carry out their work.

Asser Institute Fellowship (only available for the English accompaniment beginning in September 2026): Justice & Peace and the Asser Institute have established a collaborative relationship to strengthen and support the capacity of local human rights defenders worldwide. In the context of the Institute’s Visiting Researchers Programme, the Asser Institute hosts one Fellow per year within the framework of the Shelter City initiative by Justice & Peace. The fellowship will take place in September 2026. The selected Fellow will carry out a research project during the three-month period and take part in other relevant human rights (research) activities of the Asser Institute. Towards the end of the three-month period, the Fellow will present their research findings at a public or closed event, and may also participate in other (public) events such as lectures or panel discussions.
To be eligible for Shelter City Netherlands, human rights defenders should meet the following conditions:

- They implement a non-violent approach in their work;
- They are threatened or otherwise under pressure due to their work or activism;
- They are willing and able to return to their country of origin after three months;
- They are willing to speak publicly about their experience or about human rights in their country, to the extent that their security situation allows;
- They have a conversational level* of English;
- They have a valid passport (with no less than 18 months of validity at the time of applying) or are willing to carry out the procedures necessary for its issuance. Justice & Peace covers the costs of issuing a passport and/or visa (if applicable);
- They are not subject to any measure or judicial prohibition on leaving the country;
- They are willing to begin their stay in the Netherlands around March 2026.

*By conversational English, we mean that participants’ level of English allows them to actively participate in training, speak about their work, communicate with the host city, etc.

Note that additional factors will be taken into consideration in the final round of selection, such as the added value of a stay in the Netherlands as well as gender, geographic, and thematic balance. Please note that only under exceptional circumstances are we able to accept human rights defenders currently residing in a third country.

Application forms must be submitted by 10 August 2025 at 23:59 CEST (Central European Summer Time). An independent commission will select the participants. Apply now!

Note that selected human rights defenders will not automatically participate in Shelter City, as Justice & Peace is not in control of issuing the required visas to enter the Netherlands. For more information, please contact us at info@sheltercity.org
The Internet Society (ISOC) and Global Cyber Alliance (GCA), on behalf of the Common Good Cyber secretariat, announced on 23 June 2025 the launch of the Common Good Cyber Fund, an initiative to strengthen global cybersecurity by supporting nonprofits that deliver core cybersecurity services protecting civil society actors and the Internet as a whole.
This first-of-its-kind effort to fund cybersecurity for the common good—for everyone, including those at the greatest risk—has the potential to fundamentally improve cybersecurity for billions of people around the world. The Common Good Cyber secretariat members working to address this challenge are: Global Cyber Alliance, Cyber Threat Alliance, CyberPeace Institute, Forum of Incident Response and Security Teams, Global Forum on Cyber Expertise, Institute for Security and Technology, and Shadowserver Foundation.
The Fund is a milestone in advancing Common Good Cyber, a global initiative led by the Global Cyber Alliance, to create sustainable funding models for the organizations and individuals working to keep the Internet safe.
Despite serving as a critical frontline defense for the security of the Internet, cybersecurity nonprofits remain severely underfunded—exposing millions of users, including journalists, human rights defenders, and other civil society groups. This underfunding also leaves the wider public exposed to increasingly frequent and sophisticated cyber threats.
“Common Good Cyber represents a pivotal step toward a stronger, more inclusive cybersecurity ecosystem. By increasing the resilience and long-term sustainability of nonprofits working in cybersecurity, improving access to trusted services for civil society organizations and human rights defenders, and encouraging greater adoption of best practices and security-by-design principles, the Common Good Cyber Fund ultimately helps protect and empower all Internet users,” said Philip Reitinger, President and CEO, Global Cyber Alliance.
The fund will support nonprofits that:
Maintain and secure core digital infrastructure, including DNS, routing, and threat intelligence systems for the public good;
Deliver cybersecurity assistance to high-risk actors through training, rapid incident response, and free-to-use tools.
These future beneficiaries support the Internet by enabling secure operations and supplying global threat intelligence. They shield civil society from cyber threats through direct, expert intervention and elevate the security baseline for the entire ecosystem by supporting the “invisible infrastructure” on which civil society depends.
The Fund will operate through a collaborative structure. The Internet Society will manage the Fund, and a representative and expert advisory board will provide strategic guidance. Acting on behalf of the Common Good Cyber Secretariat, the Global Cyber Alliance will lead the Fund’s Strategic Advisory Committee and, with the other Secretariat members, engage in educational advocacy and outreach within the broader cybersecurity ecosystem.
The Common Good Cyber Fund is a global commitment to safeguard the digital frontlines, enabling local resilience and long-term digital sustainability. By supporting nonprofits advancing cybersecurity through tools, solutions, and platforms, the Fund builds a safer Internet that works for everyone, everywhere.
The Internet Society and the Global Cyber Alliance are finalizing the Fund’s legal and logistical framework. More information about the funding will be shared in the coming months.
On 27 May 2025, the Oversight Board overturned Meta’s decision to leave up content targeting one of Peru’s leading human rights defenders:
Summary
The Oversight Board overturns Meta’s decision to leave up content targeting one of Peru’s leading human rights defenders. Restrictions on fundamental freedoms, such as the right to assembly and association, are increasing in Peru, with non-governmental organizations (NGOs) among those impacted. Containing an image of the defender that has been altered, likely with AI, to show blood dripping down her face, the post was shared by a member of La Resistencia. This group targets journalists, NGOs, human rights activists and institutions in Peru with disinformation, intimidation and violence. Taken in its whole context, this post qualifies as a “veiled threat” under the Violence and Incitement policy. As this case reveals potential underenforcement of veiled or coded threats on Meta’s platforms, the Board makes two related recommendations.
……
The Oversight Board’s Decision
The Oversight Board overturns Meta’s decision to leave up the content. The Board also recommends that Meta:
Clarify that “coded statements where the method of violence is not clearly articulated” are prohibited in written, visual and verbal form, under the Violence and Incitement Community Standard.
Produce an annual accuracy assessment on potential veiled threats, including a specific focus on content containing threats against human rights defenders that incorrectly remains up on the platform and instances of political speech incorrectly being taken down.
Sam Gregory delivered the Spring 2025 Gruber Distinguished Lecture on Global Justice on March 24, 2025, at 4:30 pm at Yale Law School. The lecture was co-moderated by his faculty hosts, Binger Clinical Professor Emeritus of Human Rights Jim Silk ’89 and David Simon, assistant dean for Graduate Education, senior lecturer in Global Affairs and director of the Genocide Studies Program at Yale University. Gregory is the executive director of WITNESS, a human rights nonprofit organization that empowers individuals and communities to use technology to document human rights abuses and advocate for justice. He is an internationally recognized expert on using digital media and smartphone witnessing to defend and protect human rights. With over two decades of experience in the intersection of technology, media, and human rights, Gregory has become a leading figure in the field of digital advocacy. He previously launched the “Prepare, Don’t Panic” initiative in 2018 to prompt concerted, effective, and context-sensitive policy responses to deepfakes and deceptive AI issues worldwide. He focuses on leveraging emerging solutions like authenticity infrastructure, trustworthy audiovisual witnessing, and livestreamed/co-present storytelling to address misinformation, media manipulation, and rising authoritarianism.
Gregory’s lecture, entitled “Fortifying Truth, Trust and Evidence in the Face of Artificial Intelligence and Emerging Technology,” focused on the challenges that artificial intelligence poses to truth, trust, and human rights advocacy. Generative AI’s rapid development and impact on how media is made, edited, and distributed affects how digital technology can be used to expose human rights violations and defend human rights. Gregory considered how photos and videos – essential tools for human rights documentation, evidence, and storytelling – are increasingly distrusted in an era of widespread skepticism and technological advancements that enable deepfakes and AI-generated content. AI can not only create false memories, but also “acts as a powerful conduit for plausible deniability.” Gregory discussed AI’s impact on the ability to believe and trust human rights voices and its role in restructuring the information ecosystem. The escalating burden of proof for human rights activists and the overwhelming volume of digital content underscore how AI can both aid and hinder accountability efforts.
In the face of these concerns, Gregory emphasized the need for human rights defenders to work proactively to shape AI systems. He stressed that AI requires a foundational, systemic architecture that ensures information systems serve, rather than undermine, human rights work. Gregory reflected that “at the fundamental (level), this is work enabled by technology, but it’s not about technology.” Digital technologies provide new mechanisms for exposing violence and human rights abuse; the abuse itself has not changed. He also pointed to the need to invest in robust community archives to protect the integrity of human rights evidence against false memories. Stressing the importance of epistemic justice, digital media literacy, and equitable access to technology and technological knowledge, Gregory discussed WITNESS’ work in organizing for digital media literacy and access in human rights digital witnessing, particularly in response to generative AI. One example he highlighted was training individuals to film audiovisual witnessing videos in ways that are difficult for AI to replicate.
As the floor opened to questions, Gregory pointed to “authenticity infrastructure” as one building block to verify content and maintain truth. Instead of treating information as a binary between AI and not AI, it is necessary to understand the entire “recipe” of how information is created, locating it along the continuum of how AI permeates modern communication. AI must be understood, not disregarded. This new digital territory will only become more relevant in human rights work, Gregory maintained. The discussion also covered regulatory challenges, courts’ struggles with AI-generated and audiovisual evidence at large, the importance of AI-infused media literacy, and the necessity of strong civil society institutions in the face of corporate media control. A recording of the lecture is available here.
The international conference ‘Artificial Intelligence and Human Rights: Opportunities, Risks, and Visions for a Better Future’ gets under way in Doha today. Organised by the National Human Rights Committee (NHRC), the two-day event is being held in collaboration with the UN Development Programme (UNDP), the Office of the High Commissioner for Human Rights (OHCHR), the Global Alliance of National Human Rights Institutions (GANHRI), and Qatar’s Ministry of Communications and Information Technology (MCIT) and National Cyber Security Agency, along with other international entities active in the fields of digital tools and technology.
Chairperson of the NHRC Maryam bint Abdullah Al Attiyah said in a statement Monday that the conference discusses one of the most prominent human rights issues of our time, one that is becoming increasingly important with the tremendous and growing progress in the field of artificial intelligence, which many human rights activists around the world fear will impact the rights of many individuals worldwide.
She added that the developments in AI observed every day require the establishment of a legal framework that governs the rights of every individual, whether related to privacy or other rights. The framework must also regulate and control the technologies developed by companies, ensuring that rights are not infringed upon and that the development of AI technologies does not become a pursuit of financial gain that neglects potential infringements on the rights of individuals and communities.
She emphasised that the conference aims to discuss the impact of AI on human rights, not only limiting itself to the challenges it poses to the lives of individuals, but also extending to identifying the opportunities it presents to human rights specialists around the world. She noted that the coming period must witness a deep focus on this area, which is evolving by the hour.
The conference is expected to bring together around 800 partners from around the world to discuss the future of globalisation. Target attendees include government officials, policymakers, AI and technology experts, human rights defenders and activists, legal professionals, AI ethics specialists, civil society representatives, academics and researchers, international organisations, private sector companies, and technology developers.
The conference is built around 12 core themes and key topics. It focuses on the foundations of artificial intelligence, including fundamental concepts such as machine learning and natural language processing. It also addresses AI and privacy, including its impact on personal data, surveillance, and privacy rights. Other themes include bias and discrimination, with an emphasis on addressing algorithmic bias and ensuring fairness, as well as freedom of expression and the role of AI in content moderation, censorship, and the protection of free speech.
The conference aims to explore the impact of AI on human rights and fundamental freedoms, analyse the opportunities and risks associated with AI from a human rights perspective, present best practices and standards for the ethical use of AI, and engage with policymakers, technology experts, civil society, and the private sector to foster multi-stakeholder dialogue. It also seeks to propose actionable policy and legal framework recommendations to ensure that AI development aligns with human rights principles.
Participating experts will address the legal and ethical frameworks, laws, policies, and ethical standards for the responsible use of artificial intelligence. They will also explore the theme of “AI and Security,” including issues related to militarisation, armed conflicts, and the protection of human rights. Additionally, the conference will examine AI and democracy, focusing on the role of AI in shaping democratic institutions and promoting inclusive participation.
Conference participants will also discuss artificial intelligence and the future of media from a human rights-based perspective, with a focus on both risks and innovation. The conference will further examine the transformations brought about by AI in employment and job opportunities, its impact on labor rights and economic inequality, as well as the associated challenges and prospects.
As part of its ongoing commitment to employing technology in service of humanity and supporting the ethical use of emerging technologies, the Ministry of Communications and Information Technology (MCIT) is also partnering in organising the conference.
‘New technology provides human rights defenders with tools to organize, spread information, and reach people. At the same time, many experience digital surveillance, online violence, and harassment. It is important that these issues are discussed in the UN, and therefore, Norway is presenting a resolution in the UN Human Rights Council this spring’, says Foreign Minister Espen Barth Eide.
The resolution emphasizes that human rights are universal and apply in the same manner online as offline. It advocates for increased protection against digital threats and surveillance and ensures that new technology is not used to restrict freedom of expression, freedom of assembly, or the right to privacy. The resolution also highlights the need for dialogue with tech companies to discuss the challenges faced by human rights defenders in the digital space.
‘We want to gather broad support for the resolution and secure clear commitments from the international community to protect those who fight for our shared rights – also in the digital sphere’, says Eide.
Norway has a long tradition of advocating for the protection of human rights defenders. The new resolution is the result of close dialogue with civil society actors, technology experts, and other countries. The resolution will be presented and adopted at the UN this spring. Moving forward, Norway will work to gain as much international support as possible for the resolution’s important message.
New and emerging technologies have become a fundamental tool for human rights defenders to conduct their activities, boost solidarity among movements and reach different audiences. Unfortunately, these positive aspects have been overshadowed by negative impacts on the enjoyment of human rights, including increased threats and risks for human rights defenders. While we see the increased negative impacts of new technologies, we do not see that governments are addressing these impacts comprehensively.
Furthermore, States and their law enforcement agencies (often with the help of non-State actors, including business enterprises) take down or censor the information shared by defenders on social media and other platforms. In other cases, we have seen that businesses are also complicit in attacks and violations against human rights defenders.
At the same time, lack of access to the internet and the digital divides that persist in many countries and regions, or that affect specific groups, limit the potential of digital technologies for activism and movement building, as well as access to information.
The Declaration on Human Rights Defenders, adopted in 1998, does not consider these challenges, which have largely arisen with the rapid evolution of technology. In this context, and as part of activities to mark the 25th anniversary of the UN Declaration on human rights defenders, a coalition of NGOs launched a consultative initiative to identify the key issues faced by human rights defenders that are insufficiently addressed by the UN Declaration, including in the area of digital and new technologies. These issues are also reflected in the open letter to States on the draft resolution on human rights defenders that will be considered during HRC58.
This side event will be an opportunity to continue discussing the reality and the challenges that human rights defenders face in the context of new and emerging technologies. It will also be an opportunity to hear directly from those who work with defenders in the field of digital rights on a daily basis, while highlighting their specific protection needs. Finally, the event will help remind States of the range of obligations in this field, which can in turn inform the consultations on the HRC58 resolution on human rights defenders.
Panelists:
Opening remarks: Permanent Mission of Norway
Speakers:
Carla Vitoria – Association for Progressive Communications
Human rights defender from Kenya regarding the Safaricom case (via video message)
Woman human rights defender from Colombia regarding use of new technologies during peaceful protests
Human rights defender from Myanmar regarding online incitement to violence against Rohingya people
Video montage of civil society priorities for the human rights defender resolution at HRC58
Moderator: Ulises Quero, Programme Manager, Land, Environment and Business & Human Rights (ISHR)
This event is co-sponsored by Access Now, Asian Forum for Human Rights & Development (FORUM-ASIA), Association for Progressive Communications (APC), Business and Human Rights Resource Centre (BHRRC), DefendDefenders (East and Horn of Africa HRD Project), Huridocs, Gulf Centre for Human Rights (GCHR), International Lesbian, Gay, Bisexual, Trans and Intersex Association (ILGA World), International Service for Human Rights (ISHR), Peace Brigades International, Privacy International, Protection International, Regional Coalition of WHRDs in Southwest Asia and North Africa (WHRD MENA Coalition).
Each year, Mozilla highlights the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through its Rise 25 Awards. On 13 May 2024, it was the turn of Raphael Mimoun, a builder dedicated to making tools that empower journalists and human rights defenders. Aron Yohannes talked with Raphael about the launch of his app, Tella, combatting misinformation online, the future of social media platforms and more.
Raphael Mimoun: So I never worked in tech per se and only developed a passion for technology as I was working in human rights. It was really a time when, basically, the power of technology to support movements and to head movements around the world was kind of getting fully understood. You had the Arab Spring, you had Occupy Wall Street, you had all of these movements for social justice, for democracy, for human rights, that were very much kind of spread through technology, right? Technology played a very, very important role. But just after that, it was kind of like a hangover where we all realized, “OK, it’s not just all good and fine.” You also have the flip side, which is government spying on the citizens, identifying citizens through social media, through hacking, and so on and so forth — harassing them, repressing them online, but translating into offline violence, repression, and so on. And so I think that was the moment where I was like, “OK, there is something that needs to be done around technology,” specifically for those people who are on the front lines because if we just treat it as a tool — one of those neutral tools — we end up getting very vulnerable to violence, and it can be from the state, it can also be from online mobs, armed groups, all sort of things.
There’s so much misinformation out there now that it’s much harder to tell the difference between what’s real and what’s fake. Twitter was such a reliable tool of information before, but that’s changed. Do you think any of these other platforms can help make up for so much of the misinformation that is out there?
I think we all feel the weight of that loss of losing Twitter. Twitter was always a large corporation, partially owned by a billionaire. It was never kind of a community tool, but there was still an ethos, right? Like a philosophy, or the values of the platform were still very much like community-oriented, right? It was that place for activists and human rights defenders and journalists and communities in general to voice their opinions. So I think that loss was very hard on all of us.
I see a lot of misinformation on Instagram as well. There is very little moderation there. It’s also all visual, so if you want traction, you’re going to try to put something that is very spectacular that is very eye catchy, and so I think that leads to even more misinformation.
I am pretty optimistic about some of the alternatives that have popped up since Twitter’s downfall. Mastodon actually blew up after Twitter, but it’s much older — I think it’s 10 years old by now. And there’s Bluesky. So I think those two are building up, and they offer spaces that are much more decentralized with much more autonomy and agency to users. You are more likely to be able to customize your feeds. You are more likely to have tools for your own safety online, right? All of those different things that I feel like you could never get on Threads, on Instagram or on Twitter, or anything like that. I’m hoping it’s actually going to be able to recreate the community that is very much what Twitter was. It’s never going to be exactly the same thing, but I’m hoping we will get there. And I think the fact that it is decentralized, open source and with very much a philosophy of agency and autonomy is going to lead us to a place where these social networks can’t actually be taken over by a power hungry billionaire.
What do you think is the biggest challenge that we face in the world this year on and offline, and then how do you think we can combat it?
I don’t know if that’s the biggest challenge, but one of the really big challenges that we’re seeing is how the digital is meeting real life and how people who are active online or on the phone or the computer are getting repressed for that work in real life. So we developed an app called Tella, which encrypts and hides files on your phone, right? So you take a photo or a video of a demonstration or police violence, or whatever it is, and then if the police try to catch you and grab your phone to delete it, they won’t be able to find it, or at least it will be much more difficult to find it. Or it would be uploaded already. I don’t know if that’s the biggest challenge online at the moment, but one of the big things we’re seeing is that it’s becoming completely normalized to grab someone’s phone or check someone’s computer at the airport, or at the border, in the street and go through it without any form of accountability. People have no idea what the regulations are, what the rules are, what’s allowed, what’s not allowed. And when they abuse those powers, is there any recourse? In most places in the world, at least where we are working, there is definitely no recourse. And so I think that connection between thinking you’re just taking a photo for social media but actually the repercussion is so real because you’re going to have someone take your phone, and maybe they’re going to delete the photo, or maybe they’re going to detain you. Or maybe they’re going to beat you up — all of those different things. I think this is one of the big challenges that we’re seeing at the moment, and something that isn’t traditionally thought of as an internet issue or an online digital rights issue because it’s someone taking a physical device and looking through it. It often gets overlooked, and then we don’t have much advocacy around it.
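To make the “encrypt and hide” idea concrete, here is a minimal, hypothetical Python sketch of client-side encryption at rest, the general technique Mimoun describes. It is not Tella’s actual code: the function names and the innocuous filename are illustrative only, and it assumes the third-party `cryptography` library.

```python
# Minimal sketch of "encrypt before storing" for a captured photo.
# Illustrative only; not Tella's implementation. Assumes the third-party
# `cryptography` package (pip install cryptography).
from pathlib import Path
from cryptography.fernet import Fernet

def store_capture(photo_bytes: bytes, vault_dir: Path, key: bytes) -> Path:
    """Encrypt a captured photo and write it under a non-descriptive name."""
    token = Fernet(key).encrypt(photo_bytes)   # authenticated symmetric encryption
    out = vault_dir / "cache_0001.bin"         # no .jpg extension, nothing to spot at a glance
    out.write_bytes(token)
    return out

def read_capture(path: Path, key: bytes) -> bytes:
    """Decrypt a stored capture; raises an error if the ciphertext was tampered with."""
    return Fernet(key).decrypt(path.read_bytes())

if __name__ == "__main__":
    key = Fernet.generate_key()   # a real app would derive this from a user secret, never store it in plaintext
    stored = store_capture(b"\xff\xd8 fake jpeg bytes", Path("."), key)
    assert read_capture(stored, key).startswith(b"\xff\xd8")
```

An app built along these lines can also queue the ciphertext for upload to a trusted server, so that even a seized or wiped device does not mean lost documentation, which is the “it would be uploaded already” point above.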
What do you think is one action everybody can take to make the world and our lives online a little bit better?
I think social media has a lot of negative consequences for everyone’s mental health and many other things, but for people who are active and who want to be active, consider social networks that are open source, privacy-friendly and decentralized. Bluesky and the Fediverse — including Mastodon — are examples because I think it’s our responsibility to kind of build up a community there, so we can move away from those social media platforms that are owned by either billionaires or massive corporations, who only want to extract value from us and who spy on us and who censor us. And I feel like if everyone committed to being active on those social media platforms — one way of doing that is just having an account, and whatever you post on one, you just post on the other — I feel like that’s one thing that can make a big difference in the long run.
We started Rise25 to celebrate Mozilla’s 25th anniversary. What do you hope that people are celebrating in the next 25 years?
I was talking a little bit earlier about how we are building a culture that is more privacy-centric, like people are becoming aware, becoming wary about all these things happening to their data, their identity, and so on. And I do think we are at a turning point in terms of the technology that’s available to us, the practices and what we need as users to maintain our privacy and our security. I feel like in honestly not even 25, I think in 10 years, if things go well — which it’s hard to know in this field — and if we keep on building what we already are building, I can see how we will have an internet that is a lot more privacy-centric, where communications are private by default. Where end-to-end encryption is ubiquitous in our communication, in our emailing. Where social media isn’t extractive and people have actual ownership and agency in the social networks they use. Where data mining is no longer a thing. I feel like overall, I can see how the infrastructure is now getting built, and that in 10, 15 or 25 years, we will be in a place where we can use the internet without having to constantly watch over our shoulder to see if someone is spying on us or seeing who has access and all of those things.
Lastly, what gives you hope about the future of our world?
That people are not getting complacent and that it is always people who are standing up to fight back. We saw it at Google, with people standing up as part of the No Tech for Apartheid coalition and losing their jobs. We’re seeing it on university campuses around the country. We’re seeing it on the streets. People fight back. That’s where any change has ever come from: the bottom up. I think now, more than ever, people are willing to put something on the line to make sure that they defend their rights. So I think that really gives me hope.
Nikole Yanez is a computer scientist by training, and a human rights defender from Honduras. She is passionate about feminism, the impact of the internet and protecting activists. She was first drawn to human rights through her work as a reporter with a local community radio station. After surviving the coup d’état in Honduras in 2009, Nikole broadened her approach to focus her activism on technology. When she applied for the Digital Forensics Fellowship with the Amnesty Tech Security Lab in 2022, she was looking to learn more about cybersecurity and apply what she learnt with the organizations and collectives she works with regularly.
She highlighted her commitment to fostering a network of tech-savvy communities across Latin America in an interview with Elina Castillo, Amnesty Tech’s Advocacy and Policy Advisor:
I grew up in Honduras, where I lived through the coup d’état, which took place in 2009. It was a difficult time where rights were non-existent, and people were constantly afraid. I thought it was something you only read about in history books, but it was happening in front of my eyes. I felt myself just trying to survive, but as time went by it made me stronger and want to fight for justice. Despite the difficulties, people in my community remained hopeful and we created a community radio station, which broadcast stories about everyday people and their lives with the aim of informing people about their human rights. I was a reporter, developing stories about individual people and their fight for their rights. From there, I found a passion for working with technology and it inspired me to train to become a computer scientist.
I am always looking for ways to connect technology with activism, and specifically to support women and Indigenous people in their struggles. As much as technology presents risks for human rights defenders, it also offers opportunities for us to better protect ourselves and strengthen our movements. Technology can bring more visibility to our movements, and it can empower our work by allowing us to connect with other people and learn new strategies.
Is there one moment where you realized how to connect what you’ve been doing with feminism with technology?
In my work, my perspective as a feminist helps me centre the experiences and needs of marginalised people for trainings and outreach. It is important for me to publicly identify as an Afrofeminist in a society where there is impunity for gendered and racist violence that occurs every day. In Honduras we need to put our energy into supporting these communities whose rights are most violated, and whose stories are invisible.
For example, in 2006, I was working with a Union to install the Ubuntu operating system (an open-source operating system) on their computers. We realized that the unionists didn’t know how to use a computer, so we created a space for digital literacy and learning about how to use a computer at the same time. This became not just a teaching exercise, but an exercise for me to figure out how to connect these tools to what people are interested in. Something clicked for me in this moment, and this experience helped solidify my approach to working on technology and human rights.
There are not many women working in technology and human rights. I don’t want to be one of the only women, so my goal is to see more women colleagues working on technical issues. I want to make it possible for women to work in this field. I also want to motivate more women to create change within the intersection of technology and human rights. Using a feminist perspective and approach, we ask big questions about how we are doing the work, what our approach needs to be, and who we need to work with.
For me, building a feminist internet means building an internet for everyone. This means creating a space where we do not reproduce sexist violence, where we find a community that responds to the people, to the groups, and to the organizations that fight for human rights. This includes involving women and marginalised people in building the infrastructure, in the configuration of servers, and in the development of protocols for how we use all these tools.
In Honduras, there aren’t many people trained in digital forensics analysis, yet there are organizations that are always seeking me out to help check their phones. The fellowship helped me learn about forensic analysis on phones and computers and tied the learning to what I’m actually doing in my area with different organizations and women’s rights defenders. The fellowship was practical and rooted in the experience of civil society organizations.
How do you explain the importance of digital forensics?

Well first, it’s incredibly relevant for women’s rights defenders. Everyone wants to know if their phone has been hacked. That’s the first thing they ask: “Can you actually know whether your phone has been hacked?” and “How do I know? Can you do it for me? How?” Those are the things that come up in my trainings and conversations.
I like to help people to think about protection as a process, something ongoing, because we use technology all day long. There are organizations and people that take years to understand that. So, it’s not something that can be achieved in a single conversation. Sometimes a lot of things need to happen, including bad things, before people really take this topic seriously…
I try to use very basic tools when I’m doing digital security support, to say you can do this on whatever device you’re on, this is a prevention tool. It’s not just applying technical knowledge, it’s also a process of explaining, training, showing how this work is not just for hackers or people who know a lot about computers.
One of the challenges is to spread awareness about cybersecurity among Indigenous and grassroots organizations, which aren’t hyper-connected and don’t think that digital forensics work is relevant to them. Sometimes what we do is completely disconnected from their lives, and they ask us: “But what are you doing?” So, our job is to understand their questions and where they are coming from and ground our knowledge-sharing in what people are actually doing.
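As a rough illustration of what such a forensic check involves, the sketch below hashes the files in a device backup and matches them against a list of known-bad indicators of compromise (IOCs). This is a simplified, hypothetical example of the approach behind open-source tools like Amnesty Tech’s Mobile Verification Toolkit, not a substitute for them; the IOC value and backup path are placeholders.

```python
# Simplified sketch of IOC matching, the core idea behind mobile forensics
# tools such as the Mobile Verification Toolkit (MVT). Hypothetical example:
# real tools parse platform-specific artifacts and use curated, published IOC feeds.
import hashlib
from pathlib import Path

# Placeholder: SHA-256 hashes of known-malicious files would normally be
# loaded from a published indicator feed, not hard-coded.
KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_backup(backup_dir: Path) -> list[Path]:
    """Return files in a device backup whose hashes match known indicators."""
    return [p for p in backup_dir.rglob("*")
            if p.is_file() and sha256(p) in KNOWN_BAD_SHA256]

if __name__ == "__main__":
    for hit in scan_backup(Path("./device_backup")):   # placeholder path
        print(f"Possible indicator of compromise: {hit}")
```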
To someone reading this piece and saying, oh, this kind of resonates with me, where do I start, what would your recommendation be?
If you are a human rights defender, I would recommend that you share your knowledge with your collective. You can teach them the importance of knowing about digital security measures and practicing them, as well as encourage training to prevent digital attacks, because, in the end, forensic analysis is a reaction to something that has already happened.
We can take a lot of preventive measures to ensure the smallest possible impact. That’s the best way to start. And it’s crucial to stay informed, to keep reading, to stay up to date with the news and build community.
If there are girls or gender non-conforming people reading this who are interested in technical issues, it doesn’t matter if you don’t have a degree or a formal education, as long as you like it. Most hackers I’ve met become hackers because they dive into a subject, they like it and they’re passionate about it.
Powerful governments cast humanity into an era devoid of effective international rule of law, with civilians in conflicts paying the highest price
Rapidly changing artificial intelligence is left to create fertile ground for racism, discrimination and division in landmark year for public elections
Standing against these abuses, people the world over mobilized in unprecedented numbers, demanding human rights protection and respect for our common humanity
The world is reaping a harvest of terrifying consequences from escalating conflict and the near breakdown of international law, said Amnesty International as it launched its annual The State of the World’s Human Rights report, delivering an assessment of human rights in 155 countries.
Amnesty International also warned that the breakdown of the rule of law is likely to accelerate with rapid advancement in artificial intelligence (AI) which, coupled with the dominance of Big Tech, risks a “supercharging” of human rights violations if regulation continues to lag behind advances.
“Amnesty International’s report paints a dismal picture of alarming human rights repression and prolific international rule-breaking, all in the midst of deepening global inequality, superpowers vying for supremacy and an escalating climate crisis,” said Amnesty International’s Secretary General, Agnès Callamard.
“Israel’s flagrant disregard for international law is compounded by the failures of its allies to stop the indescribable civilian bloodshed meted out in Gaza. Many of those allies were the very architects of that post-World War Two system of law. Alongside Russia’s ongoing aggression against Ukraine, the growing number of armed conflicts, and massive human rights violations witnessed, for example, in Sudan, Ethiopia and Myanmar – the global rule-based order is at risk of decimation.”
Lawlessness, discrimination and impunity in conflicts and elsewhere have been enabled by unchecked use of new and familiar technologies which are now routinely weaponized by military, political and corporate actors. Big Tech’s platforms have stoked conflict. Spyware and mass surveillance tools are used to encroach on fundamental rights and freedoms, while governments are deploying automated tools targeting the most marginalized groups in society.
“In an increasingly precarious world, unregulated proliferation and deployment of technologies such as generative AI, facial recognition and spyware are poised to be a pernicious foe – scaling up and supercharging violations of international law and human rights to exceptional levels,” said Agnès Callamard.
“During a landmark year of elections and in the face of the increasingly powerful anti-regulation lobby driven and financed by Big Tech actors, these rogue and unregulated technological advances pose an enormous threat to us all. They can be weaponized to discriminate, disinform and divide.”