Egyptian media reported on 22 September 2025 that President Abdel Fattah al-Sisi had issued a presidential pardon for the imprisoned British-Egyptian activist Alaa Abd el-Fattah. On 23 September the Guardian, HRW and others reported that he had indeed been released from jail after serving six years for sharing a Facebook post.
Early on Tuesday morning his campaign said in a statement that Abd el-Fattah had been released from Wadi Natrun prison and was now at his home in Cairo. “I can’t even describe what I feel,” his mother, Laila Soueif, said from her house in Giza as she stood next to her son surrounded by family and friends. “We’re happy, of course. But our greatest joy will come when there are no [political] prisoners in Egypt,” she added.
Alaa Abd el-Fattah stands next to his mother, Laila Soueif, and sister, Sanaa, at their home in Giza. Photograph: Mohamed Abd El Ghany/Reuters
Amnesty International’s Erika Guevara Rosas said the release was welcome but long overdue. “His pardon ends a grave injustice and is a testament to the tireless efforts of his family and lawyers, including his courageous mother Laila Soueif and activists all over the world who have been relentlessly demanding his release,” she said. Amr Magdi, senior Middle East and North Africa researcher at Human Rights Watch, said: “President Sisi’s pardon of the imprisoned Egyptian activist Alaa Abdel Fattah is long overdue good news. Though we celebrate his pardon…”
The campaign for Abd el-Fattah’s release was led by his family, including his mother, who was admitted to hospital in London twice after going on hunger strike to try to secure his release. The UK prime minister, Keir Starmer, is also known to have telephoned Sisi three times to lobby for Abd el-Fattah’s release. See: https://humanrightsdefenders.blog/2022/07/07/mona-seifs-letter-a-cry-for-help-for-alaa/
ISHR has launched a new report that summarises and assesses progress and challenges over the past decade in initiatives to protect human rights defenders in the context of business, covering the frameworks, guidance, initiatives and tools that have emerged at local, national and regional levels. The protection of human rights defenders in relation to business activities is vital.
Defenders play a crucial role in safeguarding human rights and environmental standards against adverse impacts of business operations globally. Despite their essential work, defenders frequently face severe risks, including threats, surveillance, legal and judicial harassment, and violence.
According to the Business and Human Rights Resource Centre (BHRRC), more than 6,400 attacks on defenders linked to business activities have been documented over the past decade, underscoring the urgency of addressing these challenges. While this situation is not new, and civil society organisations have consistently pushed for accountability for and prevention of these attacks, public awareness of the issue grew with early efforts to raise the visibility of defenders at the Human Rights Council, the adoption of key thematic resolutions, and the amplification of defenders’ voices in other forums such as the UN Forum on Business and Human Rights.
The report, ‘Business Frameworks and Actions to Support Human Rights Defenders: a Retrospective and Recommendations’, takes stock of the frameworks, tools and advocacy developed over the last decade to protect and support human rights defenders in the context of business activities and operations.
The report examines how various standards have been operationalised through company policies, investor guidance, multi-stakeholder initiatives, legal reforms, and sector-specific commitments. At the same time, it highlights that, despite these advancements, implementation by businesses remains inadequate, a critical gap that must be urgently addressed to ensure defenders can safely carry out their vital work protecting human rights and environmental justice. Drawing on case studies, civil society tracking tools, and policy analysis, the report identifies key barriers to effective protection and proposes targeted recommendations.
The Internet Society (ISOC) and the Global Cyber Alliance (GCA), on behalf of the Common Good Cyber secretariat, announced on 23 June 2025 the launch of the Common Good Cyber Fund, an initiative to strengthen global cybersecurity by supporting nonprofits that deliver core cybersecurity services protecting civil society actors and the Internet as a whole.
This first-of-its-kind effort to fund cybersecurity for the common good—for everyone, including those at the greatest risk—has the potential to fundamentally improve cybersecurity for billions of people around the world. The Common Good Cyber secretariat members working to address this challenge are: Global Cyber Alliance, Cyber Threat Alliance, CyberPeace Institute, Forum of Incident Response and Security Teams, Global Forum on Cyber Expertise, Institute for Security and Technology, and Shadowserver Foundation.
The Fund is a milestone in advancing Common Good Cyber, a global initiative led by the Global Cyber Alliance, to create sustainable funding models for the organizations and individuals working to keep the Internet safe.
Despite serving as a critical frontline defense for the security of the Internet, cybersecurity nonprofits remain severely underfunded—exposing millions of users, including journalists, human rights defenders, and other civil society groups. This underfunding also leaves the wider public exposed to increasingly frequent and sophisticated cyber threats.
“Common Good Cyber represents a pivotal step toward a stronger, more inclusive cybersecurity ecosystem. By increasing the resilience and long-term sustainability of nonprofits working in cybersecurity, improving access to trusted services for civil society organizations and human rights defenders, and encouraging greater adoption of best practices and security-by-design principles, the Common Good Cyber Fund ultimately helps protect and empower all Internet users,” said Philip Reitinger, President and CEO, Global Cyber Alliance.
The fund will support nonprofits that:
Maintain and secure core digital infrastructure, including DNS, routing, and threat intelligence systems for the public good; and
Deliver cybersecurity assistance to high-risk actors through training, rapid incident response, and free-to-use tools.
These future beneficiaries support the Internet by enabling secure operations and supplying global threat intelligence. They shield civil society from cyber threats through direct, expert intervention and elevate the security baseline for the entire ecosystem by supporting the “invisible infrastructure” on which civil society depends.
The Fund will operate through a collaborative structure. The Internet Society will manage the fund, and a representative and expert advisory board will provide strategic guidance. Acting on behalf of the Common Good Cyber Secretariat, the Global Cyber Alliance will lead the Fund’s Strategic Advisory Committee and, with the other Secretariat members, engage in educational advocacy and outreach within the broader cybersecurity ecosystem.
The Common Good Cyber Fund is a global commitment to safeguard the digital frontlines, enabling local resilience and long-term digital sustainability. By supporting nonprofits advancing cybersecurity through tools, solutions, and platforms, the Fund builds a safer Internet that works for everyone, everywhere.
The Internet Society and the Global Cyber Alliance are finalizing the Fund’s legal and logistical framework. More information about the funding will be shared in the coming months.
On 27 May 2025, the Oversight Board overturned Meta’s decision to leave up content targeting one of Peru’s leading human rights defenders:
Summary
The Oversight Board overturns Meta’s decision to leave up content targeting one of Peru’s leading human rights defenders. Restrictions on fundamental freedoms, such as the rights to assembly and association, are increasing in Peru, with non-governmental organizations (NGOs) among those impacted. The post, shared by a member of La Resistencia, contains an image of the defender that has been altered, likely with AI, to show blood dripping down her face. This group targets journalists, NGOs, human rights activists and institutions in Peru with disinformation, intimidation and violence. Taken in its whole context, the post qualifies as a “veiled threat” under the Violence and Incitement policy. As this case reveals potential underenforcement of veiled or coded threats on Meta’s platforms, the Board makes two related recommendations.
[…]
The Oversight Board’s Decision
The Oversight Board overturns Meta’s decision to leave up the content. The Board also recommends that Meta:
Clarify that “coded statements where the method of violence is not clearly articulated” are prohibited in written, visual and verbal form, under the Violence and Incitement Community Standard.
Produce an annual accuracy assessment on potential veiled threats, including a specific focus on content containing threats against human rights defenders that incorrectly remains up on the platform and instances of political speech incorrectly being taken down.
Sam Gregory delivered the Spring 2025 Gruber Distinguished Lecture on Global Justice on March 24, 2025, at 4:30 pm at Yale Law School. The lecture was co-moderated by his faculty hosts, Binger Clinical Professor Emeritus of Human Rights Jim Silk ’89 and David Simon, assistant dean for Graduate Education, senior lecturer in Global Affairs and director of the Genocide Studies Program at Yale University. Gregory is the executive director of WITNESS, a human rights nonprofit organization that empowers individuals and communities to use technology to document human rights abuses and advocate for justice. He is an internationally recognized expert on using digital media and smartphone witnessing to defend and protect human rights. With over two decades of experience in the intersection of technology, media, and human rights, Gregory has become a leading figure in the field of digital advocacy. He previously launched the “Prepare, Don’t Panic” initiative in 2018 to prompt concerted, effective, and context-sensitive policy responses to deepfakes and deceptive AI issues worldwide. He focuses on leveraging emerging solutions like authenticity infrastructure, trustworthy audiovisual witnessing, and livestreamed/co-present storytelling to address misinformation, media manipulation, and rising authoritarianism.
Gregory’s lecture, entitled “Fortifying Truth, Trust and Evidence in the Face of Artificial Intelligence and Emerging Technology,” focused on the challenges that artificial intelligence poses to truth, trust, and human rights advocacy. Generative AI’s rapid development and impact on how media is made, edited, and distributed affects how digital technology can be used to expose human rights violations and defend human rights. Gregory considered how photos and videos – essential tools for human rights documentation, evidence, and storytelling – are increasingly distrusted in an era of widespread skepticism and technological advancements that enable deepfakes and AI-generated content. AI can not only create false memories, but also “acts as a powerful conduit for plausible deniability.” Gregory discussed AI’s impact on the ability to believe and trust human rights voices and its role in restructuring the information ecosystem. The escalating burden of proof for human rights activists and the overwhelming volume of digital content underscore how AI can both aid and hinder accountability efforts.
In the face of these concerns, Gregory emphasized the need for human rights defenders to work proactively to shape AI systems. He stressed that AI requires a foundational, systemic architecture that ensures information systems serve, rather than undermine, human rights work. Gregory reflected that “at the fundamental (level), this is work enabled by technology, but it’s not about technology.” Digital technologies provide new mechanisms for exposing violence and human rights abuse; the abuse itself has not changed. He also pointed to the need to invest in robust community archives to protect the integrity of human rights evidence against false memories. Stressing the importance of epistemic justice, digital media literacy, and equitable access to technology and technological knowledge, Gregory discussed WITNESS’ work in organizing for digital media literacy and access in human rights digital witnessing, particularly in response to generative AI. One example he highlighted was training individuals to film audiovisual witnessing videos in ways that are difficult for AI to replicate.
As the floor opened to questions, Gregory pointed to “authenticity infrastructure” as one building block to verify content and maintain truth. Instead of treating information as a binary between AI and not AI, it is necessary to understand the entire “recipe” of how information is created, locating it along the continuum of how AI permeates modern communication. AI must be understood, not disregarded. This new digital territory will only become more relevant in human rights work, Gregory maintained. The discussion also covered regulatory challenges, courts’ struggles with AI-generated and audiovisual evidence at large, the importance of AI-infused media literacy, and the necessity of strong civil society institutions in the face of corporate media control. A recording of the lecture is available here.
Chairperson of the NHRC Maryam bint Abdullah Al Attiyah
The international conference ‘Artificial Intelligence and Human Rights: Opportunities, Risks, and Visions for a Better Future’ gets under way in Doha today. Organised by the National Human Rights Committee (NHRC), the two-day event is being held in collaboration with the UN Development Programme (UNDP), the Office of the High Commissioner for Human Rights (OHCHR), the Global Alliance of National Human Rights Institutions (GANHRI), and Qatar’s Ministry of Communications and Information Technology (MCIT) and National Cyber Security Agency, along with other international entities active in the fields of digital tools and technology.
Chairperson of the NHRC Maryam bint Abdullah Al Attiyah said in a statement Monday that the conference addresses one of the most prominent human rights issues of our time, one that is becoming increasingly important with the tremendous and growing progress in the field of artificial intelligence, which many human rights activists around the world fear will affect the rights of individuals worldwide.
She added that the developments in AI observed every day require the establishment of a legal framework governing the rights of every individual, whether related to privacy or other rights. The framework must also regulate and control the technologies developed by companies, ensuring that rights are not infringed upon and that the development of AI technologies does not become synonymous with the pursuit of financial gain at the expense of the rights of individuals and communities.
She emphasised that the conference aims to discuss the impact of AI on human rights, not only limiting itself to the challenges it poses to the lives of individuals, but also extending to identifying the opportunities it presents to human rights specialists around the world. She noted that the coming period must witness a deep focus on this area, which is evolving by the hour.
The conference is expected to bring together around 800 partners from around the world to discuss the future of globalisation. Target attendees include government officials, policymakers, AI and technology experts, human rights defenders and activists, legal professionals, AI ethics specialists, civil society representatives, academics and researchers, international organisations, private sector companies, and technology developers.
The conference is built around 12 core themes and key topics. It focuses on the foundations of artificial intelligence, including fundamental concepts such as machine learning and natural language processing. It also addresses AI and privacy, including its impact on personal data, surveillance, and privacy rights. Other themes include bias and discrimination, with an emphasis on addressing algorithmic bias and ensuring fairness, as well as freedom of expression and the role of AI in content moderation, censorship, and the protection of free speech.
The International conference aims to explore the impact of AI on human rights and fundamental freedoms, analyse the opportunities and risks associated with AI from a human rights perspective, present best practices and standards for the ethical use of AI, and engage with policymakers, technology experts, civil society, and the private sector to foster multi-stakeholder dialogue. It also seeks to propose actionable policy and legal framework recommendations to ensure that AI development aligns with human rights principles.
Participating experts will address the legal and ethical frameworks, laws, policies, and ethical standards for the responsible use of artificial intelligence. They will also explore the theme of “AI and Security,” including issues related to militarisation, armed conflicts, and the protection of human rights. Additionally, the conference will examine AI and democracy, focusing on the role of AI in shaping democratic institutions and promoting inclusive participation.
Conference participants will also discuss artificial intelligence and the future of media from a human rights-based perspective, with a focus on both risks and innovation. The conference will further examine the transformations brought about by AI in employment and job opportunities, its impact on labor rights and economic inequality, as well as the associated challenges and prospects.
As part of its ongoing commitment to employing technology in service of humanity and supporting the ethical use of emerging technologies, the Ministry of Communications and Information Technology (MCIT) is also partnering in organising the conference.
On 23 December 2024, the Boston-based Patrick J. McGovern Foundation announced grants totaling $73.5 million in 2024 in support of human-centered AI.
Awarded to 144 nonprofit, academic, and governmental organizations in 11 countries, the grants will support the development and delivery of AI solutions built for long-term societal benefit and the creation of institutions designed to address the opportunities and challenges this emerging era presents. Grants will support organizations leveraging data science and AI to drive tangible change in a variety of areas with urgency, including climate change, human rights, media and journalism, crisis response, digital literacy, and health equity.
Gifts include $200,000 to MIT Solve to support the 2025 AI for Humanity Prize; $364,000 to Clear Global to enable scalable, multilingual, voice-powered communication and information channels for crisis-affected communities; $1.25 million to the Aspen Institute to enhance public understanding and policy discourse around AI; and $1.5 million to the United Nations Educational, Scientific and Cultural Organization (UNESCO) to advance ethical AI governance through civil society networks, policy frameworks, and knowledge resources.
Amnesty International to support Amnesty’s Algorithmic Accountability Lab to mobilize and empower civil society to evaluate AI systems and pursue accountability for AI-driven harms ($750,000)
HURIDOCS to use machine learning to enhance human rights data management and advocacy ($400,000)
“This is not a moment to react; it’s a moment to lead,” said McGovern Foundation president Vilas Dhar. “We believe that by investing in AI solutions grounded in human values, we can harness technology’s immense potential to benefit communities and individuals alike. AI can amplify human dignity, protect the vulnerable, drive global prosperity, and become a force for good.”
Powerful governments cast humanity into an era devoid of effective international rule of law, with civilians in conflicts paying the highest price
Rapidly changing artificial intelligence is left to create fertile ground for racism, discrimination and division in landmark year for public elections
Standing against these abuses, people the world over mobilized in unprecedented numbers, demanding human rights protection and respect for our common humanity
The world is reaping a harvest of terrifying consequences from escalating conflict and the near breakdown of international law, said Amnesty International as it launched its annual report, The State of the World’s Human Rights, delivering an assessment of human rights in 155 countries.
Amnesty International also warned that the breakdown of the rule of law is likely to accelerate with rapid advancement in artificial intelligence (AI) which, coupled with the dominance of Big Tech, risks a “supercharging” of human rights violations if regulation continues to lag behind advances.
“Amnesty International’s report paints a dismal picture of alarming human rights repression and prolific international rule-breaking, all in the midst of deepening global inequality, superpowers vying for supremacy and an escalating climate crisis,” said Amnesty International’s Secretary General, Agnès Callamard.
“Israel’s flagrant disregard for international law is compounded by the failures of its allies to stop the indescribable civilian bloodshed meted out in Gaza. Many of those allies were the very architects of that post-World War Two system of law. Alongside Russia’s ongoing aggression against Ukraine, the growing number of armed conflicts, and massive human rights violations witnessed, for example, in Sudan, Ethiopia and Myanmar – the global rule-based order is at risk of decimation.”
Lawlessness, discrimination and impunity in conflicts and elsewhere have been enabled by unchecked use of new and familiar technologies which are now routinely weaponized by military, political and corporate actors. Big Tech’s platforms have stoked conflict. Spyware and mass surveillance tools are used to encroach on fundamental rights and freedoms, while governments are deploying automated tools targeting the most marginalized groups in society.
“In an increasingly precarious world, unregulated proliferation and deployment of technologies such as generative AI, facial recognition and spyware are poised to be a pernicious foe – scaling up and supercharging violations of international law and human rights to exceptional levels,” said Agnès Callamard.
“During a landmark year of elections and in the face of the increasingly powerful anti-regulation lobby driven and financed by Big Tech actors, these rogue and unregulated technological advances pose an enormous threat to us all. They can be weaponized to discriminate, disinform and divide.”
Amnesty International’s report paints a dismal picture of alarming human rights repression and prolific international rule-breaking, all in the midst of deepening global inequality, superpowers vying for supremacy and an escalating climate crisis. Amnesty International’s Secretary General, Agnès Callamard
The New Indian Express of 22 March 2024 reports (based on Al Jazeera) that Prime Minister Narendra Modi’s government has approached a major Indian think tank to develop its own democracy ratings index, which could help it counter recent downgrades in rankings issued by international groups that New Delhi fears could affect the country’s credit rating. The Observer Research Foundation (ORF), which works closely with the Indian government on multiple initiatives, is preparing the ratings framework.
In June 2023, The Guardian reported, based on internal reports it had seen, that the Indian government had been secretly working to keep its reputation as the “world’s largest democracy” alive after being called out by researchers for serious democratic backsliding under the nationalist rule of the Narendra Modi government.
Despite publicly dismissing several global rankings that suggest the country is on a dangerous downward trajectory, officials from government ministries have been quietly assigned to monitor India’s performance, minutes from meetings show, The Guardian said. The new rankings system could be released soon, an official was quoted as saying.
Global human rights NGO Amnesty International has continued to highlight the erosion of civil rights and religious freedom under the Narendra Modi regime.
Similarly, Human Rights Watch has also continued to highlight the crackdown on civil society and media under the Modi government citing persecution of activists, journalists, protesters and critics on fabricated counterterrorism and hate speech laws. The vilification of Muslims and other minorities by some BJP leaders and police inaction against government supporters who commit violence are also among HRW’s concerns in India.
Notably, the ‘Democracy Index’, prepared by The Economist Group’s Economist Intelligence Unit, had downgraded India to a “flawed democracy” in its 2022 report due to the serious backsliding of democratic freedom under the Modi government.
In response to the news that the court granted former Philippine Senator Leila de Lima bail for the third and last drug-related charge against her, Butch Olano, Amnesty International’s Philippines Director, said: “This is a welcome development and a step towards justice.
As a human rights activist and former Senator, de Lima has been one of the staunchest critics of the human rights violations under the administration of former President Rodrigo Duterte. Since her arrest, Amnesty, alongside many other organisations, has repeatedly said that the charges against her were fabricated and that the testimonies of witnesses against her were manufactured. See: https://www.trueheroesfilms.org/thedigest/laureates/35cd51c0-93fb-11e8-b157-db4feecb7a6f
The authorities arrested de Lima after she sought to investigate violations committed in the context of the so-called “war on drugs” under the former Duterte administration, including the extrajudicial execution of thousands of people suspected of using or selling drugs, which Amnesty has said may amount to crimes against humanity. As in the case of de Lima, there has been almost no justice or accountability for the victims of these abuses and their families.
Court proceedings against de Lima in the last six years have been marked by undue delays, including the repeated failure of prosecution witnesses to appear in court and changes in judges handling the cases against her. In 2018, the UN Working Group on Arbitrary Detention concluded that the detention of de Lima was arbitrary because of the lack of legal basis and the non-observance of international norms relating to the right to a fair trial.
The arbitrary detention of de Lima reflects the broader context of increasing impunity for human rights violations in the Philippines. These violations include killings, threats and harassment of political activists, human rights defenders, members of the media and other targeted groups.