Posts Tagged ‘information technology’

HURIDOCS’ 2024 Annual report is out

July 8, 2025

In 2024, HURIDOCS continued strengthening partnerships, evolving tools, and expanding its reach.

“I see our contribution not just as code, but as something living—like the root bridges of Northeast India, grown with care and shaped by community. This is how I envision HURIDOCS: building human rights infrastructure that is resilient, collaborative, and deeply rooted in justice.”
— Danna Ingleton, HURIDOCS Executive Director

Supporting the global community

This year, HURIDOCS partnered with 73 organisations across 38 countries, helping develop documentation strategies, launch new platforms, and provide targeted support. These 73 documentation projects were reimagined and refined through bespoke customisation of our flagship tool, Uwazi. From databases mapping attacks on environmental defenders to resources preserving collective memory, our work continues to be shaped by those on the frontlines of human rights struggles.

Uwazi: Built with and for civil society

In 2024, our open-source platform, Uwazi, continued to grow with new machine learning tools for translation and metadata extraction, tighter security, and full integration with the Tella mobile app, making it more responsive, secure, and aligned with the needs of human rights defenders worldwide.

Convening global conversations

In 2024, HURIDOCS engaged in key global events, including a side event at the 56th Human Rights Council, the Geneva Human Rights Platform, the first Google Impact Summit, and a Human Rights Day webinar highlighting four global initiatives powered by Uwazi.

Through these events, we advanced vital discussions on the ethical use of AI, digital monitoring technologies, and the future of technology infrastructure in support of human rights.

Navigating fundraising challenges while building resilience and sustainability

HURIDOCS continued to navigate a complex funding landscape in 2024, strengthening our financial foundations to ensure long-term resilience.

We remain committed to aligning our resource strategies with our mission, providing steadfast support, insight, and partnership to those advancing human rights globally.

Strengthening our foundations

2024 marked the second year of Danna Ingleton’s leadership as Executive Director. It was a year of growth and transition, including the appointment of Grace Kwak Danciu as Chair of the HURIDOCS Board, and a heartfelt farewell to Lisa Reinsberg, whose contributions shaped the organisation for more than five years.

To ensure the long-term sustainability of our mission, we launched a new Development and Communications team under the leadership of Yolanda Booyzen. We also welcomed new staff across programmes, tech, and product, each one contributing to a stronger, more agile HURIDOCS.

As our team grows and our documentation tools evolve, we strive to build a fit-for-purpose civil society equipped to achieve justice, accountability, and the protection of human rights.

Looking ahead, we hold hope that the years to come will bring renewed compassion as we work towards a world where human rights are upheld for all.

See also: https://humanrightsdefenders.blog/2023/11/06/40-years-of-huridocs-a-bit-of-history/

Download the 2024 Annual Report


WITNESS’ Sam Gregory gave Gruber Lecture on artificial intelligence and human rights advocacy

June 23, 2025

Sam Gregory delivered the Spring 2025 Gruber Distinguished Lecture on Global Justice on March 24, 2025, at 4:30 pm at Yale Law School. The lecture was co-moderated by his faculty hosts, Binger Clinical Professor Emeritus of Human Rights Jim Silk ’89 and David Simon, assistant dean for Graduate Education, senior lecturer in Global Affairs and director of the Genocide Studies Program at Yale University.

Gregory is the executive director of WITNESS, a human rights nonprofit organization that empowers individuals and communities to use technology to document human rights abuses and advocate for justice. He is an internationally recognized expert on using digital media and smartphone witnessing to defend and protect human rights. With over two decades of experience at the intersection of technology, media, and human rights, Gregory has become a leading figure in the field of digital advocacy. He launched the “Prepare, Don’t Panic” initiative in 2018 to prompt concerted, effective, and context-sensitive policy responses to deepfakes and deceptive AI issues worldwide. He focuses on leveraging emerging solutions like authenticity infrastructure, trustworthy audiovisual witnessing, and livestreamed/co-present storytelling to address misinformation, media manipulation, and rising authoritarianism.

Gregory’s lecture, entitled “Fortifying Truth, Trust and Evidence in the Face of Artificial Intelligence and Emerging Technology,” focused on the challenges that artificial intelligence poses to truth, trust, and human rights advocacy. Generative AI’s rapid development and impact on how media is made, edited, and distributed affects how digital technology can be used to expose human rights violations and defend human rights. Gregory considered how photos and videos – essential tools for human rights documentation, evidence, and storytelling – are increasingly distrusted in an era of widespread skepticism and technological advancements that enable deepfakes and AI-generated content. AI can not only create false memories, but also “acts as a powerful conduit for plausible deniability.” Gregory discussed AI’s impact on the ability to believe and trust human rights voices and its role in restructuring the information ecosystem. The escalating burden of proof for human rights activists and the overwhelming volume of digital content underscore how AI can both aid and hinder accountability efforts.

In the face of these concerns, Gregory emphasized the need for human rights defenders to work to shape AI systems proactively. He stressed that AI requires a foundational, systemic architecture that ensures information systems serve, rather than undermine, human rights work. Gregory reflected that “at the fundamental (level), this is work enabled by technology, but it’s not about technology.” Digital technologies provide new mechanisms for exposing violence and human rights abuse; the abuse itself has not changed. He also pointed to the need to invest in robust community archives to protect the integrity of human rights evidence against false memories. Stressing the importance of epistemic justice, digital media literacy, and equitable access to technology and technological knowledge, Gregory discussed WITNESS’ work in organizing for digital media literacy and access in human rights digital witnessing, particularly in response to generative AI. One example he highlighted was training individuals to film audiovisual witnessing videos in ways that are difficult for AI to replicate.

As the floor opened to questions, Gregory pointed to “authenticity infrastructure” as one building block to verify content and maintain truth. Instead of treating information as a binary between AI and not AI, it is necessary to understand the entire “recipe” of how information is created, locating it along the continuum of how AI permeates modern communication. AI must be understood, not disregarded. This new digital territory will only become more relevant in human rights work, Gregory maintained. The discussion also covered regulatory challenges, courts’ struggles with AI-generated and audiovisual evidence at large, the importance of AI-infused media literacy, and the necessity of strong civil society institutions in the face of corporate media control. A recording of the lecture is available here.

https://law.yale.edu/centers-workshops/gruber-program-global-justice-and-womens-rights/gruber-lectures/samuel-gregory

International conference on ‘AI and Human Rights’ in Doha

May 27, 2025

Chairperson of the NHRC Maryam bint Abdullah Al Attiyah

The international conference ‘Artificial Intelligence and Human Rights: Opportunities, Risks, and Visions for a Better Future’ gets under way in Doha today. Organised by the National Human Rights Committee (NHRC), the two-day event is being held in collaboration with the UN Development Programme (UNDP), the Office of the High Commissioner for Human Rights (OHCHR), the Global Alliance of National Human Rights Institutions (GANHRI), and Qatar’s Ministry of Communications and Information Technology (MCIT) and National Cyber Security Agency, along with other international entities active in the fields of digital tools and technology.

Chairperson of the NHRC Maryam bint Abdullah Al Attiyah said in a statement Monday that the conference discusses one of the most prominent human rights issues of our time, one that is becoming increasingly important, especially with the tremendous and growing progress in the field of artificial intelligence, which many human rights activists around the world fear will impact the rights of many individuals worldwide.

She added that the developments in AI observed every day require the establishment of a legal framework that governs the rights of every individual, whether related to privacy or other rights. The framework must also regulate and control the technologies developed by companies, ensuring that rights are not infringed upon and that the development of AI technologies does not become synonymous with the pursuit of financial gain at the expense of the rights of individuals and communities.

She emphasised that the conference aims to discuss the impact of AI on human rights, not only limiting itself to the challenges it poses to the lives of individuals, but also extending to identifying the opportunities it presents to human rights specialists around the world. She noted that the coming period must witness a deep focus on this area, which is evolving by the hour.

The conference is expected to bring together around 800 partners from around the world to discuss the future of globalisation. Target attendees include government officials, policymakers, AI and technology experts, human rights defenders and activists, legal professionals, AI ethics specialists, civil society representatives, academics and researchers, international organisations, private sector companies, and technology developers.

The conference is built around 12 core themes and key topics. It focuses on the foundations of artificial intelligence, including fundamental concepts such as machine learning and natural language processing. It also addresses AI and privacy – its impact on personal data, surveillance, and privacy rights. Other themes include bias and discrimination, with an emphasis on addressing algorithmic bias and ensuring fairness, as well as freedom of expression and the role of AI in content moderation, censorship, and the protection of free speech.

The International conference aims to explore the impact of AI on human rights and fundamental freedoms, analyse the opportunities and risks associated with AI from a human rights perspective, present best practices and standards for the ethical use of AI, and engage with policymakers, technology experts, civil society, and the private sector to foster multi-stakeholder dialogue. It also seeks to propose actionable policy and legal framework recommendations to ensure that AI development aligns with human rights principles.

Participating experts will address the legal and ethical frameworks, laws, policies, and ethical standards for the responsible use of artificial intelligence. They will also explore the theme of “AI and Security,” including issues related to militarisation, armed conflicts, and the protection of human rights. Additionally, the conference will examine AI and democracy, focusing on the role of AI in shaping democratic institutions and promoting inclusive participation.

Conference participants will also discuss artificial intelligence and the future of media from a human rights-based perspective, with a focus on both risks and innovation. The conference will further examine the transformations brought about by AI in employment and job opportunities, its impact on labor rights and economic inequality, as well as the associated challenges and prospects.

As part of its ongoing commitment to employing technology in service of humanity and supporting the ethical use of emerging technologies, the Ministry of Communications and Information Technology (MCIT) is also partnering in organising the conference.

For some other posts on Qatar, see: https://humanrightsdefenders.blog/tag/qatar/

https://www.gulf-times.com/article/705199/qatar/international-conference-on-ai-and-human-rights-opens-in-doha-tuesday

Side event 7 March 2025 on Protection of defenders against technology-facilitated rights violations

February 25, 2025
  • Location: Physical
  • Date: 07 March 2025
  • Time: 1:00PM – 2:00PM CET
  • Address: Room XXV, Palais des Nations
  • Event language(s): English
  • RSVP Needed: no

New and emerging technologies have become a fundamental tool for human rights defenders to conduct their activities, boost solidarity among movements and reach different audiences. Unfortunately, these positive aspects have been overshadowed by negative impacts on the enjoyment of human rights, including increased threats and risks for human rights defenders. While we see the increased negative impacts of new technologies, we do not see that governments are addressing these impacts comprehensively.

Furthermore, States and their law enforcement agencies (often through the help of non-State actors, including business enterprises) often take down or censor the information shared by defenders on social media and other platforms. In other cases, we have seen that businesses are also complicit in attacks and violations against human rights defenders.

Conversely, lack of access to the internet and the digital gaps in many countries and regions, or affecting specific groups, limits the potential of digital technologies for activism and movement building, as well as access to information. 

The Declaration on Human Rights Defenders, adopted in 1998, does not consider these challenges, which have largely arisen with the rapid evolution of technology. In this context, and as part of activities to mark the 25th anniversary of the UN Declaration on human rights defenders, a coalition of NGOs launched a consultative initiative to identify the key issues faced by human rights defenders that are insufficiently addressed by the UN Declaration, including in the area of digital and new technologies. These issues are also reflected in the open letter to States on the draft resolution on human rights defenders that will be considered during HRC58.

This side event will be an opportunity to continue discussing the reality and the challenges that human rights defenders face in the context of new and emerging technologies. It will also be an opportunity to hear directly from those who, on a daily basis, work with defenders in the field of digital rights while highlighting their specific protection needs. Finally, the event will also help remind States about the range of obligations in this field that can contribute to inform the consultations on the HRC58 resolution on human rights defenders. 

Panelists:

  • Opening remarks: Permanent Mission of Norway
  • Speakers:
    • Carla Vitoria – Association for Progressive Communications 
    • Human rights defender from Kenya regarding the Safaricom case (via video message)
    • Woman human rights defender from Colombia regarding use of new technologies during peaceful protests
    • Human rights defender from Myanmar regarding online incitement to violence against Rohingya people
  • Video montage of civil society priorities for the human rights defender resolution at HRC58
  • Moderator: Ulises Quero, Programme Manager, Land, Environment and Business & Human Rights (ISHR)

This event is co-sponsored by Access Now, Asian Forum for Human Rights & Development (FORUM-ASIA), Association for Progressive Communications (APC), Business and Human Rights Resource Centre (BHRRC), DefendDefenders (East and Horn of Africa HRD Project), HURIDOCS, Gulf Centre for Human Rights (GCHR), International Lesbian, Gay, Bisexual, Trans and Intersex Association (ILGA World), International Service for Human Rights (ISHR), Peace Brigades International, Privacy International, Protection International, and the Regional Coalition of WHRDs in Southwest Asia and North Africa (WHRD MENA Coalition).

For the report, see: https://ishr.ch/latest-updates/human-rights-defenders-and-new-emerging-forms-of-technology-a-blessing-or-a-curse/

https://ishr.ch/events/protection-of-defenders-against-new-and-emerging-forms-of-technology-facilitated-rights-violations

McGovern Foundation awards $73.5 million for human-centered Artificial Intelligence

January 6, 2025

On 23 December 2024, the Boston-based Patrick J. McGovern Foundation announced grants totaling $73.5 million awarded in 2024 in support of human-centered AI.

Awarded to 144 nonprofit, academic, and governmental organizations in 11 countries, the grants will support the development and delivery of AI solutions built for long-term societal benefit and the creation of institutions designed to address the opportunities and challenges this emerging era presents. Grants will support organizations leveraging data science and AI to drive tangible change in a variety of areas with urgency, including climate change, human rights, media and journalism, crisis response, digital literacy, and health equity.

Gifts include $200,000 to MIT Solve to support the 2025 AI for Humanity Prize; $364,000 to Clear Global to enable scalable, multilingual, voice-powered communication and information channels for crisis-affected communities; $1.25 million to the Aspen Institute to enhance public understanding and policy discourse around AI; and $1.5 million to the United Nations Educational, Scientific and Cultural Organization (UNESCO) to advance ethical AI governance through civil society networks, policy frameworks, and knowledge resources.

Amnesty International to support Amnesty’s Algorithmic Accountability Lab to mobilize and empower civil society to evaluate AI systems and pursue accountability for AI-driven harms ($750,000)

HURIDOCS to use machine learning to enhance human rights data management and advocacy ($400,000)

“This is not a moment to react; it’s a moment to lead,” said McGovern Foundation president Vilas Dhar. “We believe that by investing in AI solutions grounded in human values, we can harness technology’s immense potential to benefit communities and individuals alike. AI can amplify human dignity, protect the vulnerable, drive global prosperity, and become a force for good.”

https://philanthropynewsdigest.org/news/mcgovern-foundation-awards-73.5-million-for-human-centered-ai

Amnesty Tech call for nominations for 3rd Digital Forensics Fellowship

January 6, 2025

On 2 January 2025 Amnesty Tech – a global collective of advocates, hackers, researchers, and technologists – announced the launch of the third Digital Forensics Fellowship (DFF).

This innovative Fellowship is an opportunity for 5 – 7 human rights defenders (HRDs), journalists, and/or technologists working in civil society organisations around the world to train with Amnesty Tech’s Security Lab to build skills and knowledge on advanced digital threats and forensic investigation techniques. This is a part-time Fellowship that will last 3-4 months and will come with a stipend.

Fellowship start and end date: The Fellowship is expected to run from April to July 2025.

Application deadline: 23 January 2025

Location: dependent upon the suitable applicant’s location.

Remuneration: Successful applicants will be given a stipend of £500/month for their time.

Background

Across the world, hard-won rights are being weakened and denied every day. Increasingly, much of the repression faced by HRDs and journalists begins online. Since 2017, Amnesty Tech’s investigations have exposed vast and well-orchestrated digital attacks against activists and journalists in countries such as Morocco, Egypt, Azerbaijan, Qatar, Serbia, Mexico and Pakistan.

Advanced technical capacity is needed in all world regions to tackle the mercenary spyware crisis. By fostering a more decentralised, global, and diverse network of well-trained incident responders and investigators, we can jointly contribute to more timely and effective protection of HRDs and journalists against unlawful surveillance.

The spyware landscape changes rapidly, and creativity and persistence are needed to research and identify new trends, tools, and tactics used to target civil society. The curriculum for the third edition of the DFF will be tailored to the cohort and will be future facing to prepare Fellows to work on current and future spyware threats. [see also: https://humanrightsdefenders.blog/2024/05/16/two-young-human-rights-defenders-raphael-mimoun-and-nikole-yanez-on-tech-for-human-rights/]

Objectives and deliverables

Participants in the Digital Forensics Fellowship will be expected to:

  • Attend an in-person, week-long convening where the majority of trainings will be conducted. This training will take place in June 2025; the exact location will be confirmed shortly.
  • Dedicate approximately 10 – 12 hours per month to the Fellowship, outside of the convening, by participating in remote training sessions and through independent work outside of scheduled sessions to deepen understanding of training topics.
  • Engage with the programme cohort and the Security Lab during the in-person and remote trainings, and in discussion groups on an ad-hoc basis.

Essential Requirements

  • An understanding of the technical threats, digital attacks and challenges faced by journalists, HRDs, and civil society organisations in their local contexts.
  • Demonstrated interest in conducting investigations to identify digital attacks against civil society, with the goal of building resilience among civil society actors in the face of surveillance after the Fellowship.
  • Familiarity using command line tools and basic knowledge of scripting languages like Bash and Python to analyse data.
  • An understanding of how internet infrastructure works, for example the role of IP addresses, TLS certificates, and DNS queries.
  • Technical familiarity with GNU/Linux operating systems, as well as Android and iPhone systems.
  • Ability to work with English as the primary language throughout the Fellowship.

Application instructions:

To apply, applicants will be required to submit the following via our recruitment system eArcu – please upload all relevant documents to the CV section of the application portal.

  1. A copy of your most recent CV.
  2. A cover letter explaining your motivation and interest in the Fellowship and outlining how you meet the essential requirements outlined in the job description.

Applications must be in PDF, Word, PowerPoint or Excel format.
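As a purely illustrative aside (not part of Amnesty’s application materials), the baseline skills named in the essential requirements above, such as using a small script to see how a hostname resolves and to inspect a site’s TLS certificate, might be sketched in Python like this; the domain used is hypothetical:

```python
import socket
import ssl

def resolve(host):
    """Return the unique IP addresses a hostname resolves to via DNS."""
    infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

def certificate_subject(host, port=443, timeout=5):
    """Open a TLS connection and return the certificate's subject fields."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # getpeercert() returns the subject as nested tuples; flatten to a dict.
    return {key: value for field in cert["subject"] for key, value in field}
```

For instance, `resolve("example.org")` lists the addresses behind a domain and `certificate_subject("example.org")` shows who its certificate was issued to; checks of this kind are one starting point when examining infrastructure linked to suspected attacks.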

Application Process:

Shortlisted applicants will be invited to complete a recorded video interview in the week commencing 10 February, answering a series of pre-set questions via video, which allows us to learn more about you and your suitability for the Fellowship. Successful applicants from this process will be invited to a Microsoft Teams interview with the panel in the week commencing 3 March.

How to apply:

Careers | Amnesty International

Tribute to James Lawson of the Council of Europe

February 17, 2024

Bert Verstappen – retired from HURIDOCS – wrote the following tribute to a person who – from an intergovernmental position – contributed greatly to the development of the NGO network:

It is with deep sadness that HURIDOCS has to announce the unexpected passing of James Lawson on 11 February. James held a leading role in the field of information management at the Council of Europe. In addition, he devoted a huge amount of time and energy to HURIDOCS.

James was a visionary who introduced new tools and techniques for human rights information handling to the HURIDOCS network. He was keen to share his enthusiasm for developments in the field of information management that could and should benefit human rights organisations. He was convinced that, in the age of the Internet, librarians continue to play an important role as information sharers.

A major initiative taken by James was the multilingual human rights search engine HuriSearch, which he began planning in 1998. HuriSearch provided a single point of access to information published on over 5,000 websites of human rights organisations worldwide, crawling and indexing about 8 to 10 million web pages. It was publicly available from 2003 until 2016.

James was an active and committed member of the HURIDOCS Continuation Committee – as its Board was called for many years – serving on the CC from 1992 to 2009. He was HURIDOCS Treasurer and, as leader of the Task Force on Software Development, oversaw the development of HURIDOCS’ tools and techniques.

James was also Coordinator and Chairperson of a large number of meetings of the European Co-ordination Committee on Human Rights Documentation (https://www.ecchrd.org/). During these meetings, he introduced advances in technology such as the use of metadata and the Extensible Markup Language (XML).

James also provided various trainings on human rights documentation on behalf of HURIDOCS. Among other activities, he trained NGOs preparing evidence for the Truth and Reconciliation Commission in South Africa and was the main resource person in a training for French-speaking African trainers in Senegal. He also held HURIDOCS trainings with local human rights organisations in Burkina Faso, the DR Congo, Georgia, Ghana, Haiti, Indonesia and other countries.

HURIDOCS thanks James for his engagement and expertise. We wish strength to Hanne and his daughters.

In the deepfake era, we need to hear the Human Rights Defenders

December 19, 2023

In a blog post for the Council on Foreign Relations (18 December 2023), Raquel Vazquez Llorente argues that “artificial intelligence is increasingly used to alter and generate content online. As development of AI continues, societies and policymakers need to ensure that it incorporates fundamental human rights.” Raquel is the Head of Law and Policy, Technology Threats and Opportunities at WITNESS.

The urgency of integrating human rights into the DNA of emerging technologies has never been more pressing. Through my role at WITNESS, I’ve observed first-hand the profound impact of generative AI across societies, and most importantly, on those defending democracy at the frontlines.

The recent elections in Argentina were marked by the widespread use of AI in campaigning material. Generative AI has also been used to target candidates with embarrassing content (increasingly of a sexual nature), to generate political ads, and to support candidates’ campaigns and outreach activities in India, the United States, Poland, Zambia, and Bangladesh (to name a few). The overall result of the lack of strong frameworks for the use of synthetic media in political settings has been a climate of mistrust regarding what we see or hear.

Not all digital alteration is harmful, though. Part of my work involves identifying how emerging technologies can foster positive change. For instance, with appropriate disclosure, synthetic media could be used to enhance voter education and engagement. Generative AI could help create informative content about candidates and their platforms, or of wider election processes, in different languages and formats, improving inclusivity or reducing barriers for underdog or outsider candidates. For voters with disabilities, synthetic media could provide accessible formats of election materials, such as sign language avatars or audio descriptions of written content. Satirical deepfakes could engage people who might otherwise be disinterested in politics, bringing attention to issues that might not be covered in mainstream media. We need to celebrate and protect these uses.

As two billion people across fifty countries head to the polls next year, there is a crucial question: how can we build resilience into our democracy in an era of audiovisual manipulation? When AI can blur the lines between reality and fiction with increasing credibility and ease, discerning truth from falsehood becomes not just a technological battle, but a fight to uphold democracy.

From conversations with journalists, activists, technologists and other communities impacted by generative AI and deepfakes, I have learnt that the effects of synthetic media on democracy are a mix of new, old, and borrowed challenges.

Generative AI introduces a daunting new reality: inconvenient truths can be denied as deep faked, or at least facilitate claims of plausible deniability to evade accountability. The burden of proof, or perhaps more accurately, the “burden of truth” has shifted onto those circulating authentic content and holding the powerful to account. This is not just a crisis of identifying what is fake. It is also a crisis of protecting what is true. When anything and everything can be dismissed as AI-generated or manipulated, how do we elevate the real stories of those defending our democracy at the frontlines?

But AI’s impact doesn’t stop at new challenges; it exacerbates old inequalities. Those who are already marginalized and disenfranchised – due to their gender, ethnicity, race or belonging to a particular group – face amplified risks. AI is like a magnifying glass for exclusion, and its harms are cumulative. AI deepens existing vulnerabilities, posing a serious threat to the principles of inclusivity and fairness that lie at the heart of democratic values. Similarly, sexual deepfakes can have an additional chilling effect, discouraging women, LGBTQ+ people and individuals from minoritized communities from participating in public life, thus eroding the diversity and representativeness that are essential for a healthy democracy.

Lastly, much as with social media, where we failed to incorporate the voices of the global majority, we have borrowed previous mistakes. The shortcomings in moderating content, combating misinformation, and protecting user privacy have had profound implications on democracy and social discourse. Similarly, in the context of AI, we are yet to see meaningful policies and regulation that not only consult globally those that are being impacted by AI but, more importantly, center the solutions that affected communities beyond the United States and Europe prioritize. This highlights a crucial gap: the urgent need for a global perspective in AI governance, one that learns from the failures of social media in addressing cultural and political nuances across different societies.

As we navigate AI’s impact on democracy and human rights, our approach to these challenges should be multifaceted. We must draw on a blend of strategies—ones that address the immediate ‘new’ realities of AI, respond to the ‘old’ but persistent challenges of inequality, and incorporate ‘borrowed’ wisdom from our past experiences.

First, we must ensure that new AI regulations and companies’ policies are steeped in human rights law and principles, such as those enshrined in the Universal Declaration of Human Rights. In the coming years, one of the most important areas in socio-technical expertise will be the ability to translate human rights protections into AI policies and legislation.

While anchoring new policies in human rights is crucial, we should not lose sight of the historical context of these technological advancements. We must look back as we move forward. As with technological advancements of the past, we should remind ourselves that progress is not how far you go, but how many people you bring along. We should really ask, is it technological progress if it is not inclusive, if it reproduces a disadvantage? Technological advancement that leaves people behind is not true progress; it is an illusion of progress that perpetuates inequality and systems of oppression. This past weekend marked twenty-five years since the adoption of the UN Declaration on Human Rights Defenders, which recognizes the key role of human rights defenders in realizing the Universal Declaration of Human Rights and other legally binding treaties. In the current wave of excitement around generative AI, the voices of those protecting human rights at the frontlines have rarely been more vital.

Our journey towards a future shaped by AI is also about learning from the routes we have already travelled, especially those from the social media era. Synthetic media has to be understood in the context of the broader information ecosystem. We are monetizing the spread of falsehoods while keeping local content moderators and third-party fact-checkers on precarious salaries, and putting the blame on platform users for not being educated enough to spot the fakery. The only way to align democratic values with technology goals is to place responsibility and establish accountability across the whole information and AI ecosystem, from foundation model researchers to those commercializing AI tools to those creating and distributing content.

In weaving together these new, old, and borrowed strands of thought, we create a powerful blueprint for steering the course of AI. This is not just about countering a wave of digital manipulation—it is about championing technology advancement that amplifies our democratic values, deepens our global engagement, and preserves the core of our common humanity in an increasingly AI-powered and image-driven world. By centering people’s rights in AI development, we not only protect our individual freedoms, but also fortify our shared democratic future.

https://www.cfr.org/blog/protect-democracy-deepfake-era-we-need-bring-voices-those-defending-it-frontlines

Amnesty International website now accessible even in repressive countries

December 5, 2023

On 5 December 2023 Amnesty International launched its global website as an .onion site on the Tor network, giving users greater access to its work exposing and documenting human rights violations in areas where government censorship and digital surveillance are rife.

In recent years, a number of countries including Algeria, China, Iran, Russia and Viet Nam have blocked Amnesty International websites.

“By making Amnesty International’s website available as a secure .onion site on Tor, more people will be able to read our human rights research and engage with the vital work of speaking truth to power, and defending human rights,” said Donncha Ó Cearbhaill, Head of Security Lab at Amnesty Tech.

Audiences accessing the Amnesty.org website through Tor, however, will be able to bypass these attempts at censorship.

An .onion site is a website that is only accessible through Tor, a volunteer-run network of servers which encrypt and route internet traffic through multiple servers around the world, providing users with an added layer of privacy and anonymity.

“The onion site provides a means for individuals around the world to exercise their rights to privacy, freedom of expression, freedom of peaceful assembly, and freedom of association in a safe and secure online environment,” said Donncha Ó Cearbhaill, Head of Security Lab at Amnesty Tech.

The new Amnesty onion site can be accessed using the Tor Browser through our secure onion address at: https://www.amnestyl337aduwuvpf57irfl54ggtnuera45ygcxzuftwxjvvmpuzqd.onion.

The browser must be downloaded and installed through the official Tor Project website.

How to access Amnesty websites using Tor

The Tor Project has a version of the Tor Browser for many common platforms, including Windows, Mac, Linux, and Android. Onion sites can also be accessed on iPhone through the Onion Browser app. In countries where the Tor network is blocked, visitors will also need to configure Tor bridges which help bypass attempts to block connections to the network.
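For users running the standalone Tor daemon rather than Tor Browser, bridges are typically configured in the torrc file. As an illustration, an obfs4 bridge configuration might look like the fragment below; the bridge line here is a placeholder, and real bridge addresses must be requested from bridges.torproject.org or via Tor Browser’s built-in bridge request feature:

```
UseBridges 1
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
# Placeholder values only -- obtain real bridge lines from bridges.torproject.org
Bridge obfs4 192.0.2.1:443 0123456789ABCDEF0123456789ABCDEF01234567 cert=EXAMPLECERT iat-mode=0
```

Tor Browser users do not need to edit files by hand: the same bridge lines can be pasted into the browser’s connection settings.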

Amnesty International is also making language-specific content published in Chinese, Farsi and Russian available on the Amnesty International Tor onion website.

“We are thrilled that one of the most recognized human rights organizations has adopted an onion service to provide greater online protections for those seeking information, support and advocacy. Amnesty International’s choice to offer an onion version of their website underlines the critical role of this open-source privacy technology as an important tool in our shared work of advancing human rights,” said Isabela Fernandes, Executive Director of the Tor Project.

What are .onion sites?

Onion services never leave the Tor network. Their location and IP addresses are hidden, making it difficult to censor them or identify their operators. In addition, all traffic between users and onion services is end-to-end encrypted. As a result, users leave no metadata trail, making it far more difficult for their identity or internet activity to be tracked.
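Onion addresses are also self-authenticating: under Tor’s v3 onion service specification, the 56-character address encodes the service’s ed25519 public key, a two-byte checksum, and a version byte, so the address itself identifies the key the service must prove it holds. A minimal Python sketch (standard library only) that validates this structure:

```python
import base64
import hashlib

def is_valid_v3_onion(address: str) -> bool:
    """Check the structure and checksum of a v3 .onion address."""
    if not address.endswith(".onion"):
        return False
    label = address[: -len(".onion")]
    if len(label) != 56:  # 35 bytes encoded as base32
        return False
    try:
        raw = base64.b32decode(label.upper())
    except Exception:
        return False
    # Layout: 32-byte ed25519 public key | 2-byte checksum | 1-byte version
    pubkey, checksum, version = raw[:32], raw[32:34], raw[34:]
    if version != b"\x03":
        return False
    expected = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
    return checksum == expected
```

Tor clients perform the same check before connecting, which is why a mistyped or tampered onion address fails immediately rather than resolving to a different site.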

Both Tor and virtual private networks (VPNs) can help internet users bypass website blocking and censorship.

Tor routes each connection through a number of volunteer-run, randomly assigned servers, preventing any individual or organization from tracking both the identity and the internet activity of users, whereas a VPN connects through a single privately owned server.

The Tor software was first released more than 20 years ago and is now developed and maintained by the Tor Project, a US-registered not-for-profit organization which is focused on advancing human rights and freedoms by creating and deploying free and open-source anonymity software and privacy technologies.

https://www.amnesty.org/en/latest/news/

HURIDOCS – who will continue Friedhelm Weinberg’s excellent leadership?

December 12, 2022

After more than 10 years, Friedhelm Weinberg will be leaving HURIDOCS in early 2023. Having worked with him in person on many occasions, I can testify that his leadership has been most impressive, both for the NGO itself [see e.g. https://humanrightsdefenders.blog/category/organisations/huridocs/] and in the area of networking with others, such as the MEA and THF [see e.g. his: https://youtu.be/zDxPbd9St9Y]. In his own announcement, he modestly refers to all his colleagues:

It has been an incredible decade with HURIDOCS, working with amazing colleagues and partners at the intersection of human rights and technology. Together, we have drastically increased support to activists to leverage technology for documentation, litigation and advocacy work. We have pioneered flexible, reliable and robust software tools such as Uwazi, while responsibly sunsetting the past generation of open source software.

None of this would have been possible without the team we have built, and that was collaborating remotely across the globe well before 2020. It’s a committed, humorous and professional bunch, and I have learned so much with every single one of them, as we made things happen and as we hit walls and then picked each other up. I am also grateful to our board that brings together wisdom from leading NGOs, technology companies, the financial sector, but, more importantly, people that were generous with guidance, encouragement and critique.

It has also been a decade of many heartbreaks. From partners whose offices have been raided, that have been declared foreign agents, threatened, attacked. From wars and conflicts breaking out, affecting people we work with. From the difficulties of all we’re doing sometimes not being enough. From worrying how to raise the money to sustain and grow a team that can rise to these challenges.

It is a bittersweet departure, because it has been life-affirming – and yet it is for a perspective that fills me with warmth and excitement. For a while, I will be with our children, with the second one due to arrive in early 2023. 

As I have made the decision to leave HURIDOCS, I also have felt really down and much of the stress built up over a decade manifested physically. Seeking treatment, I have been diagnosed with burnout and depression, and have been recovering with the support from specialists, friends and family. This is neither a badge of honor nor something I want to be shy about, it’s just the reason you haven’t seen much of me recently in professional circles. It’s getting better and I am grateful to have the time and space for healing.

Currently, Nancy Yu is leading HURIDOCS as Interim Executive Director, while Lisa Reinsberg, as Board Chair, holds the space and directs the succession process. I am grateful to both of them for stepping up and stepping in, as well as to the team, our partners and funders for a decade of working together to advance human rights.

As the search for his successor has started, please have a look at the recruitment announcement and consider applying or sharing it with suitable candidates: https://lnkd.in/e7Y7smqT

https://www.linkedin.com/feed/update/urn:li:activity:7005479545189322752/