WITNESS’ Sam Gregory gave the Gruber Lecture on artificial intelligence and human rights advocacy

June 23, 2025

Sam Gregory delivered the Spring 2025 Gruber Distinguished Lecture on Global Justice on March 24, 2025, at 4:30 pm at Yale Law School. The lecture was co-moderated by his faculty hosts, Binger Clinical Professor Emeritus of Human Rights Jim Silk ’89 and David Simon, assistant dean for Graduate Education, senior lecturer in Global Affairs, and director of the Genocide Studies Program at Yale University. Gregory is the executive director of WITNESS, a human rights nonprofit organization that empowers individuals and communities to use technology to document human rights abuses and advocate for justice. He is an internationally recognized expert on using digital media and smartphone witnessing to defend and protect human rights. With over two decades of experience at the intersection of technology, media, and human rights, Gregory has become a leading figure in the field of digital advocacy. He launched the “Prepare, Don’t Panic” initiative in 2018 to prompt concerted, effective, and context-sensitive policy responses to deepfakes and deceptive AI worldwide. He focuses on leveraging emerging solutions like authenticity infrastructure, trustworthy audiovisual witnessing, and livestreamed/co-present storytelling to address misinformation, media manipulation, and rising authoritarianism.

Gregory’s lecture, titled “Fortifying Truth, Trust and Evidence in the Face of Artificial Intelligence and Emerging Technology,” focused on the challenges that artificial intelligence poses to truth, trust, and human rights advocacy. Generative AI’s rapid development and its impact on how media is made, edited, and distributed affect how digital technology can be used to expose human rights violations and defend human rights. Gregory considered how photos and videos – essential tools for human rights documentation, evidence, and storytelling – are increasingly distrusted in an era of widespread skepticism and technological advancements that enable deepfakes and AI-generated content. AI can not only create false memories but also “acts as a powerful conduit for plausible deniability.” Gregory discussed AI’s impact on the ability to believe and trust human rights voices and its role in restructuring the information ecosystem. The escalating burden of proof for human rights activists and the overwhelming volume of digital content underscore how AI can both aid and hinder accountability efforts.

In the face of these concerns, Gregory emphasized the need for human rights defenders to work proactively to shape AI systems. He stressed that AI requires a foundational, systemic architecture that ensures information systems serve, rather than undermine, human rights work. Gregory reflected that “at the fundamental (level), this is work enabled by technology, but it’s not about technology.” Digital technologies provide new mechanisms for exposing violence and human rights abuse; the abuse itself has not changed. He also pointed to the need to invest in robust community archives to protect the integrity of human rights evidence against false memories. Stressing the importance of epistemic justice, digital media literacy, and equitable access to technology and technological knowledge, Gregory discussed WITNESS’ work in organizing for digital media literacy and access in human rights digital witnessing, particularly in response to generative AI. One example he highlighted was training individuals to film audiovisual witnessing videos in ways that are difficult for AI to replicate.

As the floor opened to questions, Gregory pointed to “authenticity infrastructure” as one building block to verify content and maintain truth. Instead of treating information as a binary between AI and not AI, it is necessary to understand the entire “recipe” of how information is created, locating it along the continuum of how AI permeates modern communication. AI must be understood, not disregarded. This new digital territory will only become more relevant in human rights work, Gregory maintained. The discussion also covered regulatory challenges, courts’ struggles with AI-generated and audiovisual evidence at large, the importance of AI-infused media literacy, and the necessity of strong civil society institutions in the face of corporate media control. A recording of the lecture is available here.

https://law.yale.edu/centers-workshops/gruber-program-global-justice-and-womens-rights/gruber-lectures/samuel-gregory

International conference on ‘AI and Human Rights’ in Doha

May 27, 2025
HE Chairperson of the NHRC Maryam bint Abdullah Al Attiyah

The international conference ‘Artificial Intelligence and Human Rights: Opportunities, Risks, and Visions for a Better Future’ gets under way in Doha today. Organised by the National Human Rights Committee (NHRC), the two-day event is being held in collaboration with the UN Development Programme (UNDP), the Office of the High Commissioner for Human Rights (OHCHR), the Global Alliance of National Human Rights Institutions (GANHRI), and Qatar’s Ministry of Communications and Information Technology (MCIT) and National Cyber Security Agency, along with other international entities active in the fields of digital tools and technology.

Chairperson of the NHRC Maryam bint Abdullah Al Attiyah said in a statement on Monday that the conference addresses one of the most prominent human rights issues of our time, one that is becoming increasingly important, especially given the tremendous and growing progress in the field of artificial intelligence, which many human rights activists around the world fear will affect the rights of individuals worldwide.

She added that the developments in AI observed every day require the establishment of a legal framework that governs the rights of every individual, whether related to privacy or other rights. The framework must also regulate and control the technologies developed by companies, ensuring that rights are not infringed upon and that the development of AI technologies does not become synonymous with the pursuit of financial gain at the expense of the rights of individuals and communities.

She emphasised that the conference aims to discuss the impact of AI on human rights, not limiting itself to the challenges AI poses to the lives of individuals but also identifying the opportunities it presents to human rights specialists around the world. She noted that the coming period must see a deep focus on this area, which is evolving by the hour.

The conference is expected to bring together around 800 partners from around the world to discuss the future of globalisation. Target attendees include government officials, policymakers, AI and technology experts, human rights defenders and activists, legal professionals, AI ethics specialists, civil society representatives, academics and researchers, international organisations, private sector companies, and technology developers.

The conference is built around 12 core themes and key topics. It focuses on the foundations of artificial intelligence, including fundamental concepts such as machine learning and natural language processing. It also addresses AI and privacy – its impact on personal data, surveillance, and privacy rights. Other themes include bias and discrimination, with an emphasis on addressing algorithmic bias and ensuring fairness, as well as freedom of expression and the role of AI in content moderation, censorship, and the protection of free speech.

The international conference aims to explore the impact of AI on human rights and fundamental freedoms, analyse the opportunities and risks associated with AI from a human rights perspective, present best practices and standards for the ethical use of AI, and engage with policymakers, technology experts, civil society, and the private sector to foster multi-stakeholder dialogue. It also seeks to propose actionable policy and legal framework recommendations to ensure that AI development aligns with human rights principles.

Participating experts will address the legal and ethical frameworks, laws, policies, and ethical standards for the responsible use of artificial intelligence. They will also explore the theme of “AI and Security,” including issues related to militarisation, armed conflicts, and the protection of human rights. Additionally, the conference will examine AI and democracy, focusing on the role of AI in shaping democratic institutions and promoting inclusive participation.

Conference participants will also discuss artificial intelligence and the future of media from a human rights-based perspective, with a focus on both risks and innovation. The conference will further examine the transformations brought about by AI in employment and job opportunities, its impact on labour rights and economic inequality, as well as the associated challenges and prospects.

As part of its ongoing commitment to employing technology in service of humanity and supporting the ethical use of emerging technologies, the Ministry of Communications and Information Technology (MCIT) is also partnering in organising the conference.

For some other posts on Qatar, see: https://humanrightsdefenders.blog/tag/qatar/

https://www.gulf-times.com/article/705199/qatar/international-conference-on-ai-and-human-rights-opens-in-doha-tuesday