Posts Tagged ‘technology’

Amnesty’s annual State of the World’s Human Rights report 2023 is out

April 25, 2024
  • Powerful governments cast humanity into an era devoid of effective international rule of law, with civilians in conflicts paying the highest price
  • Rapidly changing artificial intelligence is left to create fertile ground for racism, discrimination and division in landmark year for public elections
  • Standing against these abuses, people the world over mobilized in unprecedented numbers, demanding human rights protection and respect for our common humanity

The world is reaping a harvest of terrifying consequences from escalating conflict and the near breakdown of international law, said Amnesty International as it launched its annual report, The State of the World’s Human Rights, delivering an assessment of human rights in 155 countries.

Amnesty International also warned that the breakdown of the rule of law is likely to accelerate with rapid advancement in artificial intelligence (AI) which, coupled with the dominance of Big Tech, risks a “supercharging” of human rights violations if regulation continues to lag behind advances.

“Amnesty International’s report paints a dismal picture of alarming human rights repression and prolific international rule-breaking, all in the midst of deepening global inequality, superpowers vying for supremacy and an escalating climate crisis,” said Amnesty International’s Secretary General, Agnès Callamard.

“Israel’s flagrant disregard for international law is compounded by the failures of its allies to stop the indescribable civilian bloodshed meted out in Gaza. Many of those allies were the very architects of that post-World War Two system of law. Alongside Russia’s ongoing aggression against Ukraine, the growing number of armed conflicts, and massive human rights violations witnessed, for example, in Sudan, Ethiopia and Myanmar – the global rule-based order is at risk of decimation.”

Lawlessness, discrimination and impunity in conflicts and elsewhere have been enabled by unchecked use of new and familiar technologies which are now routinely weaponized by military, political and corporate actors. Big Tech’s platforms have stoked conflict. Spyware and mass surveillance tools are used to encroach on fundamental rights and freedoms, while governments are deploying automated tools targeting the most marginalized groups in society.

“In an increasingly precarious world, unregulated proliferation and deployment of technologies such as generative AI, facial recognition and spyware are poised to be a pernicious foe – scaling up and supercharging violations of international law and human rights to exceptional levels,” said Agnès Callamard.

“During a landmark year of elections and in the face of the increasingly powerful anti-regulation lobby driven and financed by Big Tech actors, these rogue and unregulated technological advances pose an enormous threat to us all. They can be weaponized to discriminate, disinform and divide.”

Read more about Amnesty researchers’ biggest human rights concerns for 2023/24.


U.S. State Department and the EU release an approach for protecting human rights defenders from online attacks.

March 13, 2024

On 12 March 2024, the U.S. and the European Union issued joint guidance for online platforms to help mitigate virtual attacks targeting human rights defenders, reports Alexandra Kelley, Staff Correspondent at Nextgov/FCW.

Outlined in 10 steps, the guidance was developed following stakeholder consultations from January 2023 to February 2024. Nongovernmental organizations, trade unionists, journalists, lawyers, and environmental and land activists advised both governments on how to protect human rights defenders on the internet.

Recommendations within the guidance include: committing to an HRD [human rights defender] protection policy; identifying risks to HRDs; sharing information with peers and select stakeholders; creating policies to monitor performance against baseline metrics; resourcing staff adequately; building capacity to address local risks; offering education on safety tools; creating an incident reporting channel; providing HRDs with access to help; and incorporating a strong, transparent infrastructure.

Digital threats HRDs face include targeted Internet shutdowns, censorship, malicious cyber activity, unlawful surveillance, and doxxing. Given the severity and reported increase of digital attacks against HRDs, the guidance calls upon online platforms to take mitigating measures.

“The United States and the European Union encourage online platforms to use these recommendations to determine and implement concrete steps to identify and mitigate risks to HRDs on or through their services or products,” the guidance reads.

The ten guiding points laid out in the document reflect existing transatlantic policy commitments, including the Declaration for the Future of the Internet. Like other digital guidance, however, these actions are voluntary. 

“These recommendations may be followed by further actions taken by the United States or the European Union to promote rights-respecting approaches by online platforms to address the needs of HRDs,” the document said.

https://www.nextgov.com/digital-government/2024/03/us-eu-recommend-protections-human-rights-defenders-online/394865

Alex, a Romanian activist, works at the intersection of human rights, technology and public policy.

January 24, 2024

On 22 January 2024, Amnesty International published an interesting piece by Alex, a 31-year-old Romanian activist working at the intersection of human rights, technology and public policy.

Seeking to use her experience and knowledge of tech for political change, Alex applied and was accepted onto the Digital Forensics Fellowship led by the Security Lab at Amnesty Tech. The Digital Forensics Fellowship (DFF) is an opportunity for human rights defenders (HRDs) working at the nexus of human rights and technology to expand their learning.

Here, Alex shares her activism journey and insight into how like-minded human rights defenders can join the fight against spyware:

In the summer of 2022, I watched a recording of Claudio Guarnieri, former Head of the Amnesty Tech Security Lab, presenting about Security Without Borders at the 2016 Chaos Communication Congress. After following the investigations of the Pegasus Project and other projects centring on spyware being used on journalists and human rights defenders, his call to action at the end — “Find a cause and assist others” — resonated with me long after I watched the talk.

Becoming a tech activist

A few days later, Amnesty Tech announced the launch of the Digital Forensics Fellowship (DFF). It was serendipity, and I didn’t question it. At that point, I had already pushed myself to seek out a more political, more involved way to share my knowledge. Not tech for the sake of tech, but tech activism to ensure political change.

Alex is a 31-year-old Romanian activist, working at the intersection of human rights, technology and public policy.

I followed an atypical path for a technologist. Prior to university, I dreamt of being a published fiction author, only to switch to studying industrial automation in college. I spent five years as a developer in the IT industry and two as Chief Technology Officer for an NGO, where I finally found myself using my tech knowledge to support journalists and activists.

My approach to technology, like my approach to art, is informed by political struggles, as well as the questioning of how one can lead a good life. My advocacy for digital rights follows this thread. For me, technology is merely one of many tools at the disposal of humanity, and it should never be a barrier to decent living, nor an oppressive tool for anyone.


The opportunity offered by the DFF matched my interests and the direction I wanted to take my activism. During the year-long training programme, from 2022 to 2023, what I learned proved valuable for my advocacy work.

In 2022, the Child Sexual Abuse Regulation was proposed in the EU. I focused on conducting advocacy to make it as clear as possible that losing encrypted communication would make life decidedly worse for everyone in the EU. We ran a campaign to raise awareness of the importance of end-to-end encryption for journalists, activists and people in general. Our communication unfolded under the banner of “you don’t realize how precious encryption is until you’ve lost it”. Apti.ro, the Romanian non-profit organisation that I work with, also participated in the EU-wide campaign, as part of the EDRi coalition. To add fuel to the fire, spyware scandals erupted across the EU. My home country, Romania, borders countries where spyware has been proven to have been used to invade the personal lives of journalists, political opponents of the government and human rights defenders.

The meaning of being a Fellow

The Security Lab provided us with theoretical and practical sessions on digital forensics, while the cohort was a safe, vibrant space to discuss challenges we were facing. We debugged together and discussed awful surveillance technology at length, contributing our own local perspective.

The importance of building cross-border networks of cooperation and solidarity became clear to me during the DFF. I heard stories of struggles from people involved in large and small organizations alike. I am convinced our struggles are intertwined, and we should join forces whenever possible.

Now when I’m working with other activists, I try not to talk of “forensics”. Instead, I talk about keeping ourselves safe, and our conversations private. Often, discussions we have as activists are about caring for a particular part of our lives – our safety when protesting, our confidentiality when organizing, our privacy when convening online. Our devices and data are part of this process, as is our physical body. At the end of the day, digital forensics are just another form of caring for ourselves.

I try to shape discussions about people’s devices similarly to how doctors discuss the symptoms of an illness. The person whose device is at the centre of the discussion is the best judge of the symptoms, and it’s important to never minimize their apprehension. It’s also important to go through the steps of the forensics in a way that allows them to understand what is happening and what the purpose of the procedure is.

I never use a one-size-fits-all approach because the situation of the person who owns a device informs the ways it might be targeted or infected.

The human approach to technology

My work is human-centred and technology-focused, and it requires care and concentration to achieve meaningful results. If you are an activist interested in working on digital forensics, start by digging deep into the threats you see in your local context. If numerous phishing campaigns are unfolding, dig into network forensics and map out the owners of the domains and the infrastructure.
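By way of illustration, here is a minimal sketch in Python of that first step of mapping the infrastructure behind suspicious domains. It is purely illustrative: it assumes a Unix-like system with the standard “whois” command-line client installed, and the domain names are hypothetical placeholders.

import socket
import subprocess

# Hypothetical domains spotted in a local phishing campaign.
SUSPECT_DOMAINS = ["example-phish.test", "login-example.test"]

for domain in SUSPECT_DOMAINS:
    # Resolve the domain to find the hosting infrastructure.
    try:
        ip = socket.gethostbyname(domain)
    except socket.gaierror:
        ip = "unresolved"
    print(f"{domain} -> {ip}")

    # Ask the system whois client for registration details
    # (registrar, creation date, name servers).
    result = subprocess.run(["whois", domain], capture_output=True, text=True)
    for line in result.stdout.splitlines():
        if line.strip().startswith(("Registrar:", "Creation Date:", "Name Server:")):
            print("    " + line.strip())

Resolved IP addresses can then be fed into public reverse-DNS and hosting lookups to see whether separate campaigns share infrastructure.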

Secondly, get to know the person you are working with. If they are interested in secure communications, help them gain a better understanding of mobile network-based attacks, and suggest instant messaging apps that preserve the privacy and security of their users. In time, they will be able to spot the “empty words” used to market messaging apps that are not end-to-end encrypted.

Finally, to stay true to the part of me that loves a well-told story, read not only reports of ongoing spyware campaigns, but narrative explorations from the people involved. “Pegasus: The Story of the World’s Most Dangerous Spyware” by Laurent Richard and Sandrine Rigaud is a good example that documents both the human and the technical aspects. The Shoot the Messenger podcast, by PRX and Exile Content Studio, is also great: it focuses on Pegasus, tracing it from the brutal murder of Jamal Khashoggi to the recent infection of the device of Galina Timchenko, journalist and founder of Meduza.

We must continue to do this research, however difficult it may be, and to tell the stories of those impacted by these invasive espionage tactics. Without this work we wouldn’t be making the political progress we’ve seen to stem the development and use of this atrocious technology.

https://www.amnesty.org/en/search/Alex/

In the deepfake era, we need to hear the Human Rights Defenders

December 19, 2023

In a blog post for the Council on Foreign Relations (18 December 2023), Raquel Vazquez Llorente argues that “Artificial intelligence is increasingly used to alter and generate content online. As development of AI continues, societies and policymakers need to ensure that it incorporates fundamental human rights.” Raquel is Head of Law and Policy, Technology Threats and Opportunities, at WITNESS.

The urgency of integrating human rights into the DNA of emerging technologies has never been more pressing. Through my role at WITNESS, I’ve observed first-hand the profound impact of generative AI across societies, and most importantly, on those defending democracy at the frontlines.

The recent elections in Argentina were marked by the widespread use of AI in campaigning material. Generative AI has also been used to target candidates with embarrassing content (increasingly of a sexual nature), to generate political ads, and to support candidates’ campaigns and outreach activities in India, the United States, Poland, Zambia, and Bangladesh (to name a few). The overall result of the lack of strong frameworks for the use of synthetic media in political settings has been a climate of mistrust regarding what we see or hear.

Not all digital alteration is harmful, though. Part of my work involves identifying how emerging technologies can foster positive change. For instance, with appropriate disclosure, synthetic media could be used to enhance voter education and engagement. Generative AI could help create informative content about candidates and their platforms, or of wider election processes, in different languages and formats, improving inclusivity or reducing barriers for underdog or outsider candidates. For voters with disabilities, synthetic media could provide accessible formats of election materials, such as sign language avatars or audio descriptions of written content. Satirical deepfakes could engage people who might otherwise be disinterested in politics, bringing attention to issues that might not be covered in mainstream media. We need to celebrate and protect these uses.

As two billion people around the world go to voting stations next year in fifty countries, there is a crucial question: how can we build resilience into our democracy in an era of audiovisual manipulation? When AI can blur the lines between reality and fiction with increasing credibility and ease, discerning truth from falsehood becomes not just a technological battle, but a fight to uphold democracy.

From conversations with journalists, activists, technologists and other communities impacted by generative AI and deepfakes, I have learnt that the effects of synthetic media on democracy are a mix of new, old, and borrowed challenges.

Generative AI introduces a daunting new reality: inconvenient truths can be denied as deep faked, or at least facilitate claims of plausible deniability to evade accountability. The burden of proof, or perhaps more accurately, the “burden of truth” has shifted onto those circulating authentic content and holding the powerful to account. This is not just a crisis of identifying what is fake. It is also a crisis of protecting what is true. When anything and everything can be dismissed as AI-generated or manipulated, how do we elevate the real stories of those defending our democracy at the frontlines?

But AI’s impact doesn’t stop at new challenges; it exacerbates old inequalities. Those who are already marginalized and disenfranchised—due to their gender, ethnicity, race or belonging to a particular group—face amplified risks. AI is like a magnifying glass for exclusion, and its harms are cumulative. AI deepens existing vulnerabilities, bringing a serious threat to principles of inclusivity and fairness that lie at the heart of democratic values. Similarly, sexual deepfakes can have an additional chilling effect, discouraging women, LGBTQ+ people and individuals from minoritized communities from participating in public life, thus eroding the diversity and representativeness that are essential for a healthy democracy.

Lastly, much as with social media, where we failed to incorporate the voices of the global majority, we have borrowed previous mistakes. The shortcomings in moderating content, combating misinformation, and protecting user privacy have had profound implications on democracy and social discourse. Similarly, in the context of AI, we are yet to see meaningful policies and regulation that not only consult globally those that are being impacted by AI but, more importantly, center the solutions that affected communities beyond the United States and Europe prioritize. This highlights a crucial gap: the urgent need for a global perspective in AI governance, one that learns from the failures of social media in addressing cultural and political nuances across different societies.

As we navigate AI’s impact on democracy and human rights, our approach to these challenges should be multifaceted. We must draw on a blend of strategies—ones that address the immediate ‘new’ realities of AI, respond to the ‘old’ but persistent challenges of inequality, and incorporate ‘borrowed’ wisdom from our past experiences.

First, we must ensure that new AI regulations and companies’ policies are steeped in human rights law and principles, such as those enshrined in the Universal Declaration of Human Rights. In the coming years, one of the most important areas in socio-technical expertise will be the ability to translate human rights protections into AI policies and legislation.

While anchoring new policies in human rights is crucial, we should not lose sight of the historical context of these technological advancements. We must look back as we move forward. As with technological advancements of the past, we should remind ourselves that progress is not how far you go, but how many people you bring along. We should really ask, is it technological progress if it is not inclusive, if it reproduces a disadvantage? Technological advancement that leaves people behind is not true progress; it is an illusion of progress that perpetuates inequality and systems of oppression. This past weekend marked twenty-five years since the adoption of the UN Declaration on Human Rights Defenders, which recognizes the key role of human rights defenders in realizing the Universal Declaration of Human Rights and other legally binding treaties. In the current wave of excitement around generative AI, the voices of those protecting human rights at the frontlines have rarely been more vital.

Our journey towards a future shaped by AI is also about learning from the routes we have already travelled, especially those from the social media era. Synthetic media has to be understood in the context of the broader information ecosystem. We are monetizing the spread of falsehoods while keeping local content moderators and third-party fact-checkers on precarious salaries, and putting the blame on platform users for not being educated enough to spot the fakery. The only way to align democratic values with technology goals is by both placing responsibility and establishing accountability across the whole information and AI ecosystem, from foundation-model researchers to those commercializing AI tools and those creating and distributing content.

In weaving together these new, old, and borrowed strands of thought, we create a powerful blueprint for steering the course of AI. This is not just about countering a wave of digital manipulation—it is about championing technology advancement that amplifies our democratic values, deepens our global engagement, and preserves the core of our common humanity in an increasingly AI-powered and image-driven world. By centering people’s rights in AI development, we not only protect our individual freedoms, but also fortify our shared democratic future.

https://www.cfr.org/blog/protect-democracy-deepfake-era-we-need-bring-voices-those-defending-it-frontlines

NGOs express fear that new EU ‘terrorist content’ draft will make things worse for human rights defenders

January 31, 2019

On Wednesday 30 January 2019, Mike Masnick published a piece in TechDirt entitled “Human Rights Groups Plead With The EU Not To Pass Its Awful ‘Terrorist Content’ Regulation”. The key argument is that machine-learning algorithms are unable to distinguish between terrorist propaganda and investigations of, say, war crimes. It points out, as an example, that Germany’s anti-“hate speech” law has proven open to misuse by authoritarian regimes.

Today official launch of AI’s Panic Button – a new App to fight attack, kidnap and torture

June 23, 2014

Amnesty International launches new open source ‘Panic Button’ app to help activists facing imminent danger.

Today, 23 June 2014, Amnesty International launches its open source ‘Panic Button’ app to help human rights defenders facing imminent danger. The aim is to increase protection for those who face the threat of arrest, attack, kidnap and torture.

Can ‘big data’ help protect human rights?

January 5, 2014

Samir Goswami, managing director of Amnesty International USA’s Individuals and Communities at Risk Program, and Mark Cooke, chief innovation officer at Tax Management Associates, wrote a piece about how ‘big data’ can help human rights rather than just violate them. The piece is worth reading but falls short of being convincing. Better prediction of human rights violations, which may [!] result from the analysis of huge amounts of data, would of course be welcome, but I remain unconvinced that it would lead to a reduction in actual violations. Too many of these are planned and willful, while the mobilization of shame and international solidarity would be less forthcoming for violations that MAY occur. The authors are not the first to state that prevention is better than cure, but the current problem is not so much a lack of predictive knowledge as a weakness of curative intervention. Still, the article is worth reading as it describes developments that are likely to come about anyway.

For HRDs, digital surveillance can mark the difference between life and death, says Mary Lawlor

September 22, 2013

This blog has tried to pay regular attention to the crucial issue of electronic security and has referred to different proposals that aim to redress the situation in favour of human rights defenders. In a column of Friday 20 September, the Director of Front Line, Mary Lawlor, writes about the digital security programme “Security in a Box” which her organisation and the Tactical Technology Collective started some years ago. For Sunday reading, here is the whole text:

Mary Lawlor

ARE YOU AWARE that the recording device on your smartphone can be activated remotely and record sensitive conversations? And that the webcam on your PC can film inside your office without you knowing?

For most people, debates about the snooping NSA and GCHQ are little more than great material for a chat down the pub, but for human rights defenders around the world, digital security is synonymous with personal security. For a gay rights campaigner in Honduras or a trade unionist in Colombia, safety from interception of communications or seizure of data can be the difference between freedom or imprisonment, life or death.

Digital surveillance has been described as “connecting the boot to the brain of the repressive regime”. Governments are developing the capacity to manipulate, monitor and subvert electronic information. Surveillance and censorship are growing, and the lack of security for digitally stored or communicated information is becoming a major problem for human rights defenders in many countries.

By hacking into the computer system of a human rights organisation, governments or hostile hackers can access sensitive information, including the details of the organisation’s members and supporters. They can also install spyware or viruses to monitor or disrupt the work of the organisation.

Dangerous in the wrong hands

One of the best-documented cyber attacks on an NGO was the hacking of the Political Prisoners’ Solidarity Committee, a Colombian human rights organisation. The organisation’s email account was hacked and used to send malicious viruses and spam messages, and all employee work email accounts were deleted.

The hacked email account was also used to send threatening emails to a member of the organisation based in a different region. Their offices were broken into and the hard disk of one computer was stolen and replaced with a faulty one. Spyware was found on the computer used to maintain the organisation’s website; this recorded all the information on the computer and sent it via the internet to an unknown location. This cyber attack also coincided with a wave of anonymous phone calls and direct threats to staff members.

In this digital age how can human rights defenders make sure their online communications and their data are safe and that they are not putting themselves or colleagues in danger?

This is where Front Line Defenders is able to give practical help. With a security grant from Front Line Defenders, the Political Prisoners’ Solidarity Committee installed a new secured server and router, and upgraded their whole computer security system. We also organised a workshop on digital security for all the members of their organisation.

This was useful for a seriously at-risk organisation. But there are effective steps all of us can take to stay safe. Most of us have a computer or laptop, and most have a password. That password is probably a cat’s name or a daughter’s name, which can be broken in about 10 seconds. Simply changing your password to a longer one that combines upper case, lower case and digits makes it enormously harder to break, and is a simple first step to improve your online security.
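To see why length and character variety matter, here is a rough back-of-the-envelope sketch in Python. It is purely illustrative and assumes each character is chosen at random, which real names and words are not; dictionary-based passwords fall far faster.

import math

def search_space_bits(length: int, alphabet_size: int) -> float:
    # Bits of entropy for a randomly chosen password.
    return length * math.log2(alphabet_size)

# An 8-letter lowercase password versus a 14-character password
# mixing upper case, lower case and digits (26 + 26 + 10 = 62 symbols).
print(round(search_space_bits(8, 26), 1))   # 37.6 bits: feasible to brute-force
print(round(search_space_bits(14, 62), 1))  # 83.4 bits: far beyond practical attack

Every extra bit doubles the work an attacker must do, which is why a few additional characters drawn from a larger alphabet make such a difference.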

“Back doors”

Recent revelations have shown that even encrypted communications that were previously thought to be secure have been built with deliberately included “back doors”, so that organisations like the NSA and GCHQ can access information that people think is secret. One protection against these built-in weaknesses is to use open-source software – this is software not provided by a big-name company like Microsoft or Apple, but one in which the workings of the software are made available for all to see, so that any such intended weakness in the encryption would be spotted and exposed by the global community of digital security experts.

Even if authorities or malicious hackers can’t see what you’re communicating, it can still be possible for them to see when you communicate and with whom. The Tactical Technology Collective has said, “If you use a computer, surf the internet, text your friends via a mobile phone or shop online – you leave a digital shadow.” If you want to find out the size of your digital shadow, and more importantly what you can do about it, visit their award-winning website myshadow.org (now: https://privacy.net/analyzer/).

Security in-a-box, available online, is a collaborative effort of the Tactical Technology Collective and Front Line Defenders. It was created to meet the digital security and privacy needs of advocates and human rights defenders, but can also be used by members of the public. Security in-a-box includes a how-to booklet which addresses a number of important digital security issues.

It also provides a collection of Hands-on Guides, each of which includes a particular freeware or open source software tool, as well as instructions on how you can use that tool to secure your computer, protect your information or maintain the privacy of your internet communication.

A clear understanding of the risks

When we started our Digital Security Programme we only ran one or two trainings per year. Now we are organising workshops on digital security all over the world, sometimes in secret locations for human rights defenders from countries where even to use the word “encryption” in an email would bring you under the eagle eye of the security services.

Electronic communication enables human rights defenders to network and cooperate as never before but survival depends on having a clear understanding of the risks involved and the need for a well thought-out digital security strategy.

Column: For some people, digital surveillance can mark the difference between life and death.

Protection International opens enrolment for new e-learning course on Security for HRDs

September 12, 2013

On 7 October 2013, a new course on “Security and protection management for HRD and social organisations” begins on the e-learning platform of Protection International (http://www.e-learning.protectioninternational.org/course/info.php?id=21).

•       Enrolment until 24 September 2013.

Development of Amnesty’s Panic Button App

September 11, 2013

Having referred last week to three different (and competing?) techno initiatives to increase the security of HRDs, I would be remiss not to note the post of 11 September 2013 by Tanya O’Carroll on the Amnesty International blog concerning the development of the Panic Button. Over the next couple of months, she will be keeping you posted about the Panic Button. If you want to join the community of people working on Panic Button, please leave a comment on the site mentioned below or email panicbutton@amnesty.org.

via Inside the development of Amnesty’s new Panic Button App | Amnesty’s global human rights blog.