Posts Tagged ‘technology’

Two young human rights defenders, Raphael Mimoun and Nikole Yanez, on tech for human rights

May 16, 2024

Each year, through its Rise 25 Awards, Mozilla highlights the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally. On 13 May 2024 it was the turn of Raphael Mimoun, a builder dedicated to making tools that empower journalists and human rights defenders. Aron Yohannes talked with Raphael about the launch of his app, Tella, combatting misinformation online, the future of social media platforms, and more.

Raphael Mimoun: So I never worked in tech per se and only developed a passion for technology as I was working in human rights. It was really a time when, basically, the power of technology to support movements and to lead movements around the world was kind of getting fully understood. You had the Arab Spring, you had Occupy Wall Street, you had all of these movements for social justice, for democracy, for human rights, that were very much kind of spread through technology, right? Technology played a very, very important role. But just after that, it was kind of like a hangover where we all realized, “OK, it’s not just all good and fine.” You also have the flip side, which is governments spying on their citizens, identifying citizens through social media, through hacking, and so on and so forth — harassing them, repressing them online, but also translating that into offline violence, repression, and so on. And so I think that was the moment where I was like, “OK, there is something that needs to be done around technology,” specifically for those people who are on the front lines, because if we just treat it as a tool — one of those neutral tools — we end up very vulnerable to violence, and it can be from the state, it can also be from online mobs, armed groups, all sorts of things.

There’s so much misinformation out there now that it’s much harder to tell the difference between real news and fake news. Twitter was such a reliable tool of information before, but that’s changed. Do you think any of these other platforms can help make up for so much of the misinformation that is out there?

I think we all feel the weight of losing Twitter. Twitter was always a large corporation, partially owned by a billionaire. It was never really a community tool, but there was still an ethos, right? Like a philosophy: the values of the platform were still very much community-oriented, right? It was that place for activists and human rights defenders and journalists and communities in general to voice their opinions. So I think that loss was very hard on all of us.

I see a lot of misinformation on Instagram as well. There is very little moderation there. It’s also all visual, so if you want traction, you’re going to try to post something spectacular, something very eye-catching, and I think that leads to even more misinformation.

I am pretty optimistic about some of the alternatives that have popped up since Twitter’s downfall. Mastodon actually blew up after Twitter, but it’s much older — I think it’s 10 years old by now. And there’s Bluesky. So I think those two are building up, and they offer spaces that are much more decentralized, with much more autonomy and agency for users. You are more likely to be able to customize your feeds. You are more likely to have tools for your own safety online, right? All of those different things that I feel like you could never get on Threads, on Instagram or on Twitter, or anything like that. I’m hoping it’s actually going to be able to recreate the community that Twitter very much was. It’s never going to be exactly the same thing, but I’m hoping we will get there. And I think the fact that it is decentralized, open source and with very much a philosophy of agency and autonomy is going to lead us to a place where these social networks can’t actually be taken over by a power-hungry billionaire.

What do you think is the biggest challenge that we face in the world this year on and offline, and then how do you think we can combat it?

I don’t know if it’s the biggest challenge, but one of the really big challenges that we’re seeing is how the digital is meeting real life, and how people who are active online, on the phone or on the computer, are getting repressed for that work in real life. So we developed an app called Tella, which encrypts and hides files on your phone, right? So you take a photo or a video of a demonstration or police violence, or whatever it is, and then if the police try to catch you and grab your phone to delete it, they won’t be able to find it, or at least it will be much more difficult to find. Or it will already have been uploaded. I don’t know if it’s the biggest challenge online at the moment, but one of the big things we’re seeing is that it’s becoming completely normalized to grab someone’s phone or check someone’s computer at the airport, at the border or in the street, and go through it without any form of accountability. People have no idea what the regulations are, what the rules are, what’s allowed, what’s not allowed. And when those powers are abused, is there any recourse? In most places in the world, at least where we are working, there is definitely no recourse. And so there is that connection between thinking you’re just taking a photo for social media and the repercussions being so real: someone is going to take your phone, and maybe they’re going to delete the photo, or maybe they’re going to detain you, or maybe they’re going to beat you up. All of those different things. I think this is one of the big challenges that we’re seeing at the moment, and something that isn’t traditionally thought of as an internet issue or an online digital rights issue, because it’s someone taking a physical device and looking through it. It often gets overlooked, and then we don’t have much advocacy around it, or anything like that.
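To make the “encrypt and hide” idea concrete, here is a minimal sketch of the general pattern behind apps like Tella — not Tella’s actual implementation — assuming the third-party Python `cryptography` package and hypothetical file paths:

```python
# Minimal sketch of the encrypt-and-hide pattern (NOT Tella's real code).
# Assumes: pip install cryptography; all paths below are hypothetical.
from pathlib import Path

from cryptography.fernet import Fernet


def encrypt_and_hide(photo_path: str, vault_dir: str, key: bytes) -> Path:
    """Encrypt a captured file and stash the ciphertext under a neutral name."""
    plaintext = Path(photo_path).read_bytes()
    ciphertext = Fernet(key).encrypt(plaintext)

    vault = Path(vault_dir)
    vault.mkdir(parents=True, exist_ok=True)
    hidden = vault / ".cache_0001.bin"  # innocuous name, no photo extension
    hidden.write_bytes(ciphertext)

    Path(photo_path).unlink()  # remove the readable original
    return hidden


# The key would live behind the app's passcode; Fernet.generate_key() makes one.
# encrypt_and_hide("demo.jpg", "/sdcard/.thumbs", Fernet.generate_key())
```

The same key is needed to decrypt, so a real app couples this with secure key storage and, as Mimoun notes, with automatic upload so that deleting the device copy achieves nothing.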

What do you think is one action everybody can take to make the world and our lives online a little bit better?

I think social media has a lot of negative consequences for everyone’s mental health and many other things, but for people who are active and who want to be active, consider social networks that are open source, privacy-friendly and decentralized. Bluesky and the Fediverse — including Mastodon — are examples, because I think it’s our responsibility to build up a community there, so we can move away from those social media platforms that are owned by either billionaires or massive corporations, who only want to extract value from us and who spy on us and who censor us. And I feel like if everyone committed to being active on those social media platforms — one way of doing that is just having an account, and whatever you post on one, you just post on the other — that’s one thing that can make a big difference in the long run.

We started Rise25 to celebrate Mozilla’s 25th anniversary. What do you hope that people are celebrating in the next 25 years?

I was talking a little bit earlier about how we are building a culture that is more privacy-centric: people are becoming aware, becoming wary about all these things happening to their data, their identity, and so on. And I do think we are at a turning point in terms of the technology that’s available to us, the practices, and what we need as users to maintain our privacy and our security. I feel like honestly in not even 25 years; I think in 10 years, if things go well (which is hard to know in this field), and if we keep on building what we are already building, I can see how we will have an internet that is a lot more privacy-centric, where communications are private by default. Where end-to-end encryption is ubiquitous in our communication, in our emailing. Where social media isn’t extractive and people have actual ownership and agency in the social networks they use. Where data mining is no longer a thing. I feel like overall, I can see how the infrastructure is now getting built, and that in 10, 15 or 25 years, we will be in a place where we can use the internet without having to constantly watch over our shoulder to see if someone is spying on us, or who has access, and all of those things.

Lastly, what gives you hope about the future of our world?

That people are not getting complacent, and that it is always people who are standing up to fight back. We’re seeing it everywhere. We saw it at Google with people standing up as part of the No Tech for Apartheid coalition and people losing their jobs. We’re seeing it on university campuses around the country. We’re seeing it on the streets. People fight back. That’s where any change has ever come from: the bottom up. I think now, more than ever, people are willing to put something on the line to make sure that they defend their rights. So I think that really gives me hope.

—————

The second story comes from Amnesty International, 14 May 2024 [https://www.amnesty.org/en/latest/campaigns/2024/05/i-come-from-the-world-of-technology-where-there-are-very-few-women/]

Nikole Yanez is a computer scientist by training, and a human rights defender from Honduras. She is passionate about feminism, the impact of the internet and protecting activists. She was first drawn to human rights through her work as a reporter with a local community radio station. After surviving the coup d’état in Honduras in 2009, Nikole broadened her approach to focus her activism on technology. When she applied for the Digital Forensics Fellowship with the Amnesty Tech Security Lab in 2022, she was looking to learn more about cybersecurity and apply what she learnt with the organizations and collectives she works with regularly.  

She highlighted her commitment to fostering a network of tech-savvy communities across Latin America in an interview with Elina Castillo, Amnesty Tech’s Advocacy and Policy Advisor:

I grew up in Honduras, where I lived through the coup d’état, which took place in 2009. It was a difficult time where rights were non-existent, and people were constantly afraid. I thought it was something you only read about in history books, but it was happening in front of my eyes. I felt myself just trying to survive, but as time went by it made me stronger and want to fight for justice. Despite the difficulties, people in my community remained hopeful and we created a community radio station, which broadcast stories about everyday people and their lives with the aim of informing people about their human rights. I was a reporter, developing stories about individual people and their fight for their rights. From there, I found a passion for working with technology and it inspired me to train to become a computer scientist.

I am always looking for ways to connect technology with activism, and specifically to support women and Indigenous people in their struggles. As much as technology presents risks for human rights defenders, it also offers opportunities for us to better protect ourselves and strengthen our movements. Technology can bring more visibility to our movements, and it can empower our work by allowing us to connect with other people and learn new strategies.

Is there one moment where you realized how to connect what you’ve been doing in feminism with technology?

In my work, my perspective as a feminist helps me centre the experiences and needs of marginalised people for trainings and outreach. It is important for me to publicly identify as an Afrofeminist in a society where there is impunity for gendered and racist violence that occurs every day. In Honduras we need to put our energy into supporting these communities whose rights are most violated, and whose stories are invisible.

For example, in 2006, I was working with a Union to install the Ubuntu operating system (an open-source operating system) on their computers. We realized that the unionists didn’t know how to use a computer, so we created a space for digital literacy and learning about how to use a computer at the same time. This became not just a teaching exercise, but an exercise for me to figure out how to connect these tools to what people are interested in. Something clicked for me in this moment, and this experience helped solidify my approach to working on technology and human rights.

There are not many women working in technology and human rights. I don’t want to be one of the only women, so my goal is to see more women colleagues working on technical issues. I want to make it possible for women to work in this field. I also want to motivate more women to create change within the intersection of technology and human rights. Using a feminist perspective and approach, we ask big questions about how we are doing the work, what our approach needs to be, and who we need to work with.

For me, building a feminist internet means building an internet for everyone. This means creating a space where we do not reproduce sexist violence, where we find a community that responds to the people, to the groups, and to the organizations that fight for human rights. This includes involving women and marginalised people in building the infrastructure, in the configuration of servers, and in the development of protocols for how we use all these tools.

In Honduras, there aren’t many people trained in digital forensics analysis, yet there are organizations that are always seeking me out to help check their phones. The fellowship helped me learn about forensic analysis on phones and computers and tied the learning to what I’m actually doing in my area with different organizations and women’s rights defenders. The fellowship was practical and rooted in the experience of civil society organizations.

Nikole Yanez running a technology and human rights session in Honduras

How do you explain the importance of digital forensics? Well, first, it’s incredibly relevant for women’s rights defenders. Everyone wants to know if their phone has been hacked. That’s the first thing they ask: “Can you actually know whether your phone has been hacked?” and “How do I know? Can you do it for me? How?” Those are the things that come up in my trainings and conversations.

I like to help people to think about protection as a process, something ongoing, because we use technology all day long. There are organizations and people that take years to understand that. So, it’s not something that can be achieved in a single conversation. Sometimes a lot of things need to happen, including bad things, before people really take this topic seriously…

I try to use very basic tools when I’m doing digital security support, to say you can do this on whatever device you’re on, this is a prevention tool. It’s not just applying technical knowledge, it’s also a process of explaining, training, showing how this work is not just for hackers or people who know a lot about computers.

One of the challenges is to spread awareness about cybersecurity among Indigenous and grassroots organizations, which aren’t hyper-connected and don’t think that digital forensics work is relevant to them. Sometimes what we do is completely disconnected from their lives, and they ask us: “But what are you doing?” So, our job is to understand their questions and where they are coming from and ground our knowledge-sharing in what people are actually doing.

To someone reading this piece and saying, oh, this kind of resonates with me, where do I start, what would your recommendation be?

If you are a human rights defender, I would recommend that you share your knowledge with your collective. You can teach them the importance of knowing about digital security practices and putting them into use, as well as encouraging training to prevent digital attacks, because, in the end, forensic analysis is a reaction to something that has already happened.

We can take a lot of preventive measures to ensure the smallest possible impact. That’s the best way to start. And it’s crucial to stay informed, to keep reading, to stay up to date with the news and build community.

If there are girls or gender non-conforming people reading this who are interested in technical issues, it doesn’t matter if you don’t have a degree or a formal education, as long as you like it. Most hackers I’ve met became hackers because they dive into a subject, they like it and they’re passionate about it.

See also: https://www.amnesty.org/en/what-we-do/technology/online-violence/

blog.mozilla.org/en/internet-culture/raphael-mimoun-mozilla-rise-25-human-rights-justice-journalists/

Amnesty’s annual State of the World’s Human Rights report 2023 is out

April 25, 2024
  • Powerful governments cast humanity into an era devoid of effective international rule of law, with civilians in conflicts paying the highest price
  • Rapidly changing artificial intelligence is left to create fertile ground for racism, discrimination and division in landmark year for public elections
  • Standing against these abuses, people the world over mobilized in unprecedented numbers, demanding human rights protection and respect for our common humanity

The world is reaping a harvest of terrifying consequences from escalating conflict and the near breakdown of international law, said Amnesty International as it launched its annual report, The State of the World’s Human Rights, delivering an assessment of human rights in 155 countries.

Amnesty International also warned that the breakdown of the rule of law is likely to accelerate with rapid advancement in artificial intelligence (AI) which, coupled with the dominance of Big Tech, risks a “supercharging” of human rights violations if regulation continues to lag behind advances.

“Amnesty International’s report paints a dismal picture of alarming human rights repression and prolific international rule-breaking, all in the midst of deepening global inequality, superpowers vying for supremacy and an escalating climate crisis,” said Amnesty International’s Secretary General, Agnès Callamard.

“Israel’s flagrant disregard for international law is compounded by the failures of its allies to stop the indescribable civilian bloodshed meted out in Gaza. Many of those allies were the very architects of that post-World War Two system of law. Alongside Russia’s ongoing aggression against Ukraine, the growing number of armed conflicts, and massive human rights violations witnessed, for example, in Sudan, Ethiopia and Myanmar – the global rule-based order is at risk of decimation.”

Lawlessness, discrimination and impunity in conflicts and elsewhere have been enabled by unchecked use of new and familiar technologies which are now routinely weaponized by military, political and corporate actors. Big Tech’s platforms have stoked conflict. Spyware and mass surveillance tools are used to encroach on fundamental rights and freedoms, while governments are deploying automated tools targeting the most marginalized groups in society.

“In an increasingly precarious world, unregulated proliferation and deployment of technologies such as generative AI, facial recognition and spyware are poised to be a pernicious foe – scaling up and supercharging violations of international law and human rights to exceptional levels,” said Agnès Callamard.

“During a landmark year of elections and in the face of the increasingly powerful anti-regulation lobby driven and financed by Big Tech actors, these rogue and unregulated technological advances pose an enormous threat to us all. They can be weaponized to discriminate, disinform and divide.”

Read more about Amnesty researchers’ biggest human rights concerns for 2023/24.


U.S. State Department and the EU release an approach for protecting human rights defenders from online attacks

March 13, 2024

On 12 March 2024, the U.S. and the European Union issued new joint guidance for online platforms to help mitigate virtual attacks targeting human rights defenders, reports Alexandra Kelley, staff correspondent at Nextgov/FCW.

Outlined in 10 steps, the guidance was formed following stakeholder consultations from January 2023 to February 2024. Stakeholders including nongovernmental organizations, trade unionists, journalists, lawyers, and environmental and land activists advised both governments on how to protect human rights defenders on the internet.

Recommendations within the guidance include: committing to an HRD (human rights defender) protection policy; identifying risks to HRDs; sharing information with peers and select stakeholders; creating policies to monitor performance against baseline metrics; resourcing staff adequately; building capacity to address local risks; offering education on safety tools; creating an incident reporting channel; providing access to help for HRDs; and incorporating a strong, transparent infrastructure.

Digital threats HRDs face include targeted internet shutdowns, censorship, malicious cyber activity, unlawful surveillance, and doxxing. Given the severity and reported increase of digital attacks against HRDs, the guidance calls upon online platforms to take mitigating measures.

“The United States and the European Union encourage online platforms to use these recommendations to determine and implement concrete steps to identify and mitigate risks to HRDs on or through their services or products,” the guidance reads.

The ten guiding points laid out in the document reflect existing transatlantic policy commitments, including the Declaration for the Future of the Internet. Like other digital guidance, however, these actions are voluntary. 

“These recommendations may be followed by further actions taken by the United States or the European Union to promote rights-respecting approaches by online platforms to address the needs of HRDs,” the document said.

https://www.nextgov.com/digital-government/2024/03/us-eu-recommend-protections-human-rights-defenders-online/394865

Alex, a Romanian activist, works at the intersection of human rights, technology and public policy.

January 24, 2024

On 22 January 2024, Amnesty International published an interesting piece by Alex, a 31-year-old Romanian activist working at the intersection of human rights, technology and public policy.

Seeking to use her experience and knowledge of tech for political change, Alex applied and was accepted onto the Digital Forensics Fellowship led by the Security Lab at Amnesty Tech. The Digital Forensics Fellowship (DFF) is an opportunity for human rights defenders (HRDs) working at the nexus of human rights and technology to expand their learning.

Here, Alex shares her activism journey and insight into how like-minded human rights defenders can join the fight against spyware:

In the summer of 2022, I watched a recording of Claudio Guarnieri, former Head of the Amnesty Tech Security Lab, presenting about Security Without Borders at the 2016 Chaos Communication Congress. After following the investigations of the Pegasus Project and other projects centring on spyware being used on journalists and human rights defenders, his call to action at the end — “Find a cause and assist others” — resonated with me long after I watched the talk.

Becoming a tech activist

A few days later, Amnesty Tech announced the launch of the Digital Forensics Fellowship (DFF). It was serendipity, and I didn’t question it. At that point, I had already pushed myself to seek out a more political, more involved way to share my knowledge. Not tech for the sake of tech, but tech activism to ensure political change.

Alex is a 31-year-old Romanian activist, working at the intersection of human rights, technology and public policy.

I followed an atypical path for a technologist. Prior to university, I dreamt of being a published fiction author, only to switch to studying industrial automation in college. I spent five years as a developer in the IT industry and two as Chief Technology Officer for an NGO, where I finally found myself using my tech knowledge to support journalists and activists.

My approach to technology, like my approach to art, is informed by political struggles, as well as the questioning of how one can lead a good life. My advocacy for digital rights follows this thread. For me, technology is merely one of many tools at the disposal of humanity, and it should never be a barrier to decent living, nor an oppressive tool for anyone.


The opportunity offered by the DFF matched my interests and the direction I wanted to take my activism. During the year-long training programme from 2022-2023, the things I learned turned out to be valuable for my advocacy work.

In 2022, the Child Sexual Abuse Regulation was proposed in the EU. I focused on conducting advocacy to make it as clear as possible that losing encrypted communication would make life decidedly worse for everyone in the EU. We ran a campaign to raise awareness of the importance of end-to-end encryption for journalists, activists and people in general. Our communication unfolded under the banner of “you don’t realize how precious encryption is until you’ve lost it”. Apti.ro, the Romanian non-profit organisation that I work with, also participated in the EU-wide campaign, as part of the EDRi coalition. To add fuel to the fire, spyware scandals erupted across the EU. My home country, Romania, borders countries where spyware has been proven to have been used to invade the personal lives of journalists, political opponents of the government and human rights defenders.

The meaning of being a Fellow

The Security Lab provided us with theoretical and practical sessions on digital forensics, while the cohort was a safe, vibrant space to discuss challenges we were facing. We debugged together and discussed awful surveillance technology at length, contributing our own local perspective.

The importance of building cross-border networks of cooperation and solidarity became clear to me during the DFF. I heard stories of struggles from people involved in large and small organizations alike. I am convinced our struggles are intertwined, and we should join forces whenever possible.

Now when I’m working with other activists, I try not to talk of “forensics”. Instead, I talk about keeping ourselves safe, and our conversations private. Often, discussions we have as activists are about caring for a particular part of our lives – our safety when protesting, our confidentiality when organizing, our privacy when convening online. Our devices and data are part of this process, as is our physical body. At the end of the day, digital forensics are just another form of caring for ourselves.

I try to shape discussions about people’s devices similarly to how doctors discuss the symptoms of an illness. The person whose device is at the centre of the discussion is the best judge of the symptoms, and it’s important to never minimize their apprehension. It’s also important to go through the steps of the forensics in a way that allows them to understand what is happening and what the purpose of the procedure is.

I never use a one-size-fits-all approach because the situation of the person who owns a device informs the ways it might be targeted or infected.

The human approach to technology

My work is human-centred and technology-focused and requires care and concentration to achieve meaningful results. For activists interested in working on digital forensics, start by digging deep into the threats you see in your local context. If numerous phishing campaigns are unfolding, dig into network forensics and map out the owners of the domains and the infrastructure.
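As a small, hypothetical illustration of that first step — it is not taken from Alex’s own toolkit — a WHOIS lookup written with nothing but the Python standard library can begin the mapping of who is behind a suspicious domain:

```python
# Hypothetical starting point for mapping domain ownership (stdlib only).
import socket


def whois(domain: str, server: str = "whois.iana.org") -> str:
    """WHOIS per RFC 3912: send the query plus CRLF, then read until EOF."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(f"{domain}\r\n".encode())
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks).decode(errors="replace")


# whois.iana.org answers with a "refer:" line naming the authoritative
# registry, which can then be queried for registrant and name-server details.
print(whois("example.org"))
```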

Secondly, get to know the person you are working with. If they are interested in secure communications, help them gain a better understanding of mobile network-based attacks, as well as suggesting instant messaging apps that preserve the privacy and the security of their users. In time, they will be able to spot “empty words” used to market messaging apps that are not end-to-end encrypted.

Finally, to stay true to the part of me that loves a well-told story, read not only reports of ongoing spyware campaigns, but also narrative explorations from the people involved. “Pegasus: The Story of the World’s Most Dangerous Spyware” by Laurent Richard and Sandrine Rigaud is a good example that documents both the human and the technical aspects. The Shoot the Messenger podcast, by PRX and Exile Content Studio, is also great: it focuses on Pegasus, from the brutal murder of Jamal Khashoggi to the recent infection of the device of Galina Timchenko, journalist and founder of Meduza.

We must continue to do this research, however difficult it may be, and to tell the stories of those impacted by these invasive espionage tactics. Without this work we wouldn’t be making the political progress we’ve seen to stem the development and use of this atrocious technology.

https://www.amnesty.org/en/search/Alex/

In the deepfake era, we need to hear the Human Rights Defenders

December 19, 2023

In a blog post for the Council on Foreign Relations of 18 December 2023, Raquel Vazquez Llorente argues that “artificial intelligence is increasingly used to alter and generate content online. As development of AI continues, societies and policymakers need to ensure that it incorporates fundamental human rights.” Raquel is the Head of Law and Policy, Technology Threats and Opportunities at WITNESS.

The urgency of integrating human rights into the DNA of emerging technologies has never been more pressing. Through my role at WITNESS, I’ve observed first-hand the profound impact of generative AI across societies, and most importantly, on those defending democracy at the frontlines.

The recent elections in Argentina were marked by the widespread use of AI in campaigning material. Generative AI has also been used to target candidates with embarrassing content (increasingly of a sexual nature), to generate political ads, and to support candidates’ campaigns and outreach activities in India, the United States, Poland, Zambia, and Bangladesh (to name a few). The overall result of the lack of strong frameworks for the use of synthetic media in political settings has been a climate of mistrust regarding what we see or hear.

Not all digital alteration is harmful, though. Part of my work involves identifying how emerging technologies can foster positive change. For instance, with appropriate disclosure, synthetic media could be used to enhance voter education and engagement. Generative AI could help create informative content about candidates and their platforms, or of wider election processes, in different languages and formats, improving inclusivity or reducing barriers for underdog or outsider candidates. For voters with disabilities, synthetic media could provide accessible formats of election materials, such as sign language avatars or audio descriptions of written content. Satirical deepfakes could engage people who might otherwise be disinterested in politics, bringing attention to issues that might not be covered in mainstream media. We need to celebrate and protect these uses.

As two billion people around the world go to voting stations next year in fifty countries, there is a crucial question: how can we build resilience into our democracy in an era of audiovisual manipulation? When AI can blur the lines between reality and fiction with increasing credibility and ease, discerning truth from falsehood becomes not just a technological battle, but a fight to uphold democracy.

From conversations with journalists, activists, technologists and other communities impacted by generative AI and deepfakes, I have learnt that the effects of synthetic media on democracy are a mix of new, old, and borrowed challenges.

Generative AI introduces a daunting new reality: inconvenient truths can be denied as deep faked, or at least facilitate claims of plausible deniability to evade accountability. The burden of proof, or perhaps more accurately, the “burden of truth” has shifted onto those circulating authentic content and holding the powerful to account. This is not just a crisis of identifying what is fake. It is also a crisis of protecting what is true. When anything and everything can be dismissed as AI-generated or manipulated, how do we elevate the real stories of those defending our democracy at the frontlines?

But AI’s impact doesn’t stop at new challenges; it exacerbates old inequalities. Those who are already marginalized and disenfranchised—due to their gender, ethnicity, race or belonging to a particular group—face amplified risks. AI is like a magnifying glass for exclusion, and its harms are cumulative. AI deepens existing vulnerabilities, posing a serious threat to the principles of inclusivity and fairness that lie at the heart of democratic values. Similarly, sexual deepfakes can have an additional chilling effect, discouraging women, LGBTQ+ people and individuals from minoritized communities from participating in public life, thus eroding the diversity and representativeness that are essential for a healthy democracy.

Lastly, much as with social media, where we failed to incorporate the voices of the global majority, we have borrowed previous mistakes. The shortcomings in moderating content, combating misinformation, and protecting user privacy have had profound implications on democracy and social discourse. Similarly, in the context of AI, we are yet to see meaningful policies and regulation that not only consult globally those that are being impacted by AI but, more importantly, center the solutions that affected communities beyond the United States and Europe prioritize. This highlights a crucial gap: the urgent need for a global perspective in AI governance, one that learns from the failures of social media in addressing cultural and political nuances across different societies.

As we navigate AI’s impact on democracy and human rights, our approach to these challenges should be multifaceted. We must draw on a blend of strategies—ones that address the immediate ‘new’ realities of AI, respond to the ‘old’ but persistent challenges of inequality, and incorporate ‘borrowed’ wisdom from our past experiences.

First, we must ensure that new AI regulations and companies’ policies are steeped in human rights law and principles, such as those enshrined in the Universal Declaration of Human Rights. In the coming years, one of the most important areas in socio-technical expertise will be the ability to translate human rights protections into AI policies and legislation.

While anchoring new policies in human rights is crucial, we should not lose sight of the historical context of these technological advancements. We must look back as we move forward. As with technological advancements of the past, we should remind ourselves that progress is not how far you go, but how many people you bring along. We should really ask, is it technological progress if it is not inclusive, if it reproduces a disadvantage? Technological advancement that leaves people behind is not true progress; it is an illusion of progress that perpetuates inequality and systems of oppression. This past weekend marked twenty-five years since the adoption of the UN Declaration on Human Rights Defenders, which recognizes the key role of human rights defenders in realizing the Universal Declaration of Human Rights and other legally binding treaties. In the current wave of excitement around generative AI, the voices of those protecting human rights at the frontlines have rarely been more vital.

Our journey towards a future shaped by AI is also about learning from the routes we have already travelled, especially those from the social media era. Synthetic media has to be understood in the context of the broader information ecosystem. We are monetizing the spread of falsehoods while keeping local content moderators and third-party fact-checkers on precarious salaries, and putting the blame on platform users for not being educated enough to spot the fakery. The only way to align democratic values with technology goals is by both placing responsibility and establishing accountability across the whole information and AI ecosystem, from foundation model researchers, to those commercializing AI tools, to those creating and distributing content.

In weaving together these new, old, and borrowed strands of thought, we create a powerful blueprint for steering the course of AI. This is not just about countering a wave of digital manipulation—it is about championing technology advancement that amplifies our democratic values, deepens our global engagement, and preserves the core of our common humanity in an increasingly AI-powered and image-driven world. By centering people’s rights in AI development, we not only protect our individual freedoms, but also fortify our shared democratic future.

https://www.cfr.org/blog/protect-democracy-deepfake-era-we-need-bring-voices-those-defending-it-frontlines

NGOs express fear that new EU ‘terrorist content’ draft will make things worse for human rights defenders

January 31, 2019

On Wednesday 30 January 2019, Mike Masnick published a piece in TechDirt entitled “Human Rights Groups Plead With The EU Not To Pass Its Awful ‘Terrorist Content’ Regulation”. The key argument is that machine-learning algorithms are not able to distinguish between terrorist propaganda and investigations of, say, war crimes. It points out, as an example, that Germany’s anti-“hate speech” law has already been misused by authoritarian regimes. Read the rest of this entry »

Today official launch of AI’s Panic Button – a new App to fight attack, kidnap and torture

June 23, 2014


Today, 23 June 2014, Amnesty International launches its open source ‘Panic Button’ app to help human rights defenders facing imminent danger. The aim is to increase protection for those who face the threat of arrest, attack, kidnap and torture. In short:

Read the rest of this entry »

Can ‘big data’ help protect human rights?

January 5, 2014

Samir Goswami, managing director of AI USA’s Individuals and Communities at Risk Program, and Mark Cooke, chief innovation officer at Tax Management Associates, wrote a piece about how ‘big data’ can help human rights rather than just violate them. The piece is worth reading but falls short of being convincing. The better prediction of human rights violations which may [!] result from the analysis of huge amounts of data would of course be welcome, but I remain unconvinced that it would therefore lead to a reduction in actual violations. Too many of these are planned and willful, while the mobilization of shame and international solidarity would be less forthcoming for violations that MAY occur. The authors are not the first to state that prevention is better than cure, but the current problem is not so much a lack of predictive knowledge as a weakness of curative intervention. Still, the article is worth reading as it describes developments that are likely to come about anyway. Read the rest of this entry »

For HRDs digital surveillance can mark the difference between life and death, says Mary Lawlor

September 22, 2013

This blog has tried to pay regular attention to the crucial issue of electronic security and has referred to the different proposals that aim to redress the situation in favour of human rights defenders. In a column of Friday 20 September, the Director of Front Line, Mary Lawlor, writes about the digital security programme “Security in a Box”, which her organisation and the Tactical Technology Collective started some years ago. For Sunday reading, here is the whole text:

Mary Lawlor

ARE YOU AWARE that the recording device on your smartphone can be activated remotely and record sensitive conversations? And that the webcam on your PC can film inside your office without you knowing?

For most people, debates about the snooping NSA and GCHQ are little more than great material for a chat down the pub, but for human rights defenders around the world, digital security is synonymous with personal security. For a gay rights campaigner in Honduras or a trade unionist in Colombia, safety from interception of communications or seizure of data can be the difference between freedom or imprisonment, life or death.

Digital surveillance has been described as “connecting the boot to the brain of the repressive regime”. Governments are developing the capacity to manipulate, monitor and subvert electronic information. Surveillance and censorship are growing, and the lack of security for digitally stored or communicated information is becoming a major problem for human rights defenders in many countries.

By hacking into the computer system of a human rights organisation, governments or hostile hackers can access sensitive information, including the details of the organisation’s members and supporters. They can also install spyware or viruses to monitor or disrupt the work of the organisation.

Dangerous in the wrong hands

One of the best-documented cyber attacks on an NGO was the hacking of the Political Prisoner’s Solidarity Committee, a Colombian human rights organisation. The organisation’s email account was hacked and used to send malicious viruses and spam messages, and all employee work email accounts were deleted.

The hacked email account was also used to send threatening emails to a member of the organisation based in a different region. Their offices were broken into and the hard disk of one computer was stolen and replaced with a faulty one. Spyware was found on the computer used to maintain the organisation’s website; this recorded all the information on the computer and sent it via the internet to an unknown location. This cyber attack also coincided with a wave of anonymous phone calls and direct threats to staff members.

In this digital age how can human rights defenders make sure their online communications and their data are safe and that they are not putting themselves or colleagues in danger?

This is where Front Line Defenders is able to give practical help. With a security grant from Front Line Defenders, the Political Prisoner’s Solidarity Committee installed a new secured server and router, and upgraded their whole computer security system. We also organised a workshop on digital security for all the members of their organisation.

This was useful for a seriously at-risk organisation. But there are effective steps all of us can take to stay safe. Most of us have a computer or laptop and most have a password. That password is probably a cat’s name or a daughter’s name – which can be broken in about 10 seconds. Simply changing your password to a longer one which combines upper case, lower case and digits makes the password virtually unbreakable, and is a simple first step to improve your online security.
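A rough back-of-the-envelope sketch (mine, not Front Line’s) shows why that advice works: the search space an attacker must brute-force grows exponentially with password length and alphabet size. Python’s standard library is enough:

```python
# Worst-case brute-force search space: alphabet_size ** length.
import secrets
import string


def search_space(alphabet_size: int, length: int) -> int:
    return alphabet_size ** length


print(f"6 lowercase letters:    {search_space(26, 6):.2e} guesses")   # ~3.1e+08
print(f"12 mixed-case + digits: {search_space(62, 12):.2e} guesses")  # ~3.2e+21

# Generating such a password with the cryptographically secure secrets module:
alphabet = string.ascii_letters + string.digits
print("suggestion:", "".join(secrets.choice(alphabet) for _ in range(12)))
```

At a billion guesses per second, the first falls in a fraction of a second; the second would take on the order of a hundred thousand years.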

“Back doors”

Recent revelations have shown that even encrypted communications that were previously thought to be secure have been built with deliberately included “back doors”, so that organisations like the NSA and GCHQ can access information that people think is secret. One protection against these built-in weaknesses is to use open-source software – this is software not provided by a big-name company like Microsoft or Apple, but one in which the workings of the software are made available for all to see, so that any such intended weakness in the encryption would be spotted and exposed by the global community of digital security experts.

Even if authorities or malicious hackers can’t see what you’re communicating, it can still be possible for them to see when you communicate and with whom. The Tactical Technology Collective has said, “If you use a computer, surf the internet, text your friends via a mobile phone or shop online – you leave a digital shadow.” If you want to find out the size of your digital shadow, and more importantly want to know what you can do about it, visit their award-winning website myshadow.org (now: https://privacy.net/analyzer/).

Security in-a-box (available online) is a collaborative effort of the Tactical Technology Collective and Front Line Defenders. It was created to meet the digital security and privacy needs of advocates and human rights defenders, but can also be used by members of the public. Security in-a-box includes a how-to booklet which addresses a number of important digital security issues.

It also provides a collection of Hands-on Guides, each of which includes a particular freeware or open source software tool, as well as instructions on how you can use that tool to secure your computer, protect your information or maintain the privacy of your internet communication.

A clear understanding of the risks

When we started our Digital Security Programme we only ran one or two trainings per year. Now we are organising workshops on digital security all over the world, sometimes in secret locations for human rights defenders from countries where even to use the word “encryption” in an email would bring you under the eagle eye of the security services.

Electronic communication enables human rights defenders to network and cooperate as never before but survival depends on having a clear understanding of the risks involved and the need for a well thought-out digital security strategy.

Column: For some people, digital surveillance can mark the difference between life and death.

Protection International opens enrolment for new e-learning course on Security for HRDs

September 12, 2013

On 7 October 2013, a new course on “Security and protection management for HRD and social organisations” begins on the e-learning platform of Protection International (http://www.e-learning.protectioninternational.org/course/info.php?id=21).

• Enrolment until 24 September 2013. Read the rest of this entry »