The video of the gruesome murder of George Floyd ignited protests around the world in solidarity against racism and against white supremacy supported by governments and enforced by police. But we know that for every video of police violence, there are many deaths that were never recorded and that still deserve our attention and support.
Founded 28 years ago on the power of video to draw attention to human rights violations, after the filming of the police beating of Rodney King and the uprising that followed, WITNESS continues to train and guide people to use their cell phone cameras to record incidents of human rights abuse and to share that footage with the media and the justice system so that wrongdoers can be prosecuted.
Today, the systems and patterns of police abuse are as rampant as ever. What has changed is our collective ability to document these moments.
We help people document state violence, push for accountability, and implement structural change. In the past few weeks, we’ve seen a spike in demand for our guidance on how to shoot and share footage of police violence safely, ethically, and effectively. Our tips continue to inform ethical and strategic filming of police misconduct and protests. Video is a tool to show violence. But more importantly, it’s a tool to show patterns. It forces the broader public to pay attention, and authorities to change. We have seen commitments from local and state leaders, and we encourage more people around the world to break down military and police power. And to film it.
Ambika Samarthya-Howard
Head of Communications, WITNESS
Tonight, 21 May, at 6:00 p.m. ET, the NGO WITNESS holds its virtual Un-Gala, a celebration in defense of truth and human rights, live on Facebook and YouTube.
You’ll be able to hear activist, author and educator Angela Davis in conversation with media activist and WITNESS’ Senior Program Coordinator Palika Makam, listen to a new piece on COVID-19 created by WITNESS’ founder Peter Gabriel, and enjoy the incredible voice of artist Kimberly Nichole.
The Un-Gala is free of charge but donations to WITNESS are most welcome.
TIME of 11 April 2020 carries a long article by Billy Perrigo entitled “These Tech Companies Managed to Eradicate ISIS Content. But They’re Also Erasing Crucial Evidence of War Crimes.” It is a very interesting piece that clearly spells out the dilemma of suppressing too much or too little on Facebook, YouTube, etc. Algorithms designed to suppress ISIS content are having unexpected side effects, such as suppressing evidence of human rights violations.
…..Images posted by citizen journalist Abo Liath Aljazarawy to his Facebook page (Eye on Alhasakah) showed the ground reality of the Syrian civil war. His page was banned. Facebook confirmed to TIME that Eye on Alhasakah was flagged in late 2019 by its algorithms, as well as by users, for sharing “extremist content.” It was then funneled to a human moderator, who decided to remove it. After being notified by TIME, Facebook restored the page in early February, some 12 weeks later, saying the moderator had made a mistake. (Facebook declined to say which specific videos were wrongly flagged, except that there were several.)
The algorithms were developed largely in reaction to ISIS, who shocked the world in 2014 when they began to share slickly-produced online videos of executions and battles as propaganda. Because of the very real way these videos radicalized viewers, the U.S.-led coalition in Iraq and Syria worked overtime to suppress them, and enlisted social networks to help. Quickly, the companies discovered that there was too much content for even a huge team of humans to deal with. (More than 500 hours of video are uploaded to YouTube every minute.) So, since 2017, the companies have been using algorithms to automatically detect extremist content. Early on, those algorithms were crude, and only supplemented the human moderators’ work. But now, following three years of training, they are responsible for an overwhelming proportion of detections. Facebook now says more than 98% of content removed for violating its rules on extremism is flagged automatically. On YouTube, across the board, more than 20 million videos were taken down before receiving a single view in 2019. And as the coronavirus spread across the globe in early 2020, Facebook, YouTube and Twitter announced their algorithms would take on an even larger share of content moderation, with human moderators barred from taking sensitive material home with them.
But algorithms are notoriously worse than humans at understanding one crucial thing: context. Now, as Facebook and YouTube have come to rely on them more and more, even innocent photos and videos, especially from war zones, are being swept up and removed. Such content can serve a vital purpose for both civilians on the ground — for whom it provides vital real-time information — and human rights monitors far away. In 2017, for the first time ever, the International Criminal Court in the Netherlands issued a war-crimes indictment based on videos from Libya posted on social media. And as violence-detection algorithms have developed, conflict monitors are noticing an unexpected side effect, too: these algorithms could be removing evidence of war crimes from the Internet before anyone even knows it exists.
….. It was an example of how even one mistaken takedown can make the work of human rights defenders more difficult. Yet this is happening on a wider scale: of the 1.7 million YouTube videos preserved by Syrian Archive, a Berlin-based non-profit that downloads evidence of human rights violations, 16% have been removed. A huge chunk were taken down in 2017, just as YouTube began using algorithms to flag violent and extremist content. And useful content is still being removed on a regular basis. “We’re still seeing that this is a problem,” says Jeff Deutsch, the lead researcher at Syrian Archive. “We’re not saying that all this content has to remain public forever. But it’s important that this content is archived, so it’s accessible to researchers, to human rights groups, to academics, to lawyers, for use in some kind of legal accountability.” (YouTube says it is working with Syrian Archive to improve how they identify and preserve footage that could be useful for human rights groups.)
…..
Facebook and YouTube’s detection systems work by using a technology called machine learning, by which colossal amounts of data (in this case, extremist images, videos, and their metadata) are fed to an artificial intelligence adept at spotting patterns. Early types of machine learning could be trained to identify images containing a house, or a car, or a human face. But since 2017, Facebook and YouTube have been feeding these algorithms content that moderators have flagged as extremist — training them to automatically identify beheadings, propaganda videos and other unsavory content.
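To make that workflow concrete, here is a minimal, hypothetical sketch of the pattern the article describes: a supervised classifier trained on uploads that human moderators have already labeled, then used to score new uploads automatically. This is not Facebook’s or YouTube’s actual system; the features, labels and threshold below are invented for illustration.

```python
# Minimal sketch of moderator-labeled training data feeding an auto-flagging
# classifier. All data here is synthetic; nothing reflects the real platforms.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in feature vectors for past uploads (e.g. frame embeddings);
# label 1 = removed by a moderator as extremist, 0 = left up.
X = rng.normal(size=(1000, 128))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# New uploads are scored automatically; only borderline scores would be
# routed to a human reviewer in the workflow the article describes.
scores = model.predict_proba(X_test)[:, 1]
auto_flagged = scores > 0.9  # illustrative confidence threshold
print(f"auto-flagged {auto_flagged.sum()} of {len(scores)} test uploads")
```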
Both Facebook and YouTube are notoriously secretive about what kind of content they’re using to train the algorithms responsible for much of this deletion. That means there’s no way for outside observers to know whether innocent content — like Eye on Alhasakah’s — has already been fed in as training data, which would compromise the algorithm’s decision-making. In the case of Eye on Alhasakah’s takedown, “Facebook said, ‘oops, we made a mistake,’” says Dia Kayyali, the Tech and Advocacy coordinator at Witness, a human rights group focused on helping people record digital evidence of abuses. “But what if they had used the page as training data? Then that mistake has been exponentially spread throughout their system, because it’s going to train the algorithm more, and then more of that similar content that was mistakenly taken down is going to get taken down. I think that is exactly what’s happening now.” Facebook and YouTube, however, both deny this is possible. Facebook says it regularly retrains its algorithms to avoid this happening. In a statement, YouTube said: “decisions made by human reviewers help to improve the accuracy of our automated flagging systems.”
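The feedback loop Kayyali describes can be shown with a toy calculation: if some share of wrongly removed posts is fed back into training as “extremist” examples, the error compounds with each retrain. The numbers below are purely hypothetical and only illustrate the shape of the dynamic, not real platform figures.

```python
# Toy model of error amplification from training on mistaken takedowns.
false_positive_rate = 0.02   # assumed starting share of benign posts flagged
amplification = 0.25         # assumed share of mistakes that re-enter training

for retrain_round in range(1, 6):
    false_positive_rate *= 1 + amplification  # each retrain amplifies the error
    print(f"retrain {retrain_round}: ~{false_positive_rate:.1%} of benign posts flagged")
```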
……. That’s because Facebook’s policies allow some types of violence and extremism but not others — meaning decisions on whether to take content down are often based on cultural context. Has a video of an execution been shared by its perpetrators to spread fear? Or by a citizen journalist to ensure the wider world sees a grave human rights violation? A moderator’s answer to those questions could mean that of two identical videos, one remains online and the other is taken down. “This technology can’t yet effectively handle everything that is against our rules,” Saltman said. “Many of the decisions we have to make are complex and involve decisions around intent and cultural nuance which still require human eye and judgement.”
In this balancing act, it’s Facebook’s army of human moderators — many of them outsourced contractors — who carry the pole. And sometimes, they lose their footing. After several of Eye on Alhasakah’s posts were flagged by algorithms and humans alike, a Facebook moderator wrongly decided the page should be banned entirely for sharing violent videos in order to praise them — a violation of Facebook’s rules on violence and extremism, which state that some content can remain online if it is newsworthy, but not if it encourages violence or valorizes terrorism. The nuance, Facebook representatives told TIME, is important for balancing freedom of speech with a safe environment for its users — and keeping Facebook on the right side of government regulations.
Facebook’s set of rules on the topic reads like a gory textbook on ethics: beheadings, decomposed bodies, throat-slitting and cannibalism are all classed as too graphic, and thus never allowed; neither is dismemberment — unless it’s being performed in a medical setting; nor burning people, unless they are practicing self-immolation as an act of political speech, which is protected. Moderators are given discretion, however, if violent content is clearly being shared to spread awareness of human rights abuses. “In these cases, depending on how graphic the content is, we may allow it, but we place a warning screen in front of the content and limit the visibility to people aged 18 or over,” said Saltman. “We know not everyone will agree with these policies and we respect that.”
But civilian journalists operating in the heat of a civil war don’t always have time to read the fine print. And conflict monitors say it’s not enough for Facebook and YouTube to make all the decisions themselves. “Like it or not, people are using these social media platforms as a place of permanent record,” says Woods. “The social media sites don’t get to choose what’s of value and importance.”
In the midst of the COVID-19 crisis, many human rights organisations have been formulating a policy response. While I cannot be comprehensive or undertake comparisons, I will try to give some examples in the course of the coming weeks. Here is the one by Sam Gregory of WITNESS:
…..The immediate implications of coronavirus – quarantine, enhanced emergency powers, restrictions on sharing information – make it harder for individuals all around the world to document and share the realities of government repression and private actors’ violations. In states of emergency, authoritarian governments in particular can operate with further impunity, cracking down on free speech and turning to increasingly repressive measures. The threat of coronavirus and its justifying power provides cover for rights-violating laws and measures that history tells us may long outlive the actual pandemic. And the attention on coronavirus distracts focus from rights issues that are both compounded by the impact of the virus and cannot claim the spotlight now.
We are appalled by the conduct of Kenya security agents as shown in this citizen video. All perpetrators of this act must be held to account. Police and other law enforcement agents must safeguard human rights even as they enforce govt’s regulation during this time. #CurfewKenya pic.twitter.com/MUfxGGKl8A
In this crisis moment, it is critical that we enhance the abilities and defend the rights of people who document and share critical realities from the ground. Across the three core thematic issues we currently work on, the need is critical. For issues such as video as evidence from conflict zones, these wars continue on and reach their apex even as coronavirus takes all the attention away. We need only look to the current situations in Idlib or Yemen, or in other conflict zones in the Middle East.
For other issues, like state violence against minorities, many people already live in a state of emergency.
Coronavirus response in Complexo do Alemão favela, Rio de Janeiro (credit: Raull Santiago)
Favela residents in Brazil have lived with vastly elevated levels of police killings of civilians for years, and now face a parallel health emergency. Meanwhile immigrant communities in the US have lived in fear of ICE for years and must now weigh their physical health against their physical safety and family integrity. Many communities – in Kashmir and in Rakhine State, Burma – live without access to the internet on an ongoing basis and must still try and share what is happening. And for those who fight for their land rights and environmental justice, coronavirus is both a threat to vulnerable indigenous and poor communities lacking health care, sanitation and state support as well as a powerful distraction from their battle against structural injustice.
A critical part of WITNESS’ strategy is our work to ensure that technology companies’ actions and government regulation of technology are accountable to the most vulnerable members of our global society – marginalized populations globally, particularly those outside the US and Europe, as well as human rights defenders and civic journalists. As responses to coronavirus kick in, there are critical implications in how both civic technology and commercial technology are now being deployed and will be deployed.
Already, coronavirus has acted as an accelerant – like fuel on the fire – to existing trends in technology. Some of these have potentially profound negative impacts for human rights values, human rights documentation and human rights defenders; others may hold a silver lining.
My colleague Dia Kayyali has already written about the sudden shift to much broader algorithmic content moderation that took place last week as Facebook, Twitter, Google and YouTube sent home their human moderators. Over the past years, we’ve seen the implications of both a move to algorithmic moderation and a lack of will and resourcing: from hate speech staying on platforms in vulnerable societies, to the removal of critical war crimes evidence at scale from YouTube, to a lack of accountability for decisions made under the guise of countering terrorist and violent extremist content. But in civil society we did not anticipate that such a broad shift to algorithmic control would happen so rapidly. We must closely monitor this change and push to ensure it does not adversely affect societies and critical struggles worldwide at a moment when they are already threatened by isolation and increased government repression. As Dia suggests, now is the moment for these companies to finally make their algorithms and content moderation processes more transparent to critical civil society experts, as well as to reset how they support and treat the human beings who do the dirty work of moderation.
WITNESS’s work on misinformation and disinformation spans a decade of supporting the production of truthful, trustworthy content in war zones, crises and long-standing struggles for rights. Most recently we have focused on the emerging threats from deepfakes and other forms of synthetic media that enable increasingly realistic fakery of what looks like a real person saying or doing something they never did.
We’ve led the first global expert meetings in Brazil, Southern Africa and Southeast Asia on what rights-respecting, global responses should look like in terms of understanding threats and solutions. Feedback from these sessions has stressed the need for attention to a continuum of audiovisual misinformation, including ‘shallowfakes’, the simpler forms of miscontextualized and lightly edited videos that dominate attempts to confuse and deceive. Right now, social media platforms are unleashing a series of responses to misinformation around coronavirus – from highlighting authoritative health information from country-level and international sources, to curating resources, offering help centers, and taking down a wider range of content that misinforms, deceives or price gouges, including content from leading politicians such as President Bolsonaro in Brazil. The question we must ask is what we want internet companies to continue doing after the crisis: what should they do about a wider range of misinformation and disinformation outside of health – and what do we not want them to do? We’ll be sharing more about this in the coming weeks.
And where can we find a technological silver lining? One area may be the potential to discover and explore new ways to act in solidarity and agency with each other online. A long-standing area of work at WITNESS is how to use ‘co-presence’ and livestreaming to bridge social distances and help people witness and support one another when physical proximity is not possible.
Our Mobil-Eyes Us project supported favela-based activists in using live video to better engage their audiences to be with them and provide meaningful support. In parts of the world that benefit from broadband internet access, the absence of arbitrary shutdowns, and the ability to physically isolate, we are seeing an explosion of experimentation in how to operate better in a world that is physically distanced yet still socially proximate. We should learn from this and drive experimentation and action to ensure that even as our freedom of assembly in physical space is curtailed for legitimate (and illegitimate) reasons, our ability to assemble online in meaningful action is not curtailed but enhanced.
In moments of crisis, good and bad actors alike will try to push the agenda that they want. In this moment of acceleration and crisis, WITNESS is committed to ensuring an agenda firmly grounded in, and led by, a human rights vision and the wants and needs of vulnerable communities and human rights defenders worldwide.
On Wednesday 30 January 2019, Mike Masnick published a piece in TechDirt entitled “Human Rights Groups Plead With The EU Not To Pass Its Awful ‘Terrorist Content’ Regulation”. The key argument is that machine-learning algorithms are not able to distinguish between terrorist propaganda and investigations of, say, war crimes. It points out, as an example, that Germany’s anti-“hate speech” law has proven to be misused by authoritarian regimes.
Witness’ Asia-Pacific team adapted this video from WITNESS’ tip sheet on Filming Hate – a primer for using video to document human rights abuses. “Filming Hate” guides activists through documenting abuses safely, providing context, verifying footage, and sharing that footage responsibly. It may help millions of bystanders become witnesses, and hence human rights defenders, spurred to combat hatred by wielding a powerful weapon: their smartphone. Published on 6 August 2017. Full tipsheet available on our Library at: https://library.witness.org/product/f… Music credit: ‘India’ — http://www.bensound.com Creative Commons Attribution licence (reuse allowed)
International Human Rights Day is an occasion for many organizations to publish statements on human rights. For those who do not have enough time to go through all of them, here is a selection of four main statements that focus on human rights defenders:
What every human rights defender should know about video, images etc.
Instruction video published on 15 October 2014 by WITNESS. What is a video format? A codec? What do 1080 and 720 refer to? What about “i” and “p”? In this video, archivist and writer of WITNESS’ award-winning guide, Yvonne Ng, provides an overview of the key technical characteristics of video for everyday users, with visual examples. Comment on the WITNESS blog: http://wp.me/p4j1y7-5J2
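For readers who want to check these characteristics on their own footage, here is a small illustrative sketch (not part of the WITNESS guide) that reads the codec, resolution and scan type from a file using ffprobe, part of FFmpeg and installed separately; “video.mp4” is a placeholder path.

```python
# Inspect the video characteristics Yvonne Ng explains, using ffprobe.
import json
import subprocess

result = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "stream=codec_name,width,height,field_order",
     "-of", "json", "video.mp4"],
    capture_output=True, text=True, check=True,
)
stream = json.loads(result.stdout)["streams"][0]

print("codec:", stream["codec_name"])                         # e.g. h264
print("resolution:", stream["width"], "x", stream["height"])  # 1920 x 1080 is "1080"
print("scan:", stream.get("field_order", "unknown"))          # "progressive" = the "p"; interlaced values = the "i"
```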
Yvette Alberdingk Thijm, the Executive Director of WITNESS, posted an important piece in the Huffington Post of 2 September on how to make sure that the growing number of human rights videos uploaded to WITNESS (and to other NGOs) makes a real difference. After citing several examples of such footage of violence, conflict, and human rights abuses, she reflects as follows: “When I watch these videos with such potential to transform human rights advocacy, I am concerned about the gaps and the lost opportunities: the videos that cannot be authenticated; the stories that will be denied or thrown out of court — or worse, will never reach their intended audience; a survivor’s account lost in a visual sea of citizen media. Mostly, I worry about the safety of the person who filmed, about her privacy and security.”
…….
“When WITNESS was created, we talked about the power of video to “open the eyes of the world to human rights violations.” Today, our collective eyes have been opened to many of the conflicts and abuses that are going on around us. This creates, for all of us, a responsibility to engage. I am deeply convinced that citizen documentation has the power to transform human rights advocacy, change behaviors, and increase accountability. But let’s make sure that all of us filming have the right tools and capabilities, and that we apply and share the lessons we are learning from citizen witnesses around the world, so that more people filming truly equals more rights.”
(Photo credit: WITNESS, used under Creative Commons)
Kelly Matheson of WITNESS and the New Tactics community organise an online conversation on Using Video for Documentation and Evidence from 21 to 25 July 2014. User-generated content can be instrumental in drawing attention to human rights abuses. But many filmers and activists want their videos to do more. They have the underlying expectation that footage exposing abuse can help bring about justice. Unfortunately, the quality of citizen video and other content rarely passes the higher bar needed to function as evidence in a court of law. This online discussion is an opportunity for practitioners of law, technology and human rights to share their experiences, challenges, tools and ideas to help increase the chances that the footage citizens and activists often risk their lives to capture can do more than expose injustice – it can also serve as evidence in criminal and civil justice processes.