Posts Tagged ‘information technology’

How can human rights defenders use new information technologies better?

November 28, 2019

Mads Gottlieb (twitter: @mads_gottlieb) wrote in Impakter about Human Rights, Technology and Partnerships, stating that these technologies have the potential to tremendously facilitate human rights defenders in their work, whether they are used to document facts about investigations or as preventive measures to avoid violations. His main message in this short article is an appeal to the human rights sector at large: use technology more creatively, make technology upgrades a top priority, and engage with the technology sector in this difficult endeavor. The human rights sector will never be able to develop the newest technologies itself, but the opportunities that technology provides are something it needs to make use of now, in collaboration with the technology sector.

…Several cases show that human rights are under threat, and that it is difficult to investigate and gather the necessary facts in time to protect them. Duterte in the Philippines ordered the police to shoot activists who demonstrated against extra-judicial killings. He later tried to reduce the funding of the Philippines National Human Rights Commission to 1 USD a year. This threat followed a 15-month investigation into the killings, to which Duterte responded with the claim that the Commission was “useless and defended criminals’ rights.”

Zimbabwe is another country with a difficult environment for human rights defenders. It is not surprising that few people speak out, since the few who dare to demonstrate or voice opposing political views disappear. A famous example is the activist and journalist Itai Dzamara of Occupy Africa Unity Square. He was allegedly beaten in 2014, and in 2015 he went missing and was never found. His disappearance occurred after a period of public demonstrations against Mugabe’s regime. Adding to the challenging conditions that call for better tools to defend human rights is the fact that many European countries are digitalising their public services. The newly introduced data platforms store and process sensitive information about the population, such as gender, ethnicity, sexual orientation, and past health records. This is information that can easily be used for discriminatory purposes, whether intentionally or not.

Human rights defenders typically struggle to find adequate resources for their daily operations and as a result, investments in technology often come second. It is rare for human rights defenders to have anything beyond the minimum requirements, such as the internally-facing maintenance of an operational and secure internet connection, a case system, or a website. At the same time, global technology companies develop new technologies such as blockchain, artificial intelligence, and advanced data and surveillance techniques. These technologies have the potential to tremendously facilitate human rights defenders in their work, whether they are used to document facts about investigations, or as preventive measures to avoid violations. It is also important to facilitate and empower rights-holders in setting up and using networks and platforms that can help notify and verify violations quickly. 

Collaboration is an excellent problem-solving approach, and human rights organizations are well aware of it. They engage in multiple partnerships with important actors. The concern is therefore not a lack of collaboration, but whether they adequately prioritize what is now the world’s leading sector: technology (the top five on Forbes’ list of most valuable brands are all technology companies: Apple, Google, Microsoft, Amazon, and Facebook). It is not up to the technology sector to engage with the human rights sector (whether they want to or not), but it should be a top priority for the human rights sector to try to reduce its technology gap, in the interest of human rights.

There are several partnership opportunities, and many are easy to get started with and do not require monetary investments. One opportunity is to partner with tech universities, which have the expertise to develop new types of secure, rapid monitoring systems. Blockchain embodies most of the principles that human rights work embraces, such as transparency, equality and accountability, and it makes rapid response times possible. So why not collaborate with universities? Another opportunity is collaborating with institutions that manage satellite images. Images provide very solid proof of changes in the landscape; examples include deforestation that threatens indigenous people, and the removal or burning of villages over a short period of time. A third opportunity is to enter into dialogue with the technology giants that develop these new technologies and, rather than asking for monetary donations, ask for input on how the human rights sector can effectively leverage technology.

 

NSO accused of largest attack on civil society through its spyware

October 30, 2019

I blogged about the spyware firm NSO before [see e.g. https://humanrightsdefenders.blog/2019/09/17/has-nso-really-changed-its-attitude-with-regard-to-spyware/], but now WhatsApp has joined the critics with a lawsuit.

On May 13th, WhatsApp announced that it had discovered the vulnerability. In a statement, the company said that the spyware appeared to be the work of a commercial entity, but it did not identify the perpetrator by name. WhatsApp patched the vulnerability and, as part of its investigation, identified more than fourteen hundred phone numbers that the malware had targeted. In most cases, WhatsApp had no idea whom the numbers belonged to, because of the company’s privacy and data-retention rules. So WhatsApp gave the list of phone numbers to the Citizen Lab, a research laboratory at the University of Toronto’s Munk School of Global Affairs, where a team of cyber experts tried to determine whether any of the numbers belonged to civil-society members.

On Tuesday 29 October 2019, WhatsApp took the extraordinary step of announcing that it had traced the malware back to NSO Group, a spyware-maker based in Israel, and filed a lawsuit against the company—and also its parent, Q Cyber Technologies—in a Northern California court, accusing it of “unlawful access and use” of WhatsApp computers. According to the lawsuit, NSO Group developed the malware in order to access messages and other communications after they were decrypted on targeted devices, allowing intruders to bypass WhatsApp’s encryption.

NSO Group said in a statement in response to the lawsuit, “In the strongest possible terms, we dispute today’s allegations and will vigorously fight them. The sole purpose of NSO is to provide technology to licensed government intelligence and law enforcement agencies to help them fight terrorism and serious crime. Our technology is not designed or licensed for use against human rights activists and journalists.” In September, NSO Group announced the appointment of new, high-profile advisers, including Tom Ridge, the first U.S. Secretary of Homeland Security, in an effort to improve its global image.

In a statement to its users on Tuesday, WhatsApp said, “There must be strong legal oversight of cyber weapons like the one used in this attack to ensure they are not used to violate individual rights and freedoms people deserve wherever they are in the world. Human rights groups have documented a disturbing trend that such tools have been used to attack journalists and human rights defenders.”

John Scott-Railton, a senior researcher at the Citizen Lab, said, “It is the largest attack on civil society that we know of using this kind of vulnerability.”

https://www.newyorker.com/news/news-desk/whatsapp-sues-an-israeli-tech-firm-whose-spyware-targeted-human-rights-activists-and-journalists

https://uk.finance.yahoo.com/news/whatsapp-blames-sues-mobile-spyware-192135400.html

How social media companies can identify and respond to threats against human rights defenders

October 15, 2019


Ginna Anderson writes in ABA Abroad:

..Unfortunately, social media platforms are now a primary tool for coordinated, state-aligned actors to harass, threaten and undermine advocates. Although public shaming, death threats, defamation and disinformation are not unique to the online sphere, the nature of the internet has given them unprecedented potency. Bad actors are able to rapidly deploy their poisoned content on a vast scale. Social media companies have only just begun to recognize, let alone respond to, the problem. Meanwhile, individuals targeted through such coordinated campaigns must painstakingly flag individual pieces of content, navigate opaque corporate structures and attempt to survive the fallout. To address this crisis, companies such as Facebook, Twitter and YouTube must dramatically increase their capacity and will to engage in transparent, context-driven content moderation.

For human rights defenders, the need is urgent. .. Since 2011, the ABA Center for Human Rights (CHR) has ..noted with concern the coordination of “traditional” judicial harassment of defenders by governments, such as frivolous criminal charges or arbitrary detention, with online campaigns of intimidation. State-aligned online disinformation campaigns against individual defenders often precede or coincide with official investigations and criminal charges.

……

While social media companies generally prohibit incitement of violence and hate speech on their platforms, CHR has had to engage in additional advocacy with social media companies requesting the removal of specific pieces of content or accounts that target defenders. This extra advocacy has been required even where the content clearly violates a social media company’s terms of service and despite initial flagging by a defender. The situation is even more difficult where the threatening content is only recognizable with sufficient local and political context. The various platforms all rely on artificial intelligence, to varying degrees, to identify speech that violates their respective community standards. Yet current iterations of artificial intelligence are often unable to adequately evaluate context and intent.

Online intimidation and smear campaigns against defenders often rely on existing societal fault lines to demean and discredit advocates. In Guatemala, CHR recently documented a coordinated social media campaign to defame, harass, intimidate and incite violence against human rights defenders. Several were linked with so-called “net centers,” where users were reportedly paid to amplify hateful content across platforms. Often, the campaigns relied on “coded” language that harks back to Guatemala’s civil war and the genocide of Mayan communities by calling indigenous leaders communists, terrorists and guerrillas.

These terms appear to have largely escaped social media company scrutiny, perhaps because none is a racist slur per se. And yet the proliferation of these online attacks, as well as the status of those putting out the content, is contributing to a worsening climate of violence and impunity for violence against defenders by specifically alluding to terms used to justify violence against indigenous communities. NPR reports that in 2018 alone, 26 indigenous defenders were murdered in Guatemala. In such a climate, the fear and intimidation felt by those targeted in such campaigns is not hyperbolic but based on their understanding of how violence can be sparked in Guatemala.

In order to address such attacks, social media companies must adopt policies that allow them to designate defenders as temporarily protected groups in countries that are characterized by state-coordinated or state-condoned persecution of activists. This is in line with international law that prohibits states from targeting individuals for serious harm based on their political opinion. To increase their ability to recognize and respond to persecution and online violence against human rights defenders, companies must continue to invest in their context-driven content moderation capacity, including complementing algorithmic monitoring with human content moderators well-versed in local dialects and historical and political context.

Context-driven content moderation should also take into account factors that increase the risk that online behavior will contribute to offline violence by identifying high-risk countries. These factors include a history of intergroup conflict and an overall increase in the number of instances of intergroup violence in the past 12 months; a major national political election in the next 12 months; and significant polarization of political parties along religious, ethnic or racial lines. Countries where these and other risk factors are present call for proactive approaches to identify problematic accounts and coded threats against defenders and marginalized communities, such as those shown in Equality Labs’ “Facebook India” report.
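The factors listed above amount to a checklist for flagging high-risk countries. Purely as an illustration of how such a screen might work in practice (the field names, equal weighting, and threshold below are hypothetical, not taken from the article or any platform's actual policy), a minimal sketch could look like:

```python
# Illustrative sketch only: factor names, equal weighting, and the threshold
# are hypothetical assumptions, not any platform's real moderation policy.
from dataclasses import dataclass


@dataclass
class CountryRiskProfile:
    intergroup_conflict_history: bool       # history of intergroup conflict
    intergroup_violence_rising: bool        # more intergroup violence in past 12 months
    election_within_12_months: bool         # major national election upcoming
    polarized_along_identity_lines: bool    # parties split along religious/ethnic/racial lines


def is_high_risk(profile: CountryRiskProfile, threshold: int = 2) -> bool:
    """Flag a country for proactive moderation when enough risk factors are present."""
    score = sum([
        profile.intergroup_conflict_history,
        profile.intergroup_violence_rising,
        profile.election_within_12_months,
        profile.polarized_along_identity_lines,
    ])
    return score >= threshold
```

A real system would of course weigh factors with far more nuance and human review; the point is only that the article's criteria are concrete enough to operationalize.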

Companies should identify, monitor and be prepared to deplatform key accounts that are consistently putting out denigrating language and targeting human rights defenders. This must go hand in hand with the greater efforts that companies are finally beginning to take to identify coordinated, state-aligned misinformation campaigns. Focusing on the networks of users who abuse the platform, instead of looking solely at how the online abuse affects defenders’ rights online, will also enable companies to more quickly evaluate whether the status of the speaker increases the likelihood that others will take up any implicit call to violence or will be unduly influenced by disinformation.

This abuser-focused approach will also help to decrease the burden on defenders to find and flag individual pieces of content and accounts as problematic. Many of the human rights defenders with whom CHR works are giving up on flagging, a phenomenon we refer to as flagging fatigue. Many have become fatalistic about the level of online harassment they face. This is particularly alarming, as advocates targeted online may develop such thick skins that they are no longer able to assess when their actual risk of physical violence has increased.

Finally, it is vital that social media companies pursue, and civil society demand, transparency in content moderation policy and decision-making, in line with the Santa Clara Principles. Put forward in 2018 by a group of academic experts, organizations and advocates committed to freedom of expression online, the principles are meant to guide companies engaged in content moderation and ensure that the enforcement of their policies is “fair, unbiased, proportional and respectful of users’ rights.” In particular, the principles call upon companies to publicly report on the number of posts and accounts taken down or suspended on a regular basis, as well as to provide adequate notice and meaningful appeal to affected users.

CHR routinely supports human rights defenders facing frivolous criminal charges related to their human rights advocacy online or whose accounts and documentation have been taken down absent any clear justification. This contributes to a growing distrust of the companies among the human rights community as apparently arbitrary decisions about content moderation are leaving advocates both over- and under-protected online.

As the U.N. special rapporteur on freedom of expression explained in his 2018 report, content moderation processes must include the ability to appeal the removal, or refusal to remove, content or accounts. Lack of transparency heightens the risk that calls to address the persecution of human rights defenders online will be subverted into justifications for censorship and restrictions on speech that is protected under international human rights law.

A common response when discussing the feasibility of context-driven content moderation is to compare it to reviewing all the grains of sand on a beach. But human rights defenders are not asking for the impossible. We are merely pointing out that some of that sand is radioactive—it glows in the dark, it is lethal, and there is a moral and legal obligation upon those that profit from the beach to deal with it.

Ginna Anderson, senior counsel, joined ABA CHR in 2012. She is responsible for supporting the center’s work to advance the rights of human rights defenders and marginalized communities, including lawyers and journalists at risk. She is an expert in health and human rights, media freedom, freedom of expression and fair trial rights. As deputy director of the Justice Defenders Program since 2013, she has managed strategic litigation, fact-finding missions and advocacy campaigns on behalf of human rights defenders facing retaliation for their work in every region of the world.

http://www.abajournal.com/news/article/how-can-social-media-companies-identify-and-respond-to-threats-against-human-rights-defenders

Russian human rights defenders try technology and gaming innovations

September 13, 2019

Tatiana Tolsteneva has written in Global Rights of 12 September 2019 a very interesting piece about whether technology and gaming innovations can bring new life to Russian NGOs and appeal to younger audiences. Tatiana Tolsteneva has 10 years of managing experience in the Russian non-profit sector, with a focus on human rights defenders initiatives. She has a Master’s degree in Law from Lobachevsky State University of Nizhni Novgorod (UNN) and is finalizing her Master’s degree in Social Innovation and Entrepreneurship at the London School of Economics. It is a long read but contains some fascinating insights:

While there is significant debate over foreign funding issues and closing civic space in Russia, a key problem of the Russian non-profit sector is its “catch-up” form of development. Due to limited resources, this sector develops much more slowly than media or information technologies, for example. In Team 29, an informal association of lawyers and journalists, we are trying to change this, primarily by introducing new media technologies in the non-profit sector.

Lawyers of Team 29 are known not only for taking up cases considered hopeless in which the state accuses people of crimes against national security, but also for seeking so-called “justice in Russian.” That is, fighting for a sentence below the lower limit established by the Criminal Code or for a pardon by the president. In a country in which acquittals account for only 0.02% of total cases, this is considered a success.

In addition, our journalists have developed a niche media resource covering a wide range of issues regarding the relationships of citizens and the Russian government. The Team advises citizens on what actions to take if subjected to searches or questioning, how to find information in governmental databases, and how to protect one’s private data. Through this work, Team 29 is changing the concept of what a human rights activist in Russia can be, and we seek to explain the complexities of this work. The main problem of human rights defenders in Russia for a long time was separation from “ordinary people”. The positioning, language, and public image of human rights defenders were such that average citizens did not understand what human rights workers were doing and how it related to them. Team 29 was one of the first human rights organizations to adopt modern explanatory journalism techniques to strengthen communication with its target audience. In other words, we started to translate from “legal” to “human” language, and to make our materials more engaging to win the online struggle for reader attention.

The positioning, language, and public image of human rights defenders were such that average citizens did not understand what human rights workers were doing and how it related to them. 

In 2015, we joined our legal skills with explanatory journalism technologies in order to develop what are now called “legal handouts”. These are texts providing legal advice, in plain language, mostly on how to deal with unexpected clashes with Russian law enforcement. For example, the handouts explain a person’s rights and how citizens can protect themselves from mistakes often related to lack of knowledge. Each handout has had an average of 100,000 views, and work on these handouts resulted in the subsequent creation of Team 29’s online mini-media resource. Its average monthly attendance amounts to at least 50,000 unique visitors.

The problem in these developments was that the major audience of Team 29’s media projects was people between 25-44 years old, while it is the Y generation—people younger than 25—that has been a driving force of socio-political processes in Russia. For example, this younger age category of Russian citizens has been the one most actively involved in the public mass protests of recent years.

We made it a goal to reach out to that audience with mobile games, which have a huge audience in that demographic and can be played offline. In fact, pro-social games—games with grounded social impact—are an advanced tool in media and non-profit fields abroad. But until now, there have been no such games in Russia.

To develop this new game in Russia, we had to decide what software could be developed with limited resources. We chose “text quests” since they are the least expensive for production and easy in their mechanics. Text quests are a type of game in which interaction with the player is through textual information. The plot of the quest is not rigidly fixed and can change depending on the actions of the player. An important aspect of a text quest is story-telling; we tried to make the plot of our quest fascinating for the player, based on real events, and causing empathy for the main character.
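The mechanics described above, textual interaction plus a plot that branches on the player's choices, boil down to a graph of scenes connected by choice edges. As a rough sketch (the scene names and text below are invented for illustration, not taken from Gebnya):

```python
# Minimal text-quest engine sketch: scenes are nodes, player choices are edges.
# Scene identifiers and story text are invented examples, not Gebnya content.

SCENES = {
    "start": {
        "text": "The police knock on your door. What do you do?",
        "choices": {"ask for a warrant": "warrant", "open immediately": "opened"},
    },
    "warrant": {
        "text": "You ask to see a warrant before opening; the story branches from here.",
        "choices": {},  # terminal scene in this sketch
    },
    "opened": {
        "text": "You open the door without checking your rights; the story branches from here.",
        "choices": {},  # terminal scene in this sketch
    },
}


def next_scene(scene_id: str, choice: str) -> str:
    """Return the id of the scene reached by making `choice` in scene `scene_id`."""
    return SCENES[scene_id]["choices"][choice]
```

Because the whole quest is just data, adding a new branch means adding a dictionary entry rather than writing new engine code, which is part of why text quests are cheap to produce, as the article notes.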

Gebnya is a mobile text quest game that tells users how to communicate with the police and security services in Russia.

The result is Gebnya, a mobile text quest game that tells users how to communicate with the police and security services in Russia, and how to protect oneself, one’s family, and one’s information. The Android version of the app was released on October 6, 2017, and the iOS version on April 18, 2018. At present, the game has been downloaded more than 70,000 times, and the majority of its audience (57%) are people younger than 24. However, less than 15% of users are women.

We also have found that mobile apps can be a part of an alternative business model for human rights NGOs. We have received $1,020 through in-game payments, with most of this revenue (87%) being micro-payments ($1 or 100 rubles).

In the first version of the game, through the in-game payments, it was possible to take part in the crowdfunding of the development of new scenarios. In later versions, we added the ability to pay for the game without ads, as well as for additional gaming options, a standard business model for so-called free-to-play mobile games.

We believe that it can be more important to experiment with something new than to continue with traditional methods that may not be working. 

Once we established the demand for this type of game, we decided to expand it. First, we held a hackathon called “More Games Needed”, which helped non-profit projects in St. Petersburg create game software products of their own. A project dedicated to preventing domestic violence, called Where Can Couplehood Lead, won the hackathon and received mentorship from our experts. We expect the game to be released in October 2019. We also intend to release another project together with the educational project Teplitsa (Greenhouse) – Technologies for Social Good.

Second, since Gebnya has currently attracted very few women, we decided to develop a game on problems important for women in Russia and the post-Soviet space. The game dedicated specifically to women’s issues is now under development, and its beta version should be released in November 2019. We decided to focus on three of the many problems faced by women in Russia: cyberbullying, stalking, and intimate partner violence. The game’s plot is designed to help recognize these phenomena, help build personal boundaries, and to get acquainted with legal and psychological defense tools and relevant professional assistance centers.

Team 29 plans to continue this pro-social game development as a project separate from our journalistic and legal work, and we are currently working on additional games with a number of other Russian NGOs.

While developing Gebnya in 2017, we were in fact rather skeptical about the project’s prospects, but we decided to pursue it anyway. We believe that it can be more important to experiment with something new than to continue with traditional methods that may not be working. After all, the non-profit sector cannot survive without innovations.

https://www.openglobalrights.org/technology-and-gaming-innovations-bring-new-life-to-russian-ngos/

See also other posts on communication: https://humanrightsdefenders.blog/tag/communication/

Progress with the TrialWatch app of the Clooney Foundation

September 10, 2019

“Illegitimate judicial proceedings are increasingly being used as a ‘rule-of-law-shield’ to fend off legitimate criticism,” says David Pressman, the Executive Director of the Clooney Foundation for Justice (CFJ). No overall system exists to monitor the fairness of trials around the world: some cases receive media attention and are well documented, whereas others are only followed by local activists. To bridge this gap, the CFJ, founded in June 2016, set up TrialWatch, an international monitoring program. Launched in April 2019, TrialWatch trains individuals in the basics of trial-monitoring, and equips them with the TrialWatch app, developed with Microsoft, to help them collect information about trials of interest in their areas. That information is then passed on to legal experts, such as international human rights lawyers, who assess it and write fairness reports. In time, this will contribute to a global justice index, ranking countries by the fairness of their legal systems.

By early May 2019, TrialWatch was already monitoring 18 trials around the world, from Nigeria to Belarus, a number which the organisation wants to increase. “TrialWatch aims to solve the challenge of scaling trial-monitoring,” says Pressman. Trial-monitoring has been used by legal experts and lawyers for many years, because it increases transparency, creates a simplified record of the trial, and can facilitate reform. To make it easier to become a monitor, the CFJ developed a new set of guidelines accessible to non-experts, which were approved by the Office of the United Nations High Commissioner for Human Rights, the American Bar Association and Columbia Law School.

The TrialWatch smartphone app gives trial-monitors the tools to collect essential information, and store it securely in one place. The training that trial-monitors receive helps ensure that they record the right information, and straightforward yes/no questionnaires help them speed up collection. Within the app, trial-monitors can also take photos, shoot videos, and record audio – which is useful, given that many of the monitored trials happen in languages which aren’t widely spoken. Audio files are transcribed in the original language and then translated into English by Microsoft’s Azure Cognitive Services. All that is securely uploaded to the cloud, to be pored over by the CFJ’s legal experts.

“Our hope is that TrialWatch can help expose states when they fall short,” Pressman says. “It can demonstrate the ways that states are instrumentalising the courts in an effort to legitimise human rights abuses.”

https://www.wired.co.uk/article/amal-clooney-trialwatch-app

Human Rights HACKATHON in Kosovo: Equalitech

September 8, 2019

Hackathon in Kosovo

Civil Rights Defenders, in partnership with Innovation Centre Kosovo (ICK), is hosting the first ever regional hackathon to tackle human rights issues – ‘EqualiTECH 2019’ – on 27-29 September 2019.

..there is a clear shortage in the interplay of technological investments around human rights issues, frequently materialising as a roadblock for its advocates. In an effort to reduce this gap, the organisers invite participants with various backgrounds, skill sets, and creative abilities to form multidisciplinary teams and invent unique digital products to hack Human Rights challenges pertaining to 3 thematic areas:

1). Justice and Equality; 2). Freedom of Expression; 3). Access to Information.

This signature event challenges participants to place humanity at the forefront of design thinking and innovation. It aims to fuse the power of technological innovation with the generative capacities of human rights defenders and activists, building ICT solutions as part of diverse teams to support human rights work in the Western Balkan countries. Under expert mentoring, competitors from different backgrounds will have 40 hours to design innovative products that will elevate the work of human rights protection and advocacy. ‘EqualiTECH 2019’ kicks off on 27-29 September, taking place at ICK’s event hub. All interested candidates can apply here. The deadline for applications is 17 September, 11:59 pm.

The challenges

Justice and equality

Design a solution that helps increase justice and equality. Conceptualize and develop a digital product that will help increase justice and equality and promote inclusiveness for all. For example, think of tools (e.g. a platform) that can connect state bodies responsible for providing free legal aid, private pro-bono lawyers/law firms, legal aid organizations and citizens in need of legal aid and advice; or tools that can help identify public and private places of interest and service providers (bars, restaurants, hotels, parks, etc.) that are friendly, inclusive and non-discriminatory, particularly to vulnerable and marginalized communities in the Western Balkans.

Freedom of expression

To complete this challenge, you should design a tool that will help facilitate and/or increase freedom of expression and reduce various forms of online harassment. The objective is to invent digital products (e.g. platforms) that can enable citizens, activists and journalists from the Western Balkans to connect with each other; identify and report violations of human rights; and enable user-friendly reporting mechanisms that help increase their safety and security.

Access to information

Access to information is increasingly limited in the Western Balkans. The proliferation of unprofessional media and the increasing amount of fake and manipulative information limit citizens’ ability to make informed decisions. Conceptualize and design a digital product that will help increase access to reliable and useful information sources. This product (e.g. a platform) should support citizens, progressive media outlets and independent journalists in fact-checking and other work relating to ‘fake news’.

(Please note that this is not an exhaustive list.)

Competition eligibility criteria

To participate, you must meet the following eligibility criteria:

  • All individuals must be between 18 and 35 years of age.
  • Must work (HR activists or advocates) or have an interest (tech candidates) in combating discrimination, upholding human rights for minorities and underrepresented groups, and ensuring freedom of expression.
  • Tech candidates must be skilled in using programming language or tools and/or graphic design software.
  • All candidates must be able to collaborate within a team.
  • Must have a passion for problem-solving and analytical thinking.
  • Preference will be given to individuals with proven experience or passion in combating human rights violations.

Awards for the winning products

We will award three cash prizes, each in the amount of 1000 euros for the winning product prototypes in the respective challenge category.

For a similar event see: https://humanrightsdefenders.blog/2016/02/24/diplohack-event-on-human-rights-to-be-held-in-geneva-on-26-27-february/

EqualiTECH 2019 Human Rights Hackathon to Launch in Kosovo

Call for applications for new on-line course Strategic Effectiveness Method

August 6, 2019

New Tactics in Human Rights has developed a process to help activists become more focused, more creative, and ultimately more likely to succeed in their advocacy efforts. It is called the Strategic Effectiveness Method.

The New Tactics in Human Rights program is pleased to invite applications for our newly-launched online course. The course will provide you with a foundation for conducting human rights-based advocacy using the Strategic Effectiveness Method, and prepare you to integrate technology into your advocacy work using our innovative online Tactical Mapping Tool (TMT). The course is being offered free of charge and 20 applicants will be selected to take the course.

To apply for the online course, please complete the application form on the following link https://bit.ly/2Otn8JJ before August 30, 2019.

If you have any questions please email newtactics@cvt.org.

Course Details and Features

  • Timing: The course will begin on Monday, September 30, 2019 and contains five (5) modules. You can proceed through the five modules of the online course at your own pace. However, you will need to complete the course by the closure date of Friday, November 22, 2019 (8 weeks from course starting date) in order to receive your certificate of completion. You will choose how much time and effort you put into the course.
  • TMT Use: This course provides you with an exciting opportunity to learn and use the TMT to conduct human-rights based advocacy. The TMT makes it possible to collectively develop and monitor community actions to address your identified community problem of concern. New Tactics trainers will engage with you by using communication features within the TMT. This will provide you with skills and practice in using these features when implementing your own advocacy campaigns to address your identified problem.
  • Case Study: The course uses a case study example to demonstrate and build your skills in using the Strategic Effectiveness Method and the corresponding features in the TMT. You will use the TMT to develop a tactical map of the case study example. This will enhance your ability to share ideas and experiences with others in the course for your learning and skill development.
  • Skills Application: Following the case study video demonstrations, you will apply your understanding and skills in using both the Strategic Effectiveness Method and the online TMT to your own identified problem. By the end of the course, you will use the Method and the TMT to create a second tactical map on your own identified community problem of concern. The TMT will help you and your community group to collectively gather the information you need to develop and track your own advocacy campaigns.
  • Community Group Engagement: To gain the most benefit from this course, we highly recommend that you engage others from your organization, community, or group to work with you while developing your own advocacy campaign. While you can individually complete the course assignments, the added benefit of gathering a community group is the opportunity to immediately apply the Strategic Effectiveness Method, online Tactical Mapping Tool (TMT) and other resources of the course directly to a community problem of concern.
  • Peer Interaction: Throughout the course, you will participate in an exchange “Forum.” Based on the case study used in the course, you will share what you are learning and gain from the ideas and experiences of other course participants as well.
  • Language and Accessibility: This course will be conducted in English. If you are interested in taking the course in Arabic, please complete this application form: https://forms.gle/s8Jo9Qpx3hAwhiXw7 by August 12, 2019. We chose CANVAS Studio as the learning management system (LMS) for this course due in part to the internal supports offered by the platform. These supports help to maximize accessibility for participants with a range of disabilities.

https://www.newtactics.org/invitation-apply-new-tactics-online-course

Social media councils – an answer to problems of content moderation and distribution??

June 17, 2019

In the running debate on the pros and cons of information technology, and its complex relation to freedom of information, the NGO ARTICLE 19 came out on 11 June 2019 with an interesting proposal: “Social Media Councils”.


In today’s world, dominant tech companies hold a considerable degree of control over what their users see or hear on a daily basis. Current practices of content moderation on social media offer very little in terms of transparency and virtually no remedy to individual users. The impact that content moderation and distribution (in other words, the composition of users’ feeds and the accessibility and visibility of content on social media) has on the public sphere is not yet fully understood, but legitimate concerns have been expressed, especially in relation to platforms that operate at such a level of market dominance that they can exert decisive influence on public debates.

This raises questions in relation to international law on freedom of expression and has become a major issue for democratic societies. Legitimate concerns motivate various efforts to address this issue, particularly regarding the capacity of giant social media platforms to influence the public sphere. However, as with many modern communication technologies, the benefits that individuals and societies derive from these platforms should not be ignored. The responsibilities of the largest social media companies are currently being debated in legislative, policy and academic circles across the globe, but many of the numerous initiatives being put forward do not sufficiently account for the protection of freedom of expression.

In this consultation paper, ARTICLE 19 outlines a roadmap for the creation of what we have called Social Media Councils (SMCs), a model for a multi-stakeholder accountability mechanism for content moderation on social media. SMCs aim to provide an open, transparent, accountable and participatory forum to address content moderation issues on social media platforms on the basis of international standards on human rights. The Social Media Council model puts forward a voluntary approach to the oversight of content moderation: participants (social media platforms and all stakeholders) sign up to a mechanism that does not create legal obligations. Its strength and efficiency rely on voluntary compliance by platforms, whose commitment, when signing up, will be to respect and execute the SMC’s decisions (or recommendations) in good faith.

With this document, we present these different options and submit them to a public consultation. The key issues we seek to address through this consultation are:

  • Substantive standards: could SMCs apply international standards directly or should they apply a ‘Code of Human Rights Principles for Content Moderation’?
  • Functions of SMCs: should SMCs have a purely advisory role or should they be able to review individual cases?
  • Global or national: should SMCs be created at the national level or should there be one global SMC?
  • Subject-matter jurisdiction: should SMCs deal with all content moderation decisions of social media companies, or should they have a more specialised area of focus, for example a particular type of content?

The consultation also seeks input on a number of technical issues that will be present in any configuration of the SMC, such as:

  1. Constitution process
  2. Structure
  3. Geographic jurisdiction (for a national SMC)
  4. Rules of procedure (if the SMC is an appeals mechanism)
  5. Funding

An important dimension of the Social Media Council concept is that the proposed structure has no exact precedent: the issue of online content moderation presents a new and challenging area. Only with a certain degree of creativity can the complexity of the issues raised by the creation of this new mechanism be solved.

ARTICLE 19’s objective is to ensure that decisions on these core questions and the solutions to practical problems sought by this initiative are compatible with the requirements of international human rights standards, and are shaped by a diverse range of expertise and perspectives.

Read the consultation paper

Complete the consultation survey

https://www.article19.org/resources/social-media-councils-consultation/

Information technology and human rights: 4 vacancies at HURIDOCS

June 12, 2019

As reported, HURIDOCS received a grant from the Google Artificial Intelligence Impact Challenge [see: https://humanrightsdefenders.blog/2019/05/08/excellent-news-huridocs-to-receive-1-million-from-google-for-ai-work/]. Read their blog to learn more >>

For those who are passionate about using technology and information for human rights, there are the following vacancies:

Learn more about these vacancies >>

https://mailchi.mp/huridocs.org/feb2019-1444105?e=04f22f4e7f

Excellent news: HURIDOCS to receive 1 million $ from Google for AI work

May 8, 2019

Google announced on 7 May 2019 that the Geneva-based NGO HURIDOCS is one of 20 organizations that will share 25 million US dollars in grants from the Google Artificial Intelligence Impact Challenge. The Google Artificial Intelligence Impact Challenge was an open call to nonprofits, social enterprises, and research institutions to submit their ideas to use artificial intelligence (AI) to help address societal challenges. Over 2600 organizations from around the world applied.

Geneva-based HURIDOCS will receive a grant of 1 million US dollars to develop and use machine learning methods to extract, explore and connect relevant information in laws, jurisprudence, victim testimonies, and resolutions. Thanks to these methods, the NGO will work with partners to make documents more easily and freely accessible. This will benefit anyone interested in using human rights precedents and laws, for example lawyers representing victims of human rights violations or students researching non-discrimination.

The machine learning work to liberate information from documents is grounded in more than a decade of work that HURIDOCS has done to provide free access to information. Through pioneering partnerships with the Institute for Human Rights and Development in Africa (IHRDA) and the Center for Justice and International Law (CEJIL), HURIDOCS has co-created some of the most used public human rights databases. A key challenge in creating these databases has been the time-consuming and error-prone manual adding of information – a challenge the machine learning techniques will be used to overcome.
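The post does not describe HURIDOCS' actual pipeline. Purely as an illustration of the kind of automated tagging that could replace error-prone manual metadata entry, a minimal bag-of-words Naive Bayes tagger might look like this in Python (the class name, tags and training snippets below are all hypothetical):

```python
from collections import Counter, defaultdict
import math

def tokenize(text):
    """Naive whitespace tokenizer; a real system would normalize far more."""
    return text.lower().split()

class NaiveBayesTagger:
    """Minimal multinomial Naive Bayes classifier for tagging documents."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # tag -> word frequencies
        self.tag_counts = Counter()              # tag -> number of documents
        self.vocab = set()

    def train(self, labelled_docs):
        """labelled_docs: iterable of (text, tag) pairs."""
        for text, tag in labelled_docs:
            words = tokenize(text)
            self.word_counts[tag].update(words)
            self.tag_counts[tag] += 1
            self.vocab.update(words)

    def predict(self, text):
        """Return the most probable tag for a new document."""
        words = tokenize(text)
        total_docs = sum(self.tag_counts.values())
        vocab_size = len(self.vocab)
        best_tag, best_score = None, float("-inf")
        for tag, doc_count in self.tag_counts.items():
            # Log prior plus log likelihood with add-one (Laplace) smoothing.
            score = math.log(doc_count / total_docs)
            tag_total = sum(self.word_counts[tag].values())
            for word in words:
                score += math.log(
                    (self.word_counts[tag][word] + 1) / (tag_total + vocab_size)
                )
            if score > best_score:
                best_tag, best_score = tag, score
        return best_tag
```

Trained on a couple of hypothetical labelled snippets, such a tagger would assign a new testimony or judgment its most likely category automatically; the work funded by the grant of course involves far richer models and real case-law corpora.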

“We have been experimenting with machine learning techniques for more than two years”, said Natalie Widmann, Artificial Intelligence Specialist at HURIDOCS. “We have changed our approach countless times, but we see a clear path to how they can be leveraged in groundbreaking ways to democratise access to information.” HURIDOCS will use the grant from Google to work with partners to co-create the solutions, carefully weighing ethical concerns of automation and focusing on social impact. All the work will be done in the open, including all code being released publicly.

We are truly excited by the opportunity to use these technologies to address a problem that has been holding the human rights movement back”, said Friedhelm Weinberg, Executive Director of HURIDOCS. “We are thankful to Google for the support and look forward to be working with their experts and what will be a fantastic cohort of co-grantees.”

“We received thousands of applications to the Google AI Impact Challenge and are excited that HURIDOCS was selected to receive funding and expertise from Google. AI is at a nascent stage when it comes to the value it can have for the social impact sector, and we look forward to seeing the outcomes of this work and considering where there is potential for us to do even more.” – Jacquelline Fuller, President of Google.org

Next week, the HURIDOCS team will travel to San Francisco to work with the other grantees, Google AI experts, Project Managers and the startup specialists from Google’s Launchpad Accelerator for a program that will last six months, from May to November 2019. Each organization will be paired with a Google expert who will meet with them regularly for coaching sessions, and will also have access to other Google resources and expert mentorship.

Download the press release in English, Spanish. Learn more about the other Google AI Impact grantees at Google’s blog.

For more on HURIDOCS’ history: https://www.huridocs.org/tag/history-of-huridocs/ and for some of my other posts: https://humanrightsdefenders.blog/tag/huridocs/

HURIDOCS NEWS