Posts Tagged ‘human rights documentation’

A new gateway to human rights information to be launched: awards and their laureates

January 25, 2021
THF

As this blog has abundantly shown, Human Rights Awards have become an increasingly important tool in the protection of Human Rights Defenders. They give HRDs visibility and provide support and protection for those at risk. [see e.g. https://humanrightsdefenders.blog/tag/human-rights-awards/].

On February 2nd 2021 a new one-stop resource will make it possible to find and search human rights awards and their laureates.

The Digest of International Human Rights Awards and their Laureates, a unique centralised resource for the human rights community, gives visibility, strengthens the legitimacy of human rights defenders’ work, and could influence authorities to better apply human rights. There are now 200 awards and over 2400 HRDs/laureates in the digest.

It will give researchers, students, activists, the media and the public a searchable overview of who has won which awards, together with short profiles of the laureates. The digest will allow users to filter searches on laureates by, for example, theme, prize, profession, country or region, or gender.

On February 2nd, 2021 True Heroes Films will be launching the new platform to the public.  See the clip below:

Please forward this post to anyone you think might be interested. Twitter: https://twitter.com/TrueHeroesFilms

https://mailchi.mp/7176a72bfc91/digest-of-international-human-rights-awards-and-their-laureates

Algorithms designed to suppress ISIS content may also suppress evidence of human rights violations

April 11, 2020
Facebook and YouTube designed algorithms to suppress ISIS content. They're having unexpected side effects.

Illustration by Leo Acadia for TIME
TIME of 11 April 2020 carries a long article by Billy Perrigo entitled “These Tech Companies Managed to Eradicate ISIS Content. But They’re Also Erasing Crucial Evidence of War Crimes”. It is a very interesting piece that clearly spells out the dilemma of suppressing too much or too little on Facebook, YouTube, etc. Algorithms designed to suppress ISIS content are having unexpected side effects, such as suppressing evidence of human rights violations.
…..Images posted by citizen journalist Abo Liath Aljazarawy to his Facebook page (Eye on Alhasakah) showed the ground reality of the Syrian civil war. His page was banned. Facebook confirmed to TIME that Eye on Alhasakah was flagged in late 2019 by its algorithms, as well as by users, for sharing “extremist content.” It was then funneled to a human moderator, who decided to remove it. After being notified by TIME, Facebook restored the page in early February, some 12 weeks later, saying the moderator had made a mistake. (Facebook declined to say which specific videos were wrongly flagged, except that there were several.)

The algorithms were developed largely in reaction to ISIS, who shocked the world in 2014 when they began to share slickly-produced online videos of executions and battles as propaganda. Because of the very real way these videos radicalized viewers, the U.S.-led coalition in Iraq and Syria worked overtime to suppress them, and enlisted social networks to help. The companies quickly discovered that there was too much content for even a huge team of humans to deal with. (More than 500 hours of video are uploaded to YouTube every minute.) So, since 2017, both companies have been using algorithms to automatically detect extremist content. Early on, those algorithms were crude and only supplemented the human moderators’ work. But now, following three years of training, they are responsible for an overwhelming proportion of detections. Facebook now says more than 98% of content removed for violating its rules on extremism is flagged automatically. On YouTube, across the board, more than 20 million videos were taken down before receiving a single view in 2019. And as the coronavirus spread across the globe in early 2020, Facebook, YouTube and Twitter announced their algorithms would take on an even larger share of content moderation, with human moderators barred from taking sensitive material home with them.

But algorithms are notoriously worse than humans at understanding one crucial thing: context. Now, as Facebook and YouTube have come to rely on them more and more, even innocent photos and videos, especially from war zones, are being swept up and removed. Such content can serve a vital purpose for both civilians on the ground — for whom it provides vital real-time information — and human rights monitors far away. In 2017, for the first time ever, the International Criminal Court in the Netherlands issued a war-crimes indictment based on videos from Libya posted on social media. And as violence-detection algorithms have developed, conflict monitors are noticing an unexpected side effect, too: these algorithms could be removing evidence of war crimes from the Internet before anyone even knows it exists.

…..
It was an example of how even one mistaken takedown can make the work of human rights defenders more difficult. Yet this is happening on a wider scale: of the 1.7 million YouTube videos preserved by Syrian Archive, a Berlin-based non-profit that downloads evidence of human rights violations, 16% have been removed. A huge chunk was taken down in 2017, just as YouTube began using algorithms to flag violent and extremist content. And useful content is still being removed on a regular basis. “We’re still seeing that this is a problem,” says Jeff Deutsch, the lead researcher at Syrian Archive. “We’re not saying that all this content has to remain public forever. But it’s important that this content is archived, so it’s accessible to researchers, to human rights groups, to academics, to lawyers, for use in some kind of legal accountability.” (YouTube says it is working with Syrian Archive to improve how they identify and preserve footage that could be useful for human rights groups.)

…..

Facebook and YouTube’s detection systems work by using a technology called machine learning, by which colossal amounts of data (in this case, extremist images, videos, and their metadata) are fed to an artificial intelligence adept at spotting patterns. Early types of machine learning could be trained to identify images containing a house, or a car, or a human face. But since 2017, Facebook and YouTube have been feeding these algorithms content that moderators have flagged as extremist — training them to automatically identify beheadings, propaganda videos and other unsavory content.
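To make that mechanism concrete, here is a minimal, hypothetical sketch of the general approach described above: a supervised classifier is trained on examples that human moderators have already labelled and then scores new uploads automatically. The data, labels and model choice below are invented purely for illustration and are in no way Facebook’s or YouTube’s actual systems.

```python
# A minimal, illustrative sketch of supervised content moderation:
# train on moderator-labelled examples, then score new uploads.
# NOT any platform's real system; data and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: text/metadata of posts that human moderators
# have flagged ("extremist") or cleared ("allowed").
posts = [
    "execution video shared as propaganda",
    "battle footage glorifying attacks",
    "family picnic by the river",
    "citizen journalist documents shelling of a hospital",
]
labels = ["extremist", "extremist", "allowed", "allowed"]

# Vectorise the text and fit a simple classifier on the labelled examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# New uploads are then scored automatically; nothing in the features
# represents intent or context, only surface similarity.
print(model.predict(["video of shelling shared to document a war crime"]))
```

Even in this toy version the problem the article describes is structural: a post documenting violence shares much of its vocabulary with a post celebrating it, and the model has no representation of why the footage was shared.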

Both Facebook and YouTube are notoriously secretive about what kind of content they’re using to train the algorithms responsible for much of this deletion. That means there’s no way for outside observers to know whether innocent content — like Eye on Alhasakah’s — has already been fed in as training data, which would compromise the algorithm’s decision-making. In the case of Eye on Alhasakah’s takedown, “Facebook said, ‘oops, we made a mistake,’” says Dia Kayyali, the Tech and Advocacy coordinator at Witness, a human rights group focused on helping people record digital evidence of abuses. “But what if they had used the page as training data? Then that mistake has been exponentially spread throughout their system, because it’s going to train the algorithm more, and then more of that similar content that was mistakenly taken down is going to get taken down. I think that is exactly what’s happening now.” Facebook and YouTube, however, both deny this is possible. Facebook says it regularly retrains its algorithms to avoid this happening. In a statement, YouTube said: “decisions made by human reviewers help to improve the accuracy of our automated flagging systems.”
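The feedback loop Kayyali describes can be sketched as a back-of-the-envelope calculation. The numbers below are entirely invented; the point is only to show how re-using a model’s own (possibly mistaken) removals as fresh training labels can compound an initial error over successive retraining rounds.

```python
# Hypothetical illustration of a training-data feedback loop: wrongly
# removed documentation re-enters the training set as "extremist" examples,
# so the next model version repeats and amplifies the mistake.
# All figures are invented for illustration only.
rounds = 5
rate = 0.02            # assumed initial share of documentation wrongly flagged
amplification = 1.8    # assumed growth factor when mistakes become new labels

for r in range(1, rounds + 1):
    rate = min(1.0, rate * amplification)
    print(f"round {r}: ~{rate:.1%} of similar documentation wrongly removed")
```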

…….
That’s because Facebook’s policies allow some types of violence and extremism but not others — meaning decisions on whether to take content down are often based on cultural context. Has a video of an execution been shared by its perpetrators to spread fear? Or by a citizen journalist to ensure the wider world sees a grave human rights violation? A moderator’s answer to those questions could mean that of two identical videos, one remains online and the other is taken down. “This technology can’t yet effectively handle everything that is against our rules,” Saltman said. “Many of the decisions we have to make are complex and involve decisions around intent and cultural nuance which still require human eye and judgement.”

In this balancing act, it’s Facebook’s army of human moderators — many of them outsourced contractors — who carry the pole. And sometimes, they lose their footing. After several of Eye on Alhasakah’s posts were flagged by algorithms and humans alike, a Facebook moderator wrongly decided the page should be banned entirely for sharing violent videos in order to praise them — a violation of Facebook’s rules on violence and extremism, which state that some content can remain online if it is newsworthy, but not if it encourages violence or valorizes terrorism. The nuance, Facebook representatives told TIME, is important for balancing freedom of speech with a safe environment for its users — and keeping Facebook on the right side of government regulations.

Facebook’s set of rules on the topic reads like a gory textbook on ethics: beheadings, decomposed bodies, throat-slitting and cannibalism are all classed as too graphic, and thus never allowed; neither is dismemberment — unless it’s being performed in a medical setting; nor burning people, unless they are practicing self-immolation as an act of political speech, which is protected. Moderators are given discretion, however, if violent content is clearly being shared to spread awareness of human rights abuses. “In these cases, depending on how graphic the content is, we may allow it, but we place a warning screen in front of the content and limit the visibility to people aged 18 or over,” said Saltman. “We know not everyone will agree with these policies and we respect that.”

But civilian journalists operating in the heat of a civil war don’t always have time to read the fine print. And conflict monitors say it’s not enough for Facebook and YouTube to make all the decisions themselves. “Like it or not, people are using these social media platforms as a place of permanent record,” says Woods. “The social media sites don’t get to choose what’s of value and importance.”

See also: https://humanrightsdefenders.blog/2019/06/17/social-media-councils-an-answer-to-problems-of-content-moderation-and-distribution/

https://time.com/5798001/facebook-youtube-algorithms-extremism/

New academic study of UN human rights treaty system calls for online databases on impact

February 16, 2020

Christof Heyns (University of Pretoria; member of the UN Human Rights Committee) and Frans Viljoen (Director, Centre for Human Rights, University of Pretoria) reported on 11 February 2020 in OpenGlobalRights on the progress being made in a new, global academic study to answer the question “What difference does the UN human rights treaty system make, and why?”.

A comprehensive research project on the impact of the treaty system, which started some years ago, is now being expanded into a global study…. The first steps of the study were taken two decades ago by a team of researchers coordinated from the University of Pretoria, in collaboration with the UN Human Rights Office (OHCHR)….. The researchers documented numerous instances of impact, and we were in a position to draw general conclusions, published as a book and an article. Among these was the finding that the treaty system has had an enormous impact on the protection of human rights on the ground, in particular through the—recognized or unrecognized—incorporation of treaty norms into domestic law.

The following factors were found to be among those that have enhanced its impact: a strong domestic constituency for specific treaties; national action plans; and the windows of opportunity that come with a change to democracy. We also placed strong emphasis on the role of national human rights institutions in mediating impact, and on their doing follow-up.

Factors found to have limited the impact of the system included the following: concerns for State sovereignty; a lack of knowledge of the system; the absence of a robust domestic human rights culture; ineffective coordination between governmental departments; an ad-hoc approach to reporting; federalism; reprisals against human rights defenders; a preference for regional systems; and weak follow-up by treaty bodies.

We reported a rallying cry from many far-flung countries that ‘Geneva is very far’—not only in terms of geography but also in terms of accessibility and psychological ownership. And we proposed that the treaty bodies should consider holding some of their meetings away from  UN headquarters in Geneva.

Now, twenty years later, we are reviewing the same 20 countries, again with the help of researchers based in the respective countries, and again in collaboration with the OHCHR. We are asking the same questions. This study is now nearing completion, and we plan to publish it in the middle of next year, this time, with Professor Rachel Murray from Bristol University as co-editor. The data from the more recent study is still coming in. So far, the results provide further evidence of the strong impact of the system in most countries. However, a systematic analysis will only be possible once all the data has been gathered.

In the meantime, some of the issues identified in the earlier study have been taken up within the system. There is, for example, a much stronger recognition of the role of national implementation and monitoring mechanisms. The Disability Rights Convention, adopted in 2007, explicitly calls for the creation of national ‘focal points’ and the designation of national human rights institutions to promote, protect and monitor implementation of the Convention….

The need to ‘bring the system closer to the ground’  is now recognized by a range of NGOs in preparation for the 2020 review of treaty bodies. The idea of treaty body meetings outside Geneva was advanced again by Heyns and Gravett in a blog two years ago, also on the basis of the regional experience, and the first such meeting for a UN treaty body is now being planned for 2020.

During the course of these two studies, we became very aware of the importance of getting a clear picture of the impact of the system, but also of the limitations of what we were doing. With only 20 countries covered, the sample size is quite limited; and because each study provides only a snapshot at a particular moment, the findings are quickly overtaken by events. Following wide consultation, we are currently in the process of setting up an online database, where information on the impact of the system in all UN member states will be posted. The 20 country studies mentioned above, as well as the supporting documentation, will for a start be posted on a website. In the meantime, clinical groups are being formed at universities around the world, where international students are gathering the relevant information on their home countries, to be posted on the website. We anticipate that up to 50 new countries will be covered per year and that countries covered earlier will be updated. In an era of crowd-sourcing, contributions from all interested parties—NGOs, individual researchers etc.—will be solicited.

This will be a large-scale and long-term research project, but hopefully it will help to allow the collective wisdom of people anywhere in the world to ensure that the treaty system remains as effective and as responsive to the needs of our time as is possible. It is also intended, in some way, to be a response to the lament that ‘Geneva is very far’ and to ensure that the treaty system is brought closer to the actual rights-holders, even if only virtually.

The treaty system has played a pivotal role in developing the substantive norms of the global human rights project over the last six decades. The future of the treaty system depends on whether it will continue to lead the way on substance, but more is required: it will have to enhance its visibility and broaden its ownership to a global audience, and treaty norms will have to find their way into domestic law and practices. This is the gap that the new study aims to help fill.

See also: https://humanrightsdefenders.blog/2015/02/17/treaty-bodies-case-law-database-saved-and-resurrected-by-un/

https://www.openglobalrights.org/what-difference-does-un-human-rights-treaty-system-make/

Magnitsky law spawns cottage industry of sanctions lobbying

February 13, 2020
Congress passed the Magnitsky Act in 2012 to punish Russian officials accused of beating to death a whistleblower who publicized government corruption. [see also: https://humanrightsdefenders.blog/2019/08/29/european-court-rules-on-sergei-magnitskys-death/]

A decade later, the law has unwittingly spawned a multimillion-dollar lobbying cottage industry. Predictably, a number of lobbyists are gunning to remove Magnitsky penalties on their questionable clients, just as with other such sanctions laws. President Donald Trump’s impeachment lawyer, Alan Dershowitz, for example, is defending an Israeli billionaire accused of pillaging Africa, while Trump’s 2016 Tennessee state director, Darren Morris, has joined with New York law firm Pillsbury Winthrop Shaw Pittman in representing an Iraqi businessman sanctioned for allegedly bribing politicians.

But a unique facet of the Magnitsky law and subsequent amendments has created a whole new opening for more creative lobbying. Unlike similar laws blocking sanctioned parties’ US assets and banning travel to the United States, Magnitsky requires that US officials consider information from credible human rights organizations when weighing whether to apply sanctions. “That’s a pretty revolutionary provision,” said Rob Berschinski, the senior vice president for policy at Human Rights First. “Effectively, the US government has created an open inbox in which literally anyone can petition for sanctions — no matter what their motive is, no matter what the credibility of their information is.”

Berschinski’s organization is among those taking advantage of the provision, lobbying for additional Magnitsky sanctions on Saudi officials responsible for the murder of Jamal Khashoggi. The Trump administration designated 17 Saudi officials in November 2018, but not Crown Prince Mohammed bin Salman, who is believed by the CIA and UN investigators to have ordered the crime.

Global Magnitsky Human Rights Accountability Act

“The point here is, yes, 17 people were designated under Global Magnitsky,” said Berschinski, who served as deputy assistant secretary of state for democracy, human rights, and labor under President Barack Obama. “No, they are not the people who were ultimately responsible for directing the crime, and the people who were ultimately responsible need to be held accountable.”

Saudi Arabia isn’t the only Gulf target of sanctions lobbying. In recent months, lawyers for Kuwaiti private equity firm KGL Investment and its former CEO, Marsha Lazareva, have launched a multimillion-dollar campaign to threaten Kuwait with Magnitsky sanctions if it does not drop embezzlement charges against her. Working on the account are big names, including President George H.W. Bush’s son, Neil Bush; former House Foreign Affairs Committee Chairman Ed Royce, R-Calif.; former FBI Director Louis Freeh; and ex-Florida Attorney General Pam Bondi, until she joined Trump’s impeachment team. But the Lazareva camp has also consistently sought to portray her defenders as “human rights activists,” notably working with Washington nonprofit In Defense of Christians and former human rights lawyer Cherie Blair, the wife of ex-British Prime Minister Tony Blair, in its efforts.

Recent Magnitsky Act lobbying (source: Department of Justice / Congress)

Lobbying to remove sanctions:
- Freeh Sporkin & Sullivan for Israeli businessman Dan Gertler
- Pillsbury Winthrop Shaw Pittman / Morris Global Strategies for Iraqi businessman Khamis Khanjar
- Venable / Sonoran Policy Group for Serbian arms dealer Slobodan Tesic (Sonoran terminated December 2018)

Lobbying to add sanctions:
- Crowell & Moring and others on behalf of KGL Investment (sanctions on Kuwait)
- Human Rights First (sanctions for killers of Jamal Khashoggi)
- Schmitz Global Partners / Jefferson Waterman International (JWI) on behalf of fugitive Bulgarian businessman Tzvetan Vassilev (JWI terminated August 2019)

Lazareva’s champions insist she was railroaded by a corrupt judicial system and that lobbying for human rights sanctions — even if it’s spearheaded by corporate interests with deep pockets — is perfectly legitimate. To date, at least five US lawmakers have also joined the call for an investigation into Kuwait under the Magnitsky law.

“The global Magnitsky sanctions are a critical tool available to human rights NGOs to hold foreign governments accountable in cases of corruption and injustice,” said Peter Burns, government relations director for In Defense of Christians, or IDC. “IDC has advocated for their implementation in a variety of human rights and religious freedom contexts. One such case is that of Orthodox Christian businesswoman Marsha Lazareva, who is imprisoned in Kuwait on bogus corruption charges. The United States must become more effective at holding our friends, like Saudi Arabia, Egypt and Kuwait, accountable for religious freedom violations.”

IDC said it’s not getting paid for its Lazareva advocacy. But the army of lobbyists urging sanctions on Kuwaiti officials has raised concerns about the integrity of the Magnitsky process.

“Are there actors out there that I’m aware of that may not have kind of the purest motives in bringing case files? Sure,” Berschinski told Al-Monitor. “But I have confidence in the integrity of the underlying decision-making system within the US government.”

This isn’t the first time lobbyists have sought to use Magnitsky in such a fashion. Back in 2017, lobbyists for fugitive Bulgarian businessman Tzvetan Vassilev, who had been charged with money laundering and embezzlement, sought sanctions on Bulgaria. At the time, Lloyd Green, a Justice Department official under President George H. W. Bush, warned against potential abuses of the law. “The Magnitsky Act … was not designed to become a sword and shield for those alleged to have committed crimes in systems that afford due process,” he wrote in an op-ed for The Hill. It “should not be allowed to become a cudgel wielded by non-citizens as they seek to beat our allies into submission.”

Berschinski said Human Rights First was aware of both the Vassilev and Lazareva campaigns and had declined to get involved. He declined to speculate, however, on whether such lobbying campaigns undermine the voices of traditional human rights organizations. “My sense is that at the end of the day, the US government officials who are actually making the call are making the decision on whether to designate or not on the basis of a solid evidentiary basis,” he said.

Read more: https://www.al-monitor.com/pulse/originals/2020/01/magnitsky-sanctioned-lobbying-hire-cottage-industry.html#ixzz6Cc6LK5Tp

EU’s Fundamental Rights Agency has new website to serve mobile users better

February 5, 2020

The new website prominently highlights useful tools like FRA’s EU Fundamental Rights Information System (EFRIS). It steers users to key resources, such as promising practices from across the EU on how to combat hate crime or collect equality data, which they can use in their own work. In addition, country-specific information is more prominent, so users can find local information about their country. It also flags which information is available in other EU languages. Users can also sign up for project updates via email to keep abreast of the latest agency developments. The site reflects FRA’s convening power as a hub for all human rights defenders, which they can draw on for their work. It also aims to mirror FRA’s ‘communicating rights’ mantra to maximise impact and outreach, helping to make a difference for people across the EU.

Accessibility remains a key consideration in the new design of the site.

https://fra.europa.eu/en/news/2020/new-modern-fra-website-promises-better-user-experience

Annual Reports 2019: HURIDOCS harnessing the power of human rights information

December 28, 2019

The second annual report [for yesterday’s, see: https://humanrightsdefenders.blog/2019/12/27/annual-reports-2019-civicus-global-report/] comes from HURIDOCS which – before turning the page on 2019 – wants to share some highlights from the last several months:

Towards an ecosystem of interoperable human rights tools

Social media posts can contain critical evidence of abuses that will one day help deliver justice. That’s why legal advocacy group Global Legal Action Network (GLAN) and their partners are saving copies of online content that show attacks targeting civilians in Yemen. How? They’re using a new integration between Digital Evidence Vault and our Uwazi platform. Read more >>>
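As a rough illustration of what “saving copies of online content” involves at a technical level, the sketch below fetches a URL and stores the raw bytes alongside a SHA-256 hash and a retrieval timestamp, the kind of metadata that helps such material hold up as evidence later. It is a simplified stand-in, not the actual Digital Evidence Vault or Uwazi integration; the function name and file layout are hypothetical.

```python
# Simplified sketch of evidence preservation: fetch content, keep a raw
# copy, and record a hash plus timestamp for later integrity checks.
# Not the real Digital Evidence Vault / Uwazi integration.
import hashlib
import json
import urllib.request
from datetime import datetime, timezone

def preserve(url: str, out_prefix: str = "evidence") -> dict:
    data = urllib.request.urlopen(url, timeout=30).read()    # fetch the content
    record = {
        "url": url,
        "retrieved_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(data).hexdigest(),           # integrity fingerprint
    }
    with open(f"{out_prefix}.bin", "wb") as f:                 # raw copy
        f.write(data)
    with open(f"{out_prefix}.json", "w") as f:                 # metadata sidecar
        json.dump(record, f, indent=2)
    return record

# Example (hypothetical URL): preserve("https://example.org/post/123")
```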

Using machine learning to help defenders find what they need

Machine learning could have an enormous impact on the accessibility of human rights information. How? By automating parts of the time-intensive process of adding documents to a collection. In collaboration with some of our partners and Google.org Fellows, we’re working on doing just that. Check it out >>>
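One common way to automate part of that process, sketched below in hedged form, is to compare a new document against already-catalogued ones and suggest the closest matching category, so a documentalist only confirms or corrects rather than entering metadata from scratch. The categories and texts are invented; this is not HURIDOCS’ actual pipeline, only an illustration of the idea.

```python
# Illustrative sketch: suggest which existing category a new document
# belongs to, based on similarity to already-labelled documents.
# Categories and texts are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

labelled_docs = {
    "freedom of expression": "journalist detained over critical article",
    "torture": "report on ill-treatment of detainees in custody",
    "fair trial": "appeal judgment on denial of legal representation",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(labelled_docs.values())

def suggest_category(text: str) -> str:
    """Return the category of the most similar already-labelled document."""
    sims = cosine_similarity(vectorizer.transform([text]), matrix)[0]
    return list(labelled_docs)[sims.argmax()]

print(suggest_category("blogger arrested for publishing an opinion piece"))
```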

How to research human rights law for advocacy

International law can be a powerful tool for local changemakers to advance protections for human rights. But there’s no central place for finding relevant legislation, commitments and precedents. So together with Advocacy Assembly, we created a free 30-minute course to help human rights defenders navigate the information landscape. Learn more >>>

A database to magnify personal stories and identify trends

Pakistan has one of the world’s largest death rows. At the same time, 85% of death sentences are overturned on appeal. Who are the people convicted? Juveniles, people with disabilities or mental illness, and those from economically disadvantaged backgrounds. We partnered with Justice Project Pakistan to launch a database to shine a light on the situation. Take a look >>>

Improvements to our info management platform Uwazi

We rolled out several new features to Uwazi. CSV import allows for the quick creation of collections without the need to manually input large amounts of data. The activity log gives a comprehensive overview of all additions, edits and deletions (or lack thereof). And two-factor verification offers an extra layer of protection. Speaking of security, we also had Uwazi audited by a third party and made improvements based on their findings. Explore the Uwazi portfolio >>>
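Conceptually, a CSV import of this kind turns each spreadsheet row into a structured entity so a whole collection can be created in one pass. The snippet below illustrates that idea only; the column names are hypothetical and do not reflect Uwazi’s real template schema.

```python
# Conceptual sketch of a CSV import: each row becomes a structured entity
# with a title and metadata fields. Column names are hypothetical.
import csv
import io

sample_csv = """title,country,year,theme
Case A v. State,Kenya,2018,freedom of assembly
Case B v. State,Peru,2020,enforced disappearance
"""

entities = []
for row in csv.DictReader(io.StringIO(sample_csv)):
    entities.append({
        "title": row["title"],
        "metadata": {k: v for k, v in row.items() if k != "title"},
    })

print(f"created {len(entities)} entities")   # -> created 2 entities
```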

A growing, moving team and a heartfelt ‘thank you’ to Bert

We welcomed several new members to our team: two project managers, a UX designer, two software developers, and a communications coordinator. And we’re currently seeking an info management intern (deadline: 20 December 2019). We gave a warm farewell to Project Manager Hyeong-sik Yoo and Software Developer Clément Habinshuti, and said “thank you” to Senior Documentalist Bert Verstappen, who retired after 32 incredible years.

(see also: https://humanrightsdefenders.blog/2019/09/27/bertxit-bert-verstappen-leaves-huridocs-after-32-years/)

Executive Director Friedhelm Weinberg goes on parental leave. For the first three months of 2020, while he’s off, Director of Programmes Kristin Antin will be stepping in.


How can human rights defenders use new information technologies better?

November 28, 2019

Mads Gottlieb (twitter: @mads_gottlieb) wrote in Impakter about Human Rights, Technology and Partnerships and stated that these technologies have the potential to tremendously facilitate human rights defenders in their work, whether they are used to document facts about investigations or as preventive measures to avoid violations. His main message in this short article is an appeal to the human rights sector at large to use technology more creatively, to make technology upgrades a top priority, and to engage with the technology sector in this difficult endeavor. The human rights sector will never be able to develop the newest technologies, but the opportunities that technology provides are something it needs to make use of now, in collaboration with the technology sector.

…Several cases show that human rights are under threat, and that it is difficult to investigate and gather the necessary facts in time to protect them. Duterte in the Philippines ordered the police to shoot activists who demonstrated against extra-judicial killings. He later tried to reduce the funding of the Philippines National Human Rights Commission to 1 USD a year. This threat followed a period of 15 months of investigating the killings, and Duterte responded with the claim that the Commission was “useless and defended criminals’ rights.”

Zimbabwe is another country with a difficult environment for human rights defenders. It is not surprising that few people speak out, since the few that dare to demonstrate or voice opposing political views disappear. A famous example is the activist and journalist from Occupy Africa Unity Square, who was allegedly beaten in 2014 and went missing in 2015, never to be found. His disappearance occurred after a period of public demonstrations against Mugabe’s regime. Adding to the challenging conditions that call for better tools to defend human rights is the fact that many European countries are digitalising their public services. The newly introduced data platforms store and process sensitive information about the population, such as gender, ethnicity, sexual orientation and past health records: information that can easily be used for discriminatory purposes, whether intentionally or not.

Human rights defenders typically struggle to find adequate resources for their daily operations and as a result, investments in technology often come second. It is rare for human rights defenders to have anything beyond the minimum requirements, such as the internally-facing maintenance of an operational and secure internet connection, a case system, or a website. At the same time, global technology companies develop new technologies such as blockchain, artificial intelligence, and advanced data and surveillance techniques. These technologies have the potential to tremendously facilitate human rights defenders in their work, whether they are used to document facts about investigations, or as preventive measures to avoid violations. It is also important to facilitate and empower rights-holders in setting up and using networks and platforms that can help notify and verify violations quickly. 

Collaboration is an excellent problem-solving approach and human rights organizations are well aware of it. They engage in multiple partnerships with important actors. The concern is therefore not a lack of collaboration, but whether they adequately prioritize what is now the world’s leading sector — technology (the top 5 on Forbes’ list of most valuable brands are all technology companies: Apple, Google, Microsoft, Amazon, and Facebook). It is not up to the technology sector to engage with the human rights sector (whether they want to or not), but it should be a top priority for the human rights sector to try to reduce its technology gap, in the interest of human rights.

There are several partnership opportunities, and many are easy to get started with and do not require monetary investments. One opportunity is to partner with tech universities, which have the expertise to develop new types of secure, rapid monitoring systems. Blockchain embodies many of the principles that human rights work embraces, such as transparency, equality and accountability, and rapid response times are possible. So why not collaborate with universities? Another opportunity is collaborating with institutions that manage satellite images. Images provide very solid proof of changes in the landscape; examples include deforestation that threatens indigenous people, and the removal or burning of villages over a short period of time. A third opportunity is to get into dialogue with the technology giants that develop these new technologies and, rather than asking for monetary donations, ask for input on how the human rights sector can effectively leverage technology.

 

‘Bertxit’: Bert Verstappen leaves HURIDOCS after 32 years

September 27, 2019

The SIM team in 1984, Bert Verstappen on the right

It is usually not a compliment when somebody is described as ‘furniture’. But Bert Verstappen, senior documentalist at HURIDOCS, is the exception. And the furniture in mind is an expensive, solid oak Dutch cupboard where all valuables are kept. Bert Verstappen – an historian by education – started working as a conscientious objector doing alternative service at the Netherlands Institute of Human Rights (SIM) in approximately 1983, soon after I became the founding director.

He proved his value immediately by working on themes such as documenting human rights violations, and together we started a long-term research project on the practice of fact-finding by NGOs, which resulted in the first-of-its-kind publication Human Rights Missions: A Study of the Fact-Finding Practice of Non-governmental Organizations, published in 1986 by Martinus Nijhoff Publishers (ISBN: 90 247 3355 3). Moreover, as SIM had become the initial ‘secretariat’ of the new HURIDOCS network created in 1982, Bert gave a lot of support to the fledgling unit. In 1987 HURIDOCS moved to Oslo and Bert moved with it, learned Norwegian very quickly and kept the flame burning for many years. He moved to Geneva with HURIDOCS after the big Crete conference in 1992.

Bert (l.) 1993 in Geneva with Theo van Boven and Oldrich Andrysek

There he continued to coordinate the work of different task forces and co-authored essential HURIDOCS publications. He was involved in many capacity-building projects, providing expertise mainly from the documentation angle. As of 1 October 2019 he goes into retirement, but will remain involved in some HURIDOCS projects on a part-time basis until the end of the year, ensuring a “soft bertxit”.

“The development of new tools deeply changed HURIDOCS’ work throughout the years. We renew ourselves constantly. I have committed my career to this exciting challenge because I want to feel useful to human rights defenders. Their courage is a source of inspiration for all of us,” says Bert Verstappen on the HURIDOCS website.

If you want to know more about the history of HURIDOCS in which Bert has played such an important role, see:

“We were breaking new ground”

and about the organisation today: https://www.huridocs.org/who-we-are/

Information technology and human rights: 4 vacancies at HURIDOCS

June 12, 2019

As reported, HURIDOCS received a grant from the Google Artificial Intelligence Impact Challenge [see https://humanrightsdefenders.blog/2019/05/08/excellent-news-huridocs-to-receive-1-million-from-google-for-ai-work/]. Read their blog to learn more >>

For those who are passionate about using technology and information for human rights, there are the following vacancies:

Learn more about these vacancies >>

https://mailchi.mp/huridocs.org/feb2019-1444105?e=04f22f4e7f

Excellent news: HURIDOCS to receive 1 million $ from Google for AI work

May 8, 2019

Google announced on 7 May 2019 that the Geneva-based NGO HURIDOCS is one of 20 organizations that will share 25 million US dollars in grants from the Google Artificial Intelligence Impact Challenge. The Google Artificial Intelligence Impact Challenge was an open call to nonprofits, social enterprises, and research institutions to submit their ideas to use artificial intelligence (AI) to help address societal challenges. Over 2600 organizations from around the world applied.

Geneva-based HURIDOCS will receive a grant of 1 million US dollars to develop and use machine learning methods to extract, explore and connect relevant information in laws, jurisprudence, victim testimonies, and resolutions. Thanks to these methods, the NGO will work with partners to make these documents more easily and freely accessible. This will benefit anyone interested in using human rights precedents and laws, for example lawyers representing victims of human rights violations or students researching non-discrimination.

The machine learning work to liberate information from documents is grounded in more than a decade of work that HURIDOCS has done to provide free access to information. Through pioneering partnerships with the Institute for Human Rights and Development in Africa (IHRDA) and the Center for Justice and International Law (CEJIL), HURIDOCS has co-created some of the most used public human rights databases. A key challenge in creating these databases has been the time-consuming and error-prone manual adding of information – a challenge the machine learning techniques will be used to overcome.
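What “liberating information from documents” means in practice can be illustrated with a deliberately crude sketch: pulling a decision date and the treaty articles cited out of free text so they do not have to be keyed in by hand. A regex stand-in is used below purely to show the goal; HURIDOCS’ actual approach relies on trained machine learning models, as described above, and the sample text is invented.

```python
# Crude, illustrative stand-in for information extraction from a legal text:
# pull out a decision date and the articles cited so they need not be
# entered manually. Real systems would use trained ML models.
import re

decision = (
    "The Committee adopted its views on 24 July 2018, finding a violation "
    "of Article 19 and Article 21 of the Covenant."
)

date = re.search(r"\d{1,2} \w+ \d{4}", decision)
articles = re.findall(r"Article \d+", decision)

print({"decision_date": date.group(0) if date else None,
       "articles_cited": articles})
# -> {'decision_date': '24 July 2018', 'articles_cited': ['Article 19', 'Article 21']}
```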

“We have been experimenting with machine learning techniques for more than two years”, said Natalie Widmann, Artificial Intelligence Specialist at HURIDOCS. “We have changed our approach countless times, but we see a clear path to how they can be leveraged in groundbreaking ways to democratise access to information.” HURIDOCS will use the grant from Google to work with partners to co-create the solutions, carefully weighing ethical concerns of automation and focusing on social impact. All the work will be done in the open, including all code being released publicly.

“We are truly excited by the opportunity to use these technologies to address a problem that has been holding the human rights movement back”, said Friedhelm Weinberg, Executive Director of HURIDOCS. “We are thankful to Google for the support and look forward to working with their experts and what will be a fantastic cohort of co-grantees.”

“We received thousands of applications to the Google AI Impact Challenge and are excited that HURIDOCS was selected to receive funding and expertise from Google. AI is at a nascent stage when it comes to the value it can have for the social impact sector, and we look forward to seeing the outcomes of this work and considering where there is potential for us to do even more.” – Jacquelline Fuller, President of Google.org

Next week, the HURIDOCS team will travel to San Francisco to work with the other grantees, Google AI experts, Project Managers and the startup specialists from Google’s Launchpad Accelerator for a program that will last six months, from May to November 2019. Each organization will be paired with a Google expert who will meet with them regularly for coaching sessions, and will also have access to other Google resources and expert mentorship.

Download the press release in English, Spanish. Learn more about the other Google AI Impact grantees at Google’s blog.

For more on HURIDOCS history: https://www.huridocs.org/tag/history-of-huridocs/ and for some of my other posts: https://humanrightsdefenders.blog/tag/huridocs/
