Posts Tagged ‘information technology’

Tribute to James Lawson of the Council of Europe

February 17, 2024

Bert Verstappen – retired from HURIDOCS – wrote the following tribute to a person who – from an intergovernmental position – contributed greatly to the development of the NGO network:

It is with deep sadness that HURIDOCS announces the unexpected passing of James Lawson on 11 February. James held a leading role in the field of information management at the Council of Europe. In addition, he devoted a huge amount of time and energy to HURIDOCS.

James was a visionary who introduced new tools and techniques for human rights information handling to the HURIDOCS network. He was keen to share his enthusiasm for developments in the field of information management that could and should benefit human rights organisations. He was convinced that, in the age of the Internet, librarians continue to play an important role as information sharers.

A major initiative taken by James was the multilingual human rights search engine HuriSearch, which he began planning in 1998. HuriSearch provided a single point of access to information published on over 5,000 websites of human rights organisations worldwide, crawling and indexing about 8 to 10 million web pages. It was publicly available from 2003 until 2016.

James was an active and committed member of the HURIDOCS Continuation Committee – as its Board was called for many years. He served on the CC from 1992 to 2009. He was HURIDOCS Treasurer and as leader of the Task Force on Software Development oversaw the development of HURIDOCS’ tools and techniques.

James was also Coordinator and Chairperson of a large number of meetings of the European Co-ordination Committee on Human Rights Documentation (https://www.ecchrd.org/). During these meetings, he introduced advances in technology such as the use of metadata and the Extensible Markup Language (XML).

James also provided various trainings on human rights documentation on behalf of HURIDOCS. Among other activities, he trained NGOs preparing evidence for the Truth and Reconciliation Commission in South Africa and was the main resource person in a training for French-speaking African trainers in Senegal. He also held HURIDOCS trainings with local human rights organisations in Burkina Faso, the DR Congo, Georgia, Ghana, Haiti, Indonesia and other countries.

HURIDOCS thanks James for his engagement and expertise. We wish strength to Hanne and his daughters.

In the deepfake era, we need to hear the Human Rights Defenders

December 19, 2023

In a blog post for the Council on Foreign Relations (18 December 2023), Raquel Vazquez Llorente argues that "artificial intelligence is increasingly used to alter and generate content online. As development of AI continues, societies and policymakers need to ensure that it incorporates fundamental human rights." Raquel is the Head of Law and Policy, Technology Threats and Opportunities at WITNESS.

The urgency of integrating human rights into the DNA of emerging technologies has never been more pressing. Through my role at WITNESS, I’ve observed first-hand the profound impact of generative AI across societies, and most importantly, on those defending democracy at the frontlines.

The recent elections in Argentina were marked by the widespread use of AI in campaigning material. Generative AI has also been used to target candidates with embarrassing content (increasingly of a sexual nature), to generate political ads, and to support candidates’ campaigns and outreach activities in India, the United States, Poland, Zambia, and Bangladesh (to name a few). The overall result of the lack of strong frameworks for the use of synthetic media in political settings has been a climate of mistrust regarding what we see or hear.

Not all digital alteration is harmful, though. Part of my work involves identifying how emerging technologies can foster positive change. For instance, with appropriate disclosure, synthetic media could be used to enhance voter education and engagement. Generative AI could help create informative content about candidates and their platforms, or of wider election processes, in different languages and formats, improving inclusivity or reducing barriers for underdog or outsider candidates. For voters with disabilities, synthetic media could provide accessible formats of election materials, such as sign language avatars or audio descriptions of written content. Satirical deepfakes could engage people who might otherwise be disinterested in politics, bringing attention to issues that might not be covered in mainstream media. We need to celebrate and protect these uses.

As two billion people around the world head to the polls next year in fifty countries, a crucial question arises: how can we build resilience into our democracy in an era of audiovisual manipulation? When AI can blur the lines between reality and fiction with increasing credibility and ease, discerning truth from falsehood becomes not just a technological battle, but a fight to uphold democracy.

From conversations with journalists, activists, technologists and other communities impacted by generative AI and deepfakes, I have learnt that the effects of synthetic media on democracy are a mix of new, old, and borrowed challenges.

Generative AI introduces a daunting new reality: inconvenient truths can be denied as deep faked, or at least facilitate claims of plausible deniability to evade accountability. The burden of proof, or perhaps more accurately, the “burden of truth” has shifted onto those circulating authentic content and holding the powerful to account. This is not just a crisis of identifying what is fake. It is also a crisis of protecting what is true. When anything and everything can be dismissed as AI-generated or manipulated, how do we elevate the real stories of those defending our democracy at the frontlines?

But AI's impact doesn't stop at new challenges; it exacerbates old inequalities. Those who are already marginalized and disenfranchised—due to their gender, ethnicity, race or belonging to a particular group—face amplified risks. AI is like a magnifying glass for exclusion, and its harms are cumulative. AI deepens existing vulnerabilities, bringing a serious threat to the principles of inclusivity and fairness that lie at the heart of democratic values. Similarly, sexual deepfakes can have an additional chilling effect, discouraging women, LGBTQ+ people and individuals from minoritized communities from participating in public life, thus eroding the diversity and representativeness that are essential for a healthy democracy.

Lastly, much as with social media, where we failed to incorporate the voices of the global majority, we have borrowed previous mistakes. The shortcomings in moderating content, combating misinformation, and protecting user privacy have had profound implications for democracy and social discourse. Similarly, in the context of AI, we have yet to see meaningful policies and regulation that not only consult globally with those impacted by AI but, more importantly, center the solutions prioritized by affected communities beyond the United States and Europe. This highlights a crucial gap: the urgent need for a global perspective in AI governance, one that learns from the failures of social media in addressing cultural and political nuances across different societies.

As we navigate AI’s impact on democracy and human rights, our approach to these challenges should be multifaceted. We must draw on a blend of strategies—ones that address the immediate ‘new’ realities of AI, respond to the ‘old’ but persistent challenges of inequality, and incorporate ‘borrowed’ wisdom from our past experiences.

First, we must ensure that new AI regulations and companies’ policies are steeped in human rights law and principles, such as those enshrined in the Universal Declaration of Human Rights. In the coming years, one of the most important areas in socio-technical expertise will be the ability to translate human rights protections into AI policies and legislation.

While anchoring new policies in human rights is crucial, we should not lose sight of the historical context of these technological advancements. We must look back as we move forward. As with technological advancements of the past, we should remind ourselves that progress is not how far you go, but how many people you bring along. We should really ask, is it technological progress if it is not inclusive, if it reproduces a disadvantage? Technological advancement that leaves people behind is not true progress; it is an illusion of progress that perpetuates inequality and systems of oppression. This past weekend marked twenty-five years since the adoption of the UN Declaration on Human Rights Defenders, which recognizes the key role of human rights defenders in realizing the Universal Declaration of Human Rights and other legally binding treaties. In the current wave of excitement around generative AI, the voices of those protecting human rights at the frontlines have rarely been more vital.

Our journey towards a future shaped by AI is also about learning from the routes we have already travelled, especially those from the social media era. Synthetic media has to be understood in the context of the broader information ecosystem. We are monetizing the spread of falsehoods while keeping local content moderators and third-party fact-checkers on precarious salaries, and putting the blame on platform users for not being educated enough to spot the fakery. The only way to align democratic values with technology goals is by both placing responsibility and establishing accountability across the whole information and AI ecosystem, from foundation model researchers to those commercializing AI tools and those creating and distributing content.

In weaving together these new, old, and borrowed strands of thought, we create a powerful blueprint for steering the course of AI. This is not just about countering a wave of digital manipulation—it is about championing technology advancement that amplifies our democratic values, deepens our global engagement, and preserves the core of our common humanity in an increasingly AI-powered and image-driven world. By centering people’s rights in AI development, we not only protect our individual freedoms, but also fortify our shared democratic future.

https://www.cfr.org/blog/protect-democracy-deepfake-era-we-need-bring-voices-those-defending-it-frontlines

Amnesty International website now accessible even in repressive countries

December 5, 2023

On 5 December 2023 Amnesty International launched its global website as an .onion site on the Tor network, giving users greater access to its work exposing and documenting human rights violations in areas where government censorship and digital surveillance are rife.

In recent years, a number of countries including Algeria, China, Iran, Russia and Viet Nam have blocked Amnesty International websites.

"By making Amnesty International's website available as a secure .onion site on Tor, more people will be able to read our human rights research and engage with the vital work of speaking truth to power, and defending human rights," said Donncha Ó Cearbhaill, Head of Security Lab at Amnesty Tech.

Audiences accessing the Amnesty.org website through Tor, however, will be able to bypass such attempts at censorship.

An .onion site is a website that is only accessible through Tor, a volunteer-run network of servers which encrypt and route internet traffic through multiple servers around the world, providing users with an added layer of privacy and anonymity.

"The onion site provides a means for individuals around the world to exercise their rights to privacy, freedom of expression, freedom of peaceful assembly, and freedom of association in a safe and secure online environment," said Donncha Ó Cearbhaill, Head of Security Lab at Amnesty Tech.

The new Amnesty onion site can be accessed using the Tor Browser through our secure onion address at: https://www.amnestyl337aduwuvpf57irfl54ggtnuera45ygcxzuftwxjvvmpuzqd.onion.

The browser must be downloaded and installed through the official Tor Project website.

How to access Amnesty websites using Tor

The Tor Project has a version of the Tor Browser for many common platforms, including Windows, Mac, Linux, and Android. Onion sites can also be accessed on iPhone through the Onion Browser app. In countries where the Tor network is blocked, visitors will also need to configure Tor bridges which help bypass attempts to block connections to the network.

Amnesty International is also making language-specific content published in Chinese, Farsi and Russian available on the Amnesty International Tor onion website.

"We are thrilled that one of the most recognized human rights organizations has adopted an onion service to provide greater online protections for those seeking information, support and advocacy. Amnesty International's choice to offer an onion version of their website underlines the critical role of this open-source privacy technology as an important tool in our shared work of advancing human rights," said Isabela Fernandes, Executive Director, the Tor Project.

What are .onion sites?

Onion services never leave the Tor network. Their location and IP addresses are hidden, making it difficult to censor them or identify their operators. In addition, all traffic between users and onion services is end-to-end encrypted. As a result, users leave no metadata trail, making it impossible for their identity or internet activity to be tracked.

Both Tor and virtual private networks (VPNs) can help internet users bypass website blocking and censorship.

Tor routes connections through a number of volunteer-run, randomly assigned servers, preventing any individual or organization from being able to track both the identity and internet activity of users, while a VPN connects through a single, privately owned server.
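The layered, multi-hop idea behind this can be sketched with a toy model. This is not how Tor's cryptography actually works (Tor negotiates circuits using authenticated public-key cryptography); the repeating-key XOR below is a deliberately simplistic stand-in, used only to show how layered encryption lets each relay peel exactly one layer without seeing the plaintext:

```python
# Toy sketch of onion-style layered encryption (illustration only --
# the XOR "cipher" here is NOT real cryptography).
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: applying the same key twice restores the input.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"hello from the client"
hop_keys = [secrets.token_bytes(16) for _ in range(3)]  # one key per relay

# The client adds one layer per relay key before sending anything.
packet = message
for key in reversed(hop_keys):
    packet = xor(packet, key)

# Each relay peels exactly one layer; no single relay sees both the
# sender and the final plaintext until the last layer is removed.
for key in hop_keys:
    packet = xor(packet, key)

assert packet == message  # only after all three hops is the message readable
```

A VPN, by contrast, corresponds to a single hop with a single operator, who can see both who you are and where your traffic is going.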

The Tor software was first released more than 20 years ago and is now developed and maintained by the Tor Project, a US-registered not-for-profit organization which is focused on advancing human rights and freedoms by creating and deploying free and open-source anonymity software and privacy technologies.

https://www.amnesty.org/en/latest/news/

HURIDOCS – who will continue Friedhelm Weinberg’s excellent leadership?

December 12, 2022

After more than 10 years, Friedhelm Weinberg will be leaving HURIDOCS in early 2023. Having worked with him in person on many occasions, I can testify that his leadership has been most impressive, both for the NGO itself [see e.g. https://humanrightsdefenders.blog/category/organisations/huridocs/] and in the area of networking with others, such as the MEA and THF [see e.g. his: https://youtu.be/zDxPbd9St9Y]. In his own announcement, he modestly refers to all his colleagues:

It has been an incredible decade with HURIDOCS, working with amazing colleagues and partners at the intersection of human rights and technology. Together, we have drastically increased support to activists to leverage technology for documentation, litigation and advocacy work. We have pioneered flexible, reliable and robust software tools such as Uwazi, while responsibly sunsetting the past generation of open source software.

None of this would have been possible without the team we have built, and that was collaborating remotely across the globe well before 2020. It’s a committed, humorous and professional bunch, and I have learned so much with every single one of them, as we made things happen and as we hit walls and then picked each other up. I am also grateful to our board that brings together wisdom from leading NGOs, technology companies, the financial sector, but, more importantly, people that were generous with guidance, encouragement and critique.

It has also been a decade of many heartbreaks. From partners whose offices have been raided, that have been declared foreign agents, threatened, attacked. From wars and conflicts breaking out, affecting people we work with. From the difficulties of all we’re doing sometimes not being enough. From worrying how to raise the money to sustain and grow a team that can rise to these challenges.

It is a bittersweet departure, because it has been life-affirming – and yet it is for a perspective that fills me with warmth and excitement. For a while, I will be with our children, with the second one due to arrive in early 2023. 

As I have made the decision to leave HURIDOCS, I also have felt really down and much of the stress built up over a decade manifested physically. Seeking treatment, I have been diagnosed with burnout and depression, and have been recovering with the support from specialists, friends and family. This is neither a badge of honor nor something I want to be shy about, it’s just the reason you haven’t seen much of me recently in professional circles. It’s getting better and I am grateful to have the time and space for healing.

Currently, Nancy Yu is leading HURIDOCS as Interim Executive Director, while Lisa Reinsberg, as Board Chair, holds the space and directs the succession process. I am grateful to both of them for stepping up and stepping in, as well as to the team, our partners and funders for a decade of working together to advance human rights.

As the search for his successor has started, please have a look at the recruitment announcement and consider applying or sharing it with suitable candidates: https://lnkd.in/e7Y7smqT

https://www.linkedin.com/feed/update/urn:li:activity:7005479545189322752/

To Counter Domestic Extremism, Human Rights First Launches Pyrra

December 26, 2021

New enterprise uses machine learning to detect extremism across online platforms

On 7 December 2021, Human Rights First announced a new enterprise, originally conceived in its Innovation Lab as Extremist Explorer, that will help to track online extremism as the threats of domestic terrorism continue to grow.

Human Rights First originally developed Extremist Explorer to monitor and challenge violent domestic actors who threaten all our human rights. To generate the level of investment needed to quickly scale up this tool, the organization launched it as a venture-backed enterprise called Pyrra Technologies.

“There is an extremist epidemic online that leads to radical violence,” said Human Rights First CEO Michael Breen. “In the 21st century, the misuse of technology by extremists is one of the greatest threats to human rights. We set up our Innovation Lab to discover, develop, and deploy new technology to both protect and promote human rights.  Pyrra is the first tool the lab has launched.”

Pyrra’s custom AI sweeps sites to detect potentially dangerous content, extremist language, violent threats, and harmful disinformation across social media sites, chatrooms, and forums.

"We're in the early innings of threats and disinformation emerging from a proliferating number of smaller social media platforms with neither the resources nor the will to remove violative content," Welton Chang, founding CEO of Pyrra and former CTO at Human Rights First, said at the launch announcement. "Pyrra takes the machine learning suite we started building at Human Rights First, greatly expands on its capabilities and combines it with a sophisticated user interface and workflow to make the work of detecting violent threats and hate speech more efficient and effective."

The Anti-Defamation League's Center on Extremism has been an early user of the technology.

"To have a real impact, it's not enough to react after an event happens; it's not enough to know how extremists operate in online spaces. We must be able to see what's next, to get ahead of extremism," said Oren Segal, Vice President, Center on Extremism at the ADL. "That's why it's been so exciting for me and my team to see how this tool has evolved over time. We've seen the insights, and how they can lead to real-world impact in the fight against hate."

 “It really is about protecting communities and our inclusive democracy,” said Heidi Beirich, PhD, Chief Strategy Officer and Co-Founder, Global Project Against Hate and Extremism.  “The amount of information has exploded, now we’re talking about massive networks and whole ecosystems – and the threats that are embedded in those places. The Holy Grail for people who work against extremism is to have an AI system that’s intuitive, easy to work with, that can help researchers track movements that are hiding out in the dark reaches of the internet. And that’s what Pyrra does.”

Moving forward, Human Rights First will continue to partner with Pyrra to monitor extremism while building more tools to confront human rights abuses. 

Kristofer Goldsmith, Advisor on Veterans Affairs and Extremism, Human Rights First and the CEO of Sparverius, researches extremism. “We have to spend days and days and days of our lives in the worst places on the internet to get extremists’ context.  But we’re at a point now where we cannot monitor all of these platforms at once. The AI powering Pyrra can,” he said.

Pyrra's users, including human rights defenders, journalists, and pro-democracy organizations, can benefit from the tool, as well as from additional extremism-monitoring tools coming from Human Rights First's Innovation Lab.

“This is a great step for the Innovation Lab,” said Goldsmith. “We’ve got many other projects like Pyrra that we hope to be launching that we expect to have real-world impact in stopping real-world violent extremism.”   

https://www.humanrightsfirst.org/press-release/counter-domestic-extremism-human-rights-first-launches-pyrra

EU Council approves conclusions on the EU Action Plan on Human Rights and Democracy 2020-2024

November 20, 2020

The Council has approved conclusions on the EU Action Plan on Human Rights and Democracy 2020-2024. The Action Plan sets out the EU’s level of ambition and priorities in this field in its relations with all third countries.

See: https://humanrightsdefenders.blog/2020/03/27/new-eu-action-plan-for-human-rights-and-democracy-2020-2024/

The conclusions acknowledge that while there have been leaps forward, there has also been a pushback against the universality and indivisibility of human rights. The ongoing COVID-19 pandemic and its socio-economic consequences have had an increasingly negative impact on all human rights, democracy and rule of law, deepening pre-existing inequalities and increasing pressure on persons in vulnerable situations.

In 2012, the EU adopted the Strategic Framework on Human Rights and Democracy which set out the principles, objectives and priorities designed to improve the effectiveness and consistency of EU policy in these areas. To implement the EU Strategic Framework of 2012, the EU has adopted two EU Action Plans (2012-2014 and 2015-2019).

The new Action Plan for 2020-2024 builds on the previous action plans and continues to focus on long-standing priorities such as supporting human rights defenders and the fight against the death penalty.

The Action Plan identifies five overarching priorities: (1) protecting and empowering individuals; (2) building resilient, inclusive and democratic societies; (3) promoting a global system for human rights and democracy; (4) new technologies: harnessing opportunities and addressing challenges; and (5) delivering by working together. It also reflects the changing context, with attention to new technologies and to the link between global environmental challenges and human rights.

https://www.consilium.europa.eu/en/press/press-releases/2020/11/19/council-approves-conclusions-on-the-eu-action-plan-on-human-rights-and-democracy-2020-2024/

DefendDefenders seeks TECHNOLOGY PROGRAMME MANAGER for Kampala office

October 29, 2020

DefendDefenders (the East and Horn of Africa Human Rights Defenders Project) seeks to strengthen the work of human rights defenders (HRDs) in the East and Horn of Africa sub-region by reducing their vulnerability to the risk of persecution and by enhancing their capacity to effectively defend human rights. DefendDefenders focuses its work on Burundi, Djibouti, Eritrea, Ethiopia, Kenya, Rwanda, Somalia (and Somaliland), South Sudan, Sudan, Tanzania, and Uganda.

DefendDefenders is recruiting a Technology Programme Manager for its work in supporting HRDs. Under the overall supervision of the Director of Programmes and Administration and the Executive Director, and in direct partnership with other staff members, the Technology Programme Manager shall be responsible for, but not limited to, the following duties:

Key Responsibilities

  • Manage and give direction to the Technology Programme and projects;
  • Empower and mentor the team to take responsibility for their tasks and encourage a spirit of teamwork;
  • Manage overall operational and financial responsibilities of the team against project plans and manage the team’s day-to-day activities;
  • Participate in management meetings and contribute to grants, proposal design and implementation for the Technology Programme;
  • Ensure proper adoption and usage of internal IT tools and organisation systems by designing training programmes for staff and streamlining /recommending systems that can improve operational efficiency;
  • Communicate regularly with other managers, Director of Programmes & Administration and the Executive Director within the organisation. Ensure that the team works closely with other departments;
  • Plan budgets and work plans from inception to completion;
  • Work with partners, consultants, and service providers to ensure delivery of project goals;
  • Design and implement the IT policy, security protocols and best practice guides for the organisation and partner organisations; and
  • Represent DefendDefenders and the Technology Programme externally, develop partnerships, and attract funding and resources

Working conditions

  • Full-time position based in Kampala, Uganda;
  • The selected applicant must be able to relocate to Kampala immediately, or within a short timeframe; and
  • Health insurance (in Uganda) and travel insurance are provided.

Requirements

  • Previous experience in managing a team;
  • Strong communication and presentation skills;
  • Strong interpersonal skills and the ability to establish and maintain effective working relationships in a culturally diverse environment;
  • Willingness to travel;
  • Self-motivation, organisation, and the ability to meet deadlines with minimal supervision;
  • Resourcefulness and problem-solving aptitude; and
  • Bachelor’s degree in Computer Science, Information Technology or related discipline, plus professional certifications.

Languages

Fluency in English is a must (spoken and written). Fluency in French is a strong asset, and in Arabic an asset.

Location

The position will be based in Kampala, Uganda, with frequent travel within and outside the country. Applicants should be eligible to work in Uganda without restriction.

Applicants should send a letter of motivation, CV, and contacts of three references to: jobs@defenddefenders.org by 15 November 2020. Do not send scanned copies of certificates. Interviews will be held in person (in Kampala, Uganda), or online late in November.

The subject line of the email should read “Application for Technology Programme Manager position.”

Questions about the position can be directed to jobs@defenddefenders.org

Frontline’s Guide to Secure Group Chat and Conferencing Tools

July 21, 2020

With teams increasingly working remotely during COVID-19, we are all facing questions regarding the security of our communication with one another: Which communication platform or tool is best to use? Which is the most secure for holding sensitive internal meetings? Which will have adequate features for online training sessions or remote courses without compromising the privacy and security of participants?

Front Line Defenders presents this simple overview which may help you choose the right tool for your specific needs.

FLD Secure Group Chat Flowchart


Note:

  • With end-to-end encryption (e2ee), your message is encrypted before it leaves your device and only decrypted when it reaches the intended recipient's device. Using e2ee is important if you plan to transmit sensitive communication, such as during internal team or partner meetings.
  • With encryption to-server, your message is encrypted before it leaves your device, but is decrypted on the server, processed, and encrypted again before being sent to the recipient(s). Encryption to-server is OK only if you fully trust the server.
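The difference between the two models can be sketched with a toy example. This is illustration only: the XOR "cipher" and the key handling are stand-ins invented for the sketch (real messengers use schemes such as the Signal protocol or TLS); the point is solely who can read the message at each hop:

```python
# Toy model of e2ee vs. to-server encryption (NOT real cryptography).
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: applying the same key twice restores the input.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

msg = b"meeting moved to 14:00"

# End-to-end: only sender and recipient share the key; the server
# merely relays ciphertext it cannot decrypt.
e2e_key = secrets.token_bytes(16)           # known to the endpoints only
seen_by_server = xor(msg, e2e_key)          # the blob the server stores
assert seen_by_server != msg                # opaque to the server
assert xor(seen_by_server, e2e_key) == msg  # recipient recovers it

# To-server: each hop is encrypted, but the server decrypts in the
# middle, so it (and anyone who compromises it) can read the plaintext.
client_server_key = secrets.token_bytes(16)
server_recipient_key = secrets.token_bytes(16)
in_transit = xor(msg, client_server_key)
plaintext_on_server = xor(in_transit, client_server_key)   # server decrypts
assert plaintext_on_server == msg           # server can read the message
re_encrypted = xor(plaintext_on_server, server_recipient_key)
assert xor(re_encrypted, server_recipient_key) == msg      # recipient reads it
```

In both cases the wire is encrypted; the difference is whether the server ever holds the plaintext, which is why "fully trusting the server" matters only in the second model.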

Why Zoom or other platforms/tools are not listed here: There are many platforms which can be used for group communication. In this guide we focused on those we think deliver good user experiences and offer the best privacy and security features. Of course, no platform can offer 100% privacy or security; in all communications there is a margin of risk. We have not included tools such as Zoom, Skype, Telegram, etc. in this guide, as we believe that the margin of risk incurred whilst using them is too wide, and therefore Front Line Defenders does not feel comfortable recommending them.

Surveillance and behaviour: Some companies like Facebook, Google, Apple and others regularly collect, analyse and monetize information about users and their online activities. Most, if not all, of us are already profiled by these companies to some extent. If the communication is encrypted to-server, the owners of the platform may store it. Even with end-to-end encryption, metadata such as location, time, whom you connect with, and how often may still be stored. If you are uncomfortable with this data being collected, stored and shared, we recommend refraining from using these companies' services.

The level of protection of your call depends not only on which platform you choose, but also on the physical security of the space you and others on the call are in and the digital protection of the devices you and others use for the call.


Caution: Use of encryption is illegal in some countries. You should understand and consider the law in your country before deciding on using any of the tools mentioned in this guide.

Criteria for selecting the tools or platforms

Before selecting any communication platform, app or program it is always strongly recommended that you research it first. Below we list some important questions to consider:

  • Is the platform mature enough? How long has it been running for? Is it still being actively developed? Does it have a large community of active developers? How many active users does it have?
  • Does the platform provide encryption? Is it end-to-end encrypted or just to-server encrypted?
  • In which jurisdiction is the owner of the platform based, and where are the servers located? Does this pose a potential challenge for you or your partners?
  • Does the platform allow for self-hosting?
  • Is the platform open source? Does it provide source code to anyone to inspect?
  • Was the platform independently audited? When was the last audit? What do experts say about the platform?
  • What is the history of the development and ownership of the platform? Have there been any security challenges? How have the owners and developers reacted to those challenges?
  • How do you connect with others? Do you need to provide phone number, email or nickname? Do you need to install a dedicated app/program? What will this app/program have access to on your device? Is it your address book, location, mic, camera, etc.?
  • What is stored on the server? What does the platform’s owner have access to?
  • Does the platform have features needed for the specific task/s you require?
  • Is the platform affordable? Consider not only potential subscription fees but also the costs of learning and implementation, hosting, and any IT support needed.
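One simple way to work through these questions systematically is to score each candidate platform against the checklist. A minimal sketch follows; the criteria labels and example answers are hypothetical placeholders, not evaluations of any real tool:

```python
# Boolean checklist derived from the questions above (placeholder labels).
CRITERIA = [
    "actively developed",
    "end-to-end encrypted",
    "acceptable jurisdiction",
    "self-hosting possible",
    "open source",
    "independently audited",
]

def score(answers):
    """Count how many criteria a platform meets (answers marked True)."""
    return sum(1 for c in CRITERIA if answers.get(c))

# Hypothetical example, not a real assessment of any platform:
example = {
    "actively developed": True,
    "end-to-end encrypted": True,
    "open source": True,
    "independently audited": False,
}
```

A raw count is of course no substitute for judgment: a platform failing a single critical criterion (say, no encryption at all) may be unacceptable regardless of its total score.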

The guide then gives more detailed information on each of the tools/services listed below:

Signal – https://signal.org/

Delta Chat – https://delta.chat/

Wire – https://wire.com/

Jitsi Meet – https://jitsi.org/jitsi-meet/

BigBlueButton – https://bigbluebutton.org/

Whereby – https://whereby.com

Blue Jeans – https://www.bluejeans.com/

GoToMeeting – https://www.gotomeeting.com/

Facetime / iMessage – https://www.apple.com/ios/facetime

Google Meet – https://meet.google.com/

Duo – https://duo.google.com/

WhatsApp – https://www.whatsapp.com/

Video calls, webinar or online training recommendations

Video calls recommendations: In the current situation you will undoubtedly find yourself organizing or participating in many more video calls than before. It may not be obvious how to do this securely, without exposing yourself and your data to unnecessary risk:

  • Assume that when you connect to a call, your camera and microphone may be turned on by default. Consider covering your camera with a sticker (making sure it doesn’t leave any sticky residue on the lens) and removing it only when you use the camera.
  • You may not want to give away too much information about your house, family pictures, notes on walls or boards, etc. Be mindful of the background: who and what is in the frame aside from yourself? Test before the call by, for example, opening meet.jit.si and clicking the GO button to enter a random empty room with your camera switched on, so you can see what is in the picture. Consider clearing your background of clutter.
  • Also be mindful of who can be heard in the background. Consider closing the door and windows, or alerting those who share your space about your meeting.
  • Video call services may collect information on your location and activity; consider using a VPN (see the guide Physical, emotional and digital protection while using home as office in times of COVID-19).
  • It is best to position your face so your eyes are roughly in the upper third of the picture, without cutting off your head. Unless you want to conceal your face, do not sit with your back to a light source or a window; daylight or a lamp in front of you is best. Stay within the camera frame. You may want to look into the lens from time to time to make “eye contact” with others. If you are using your cellphone, rest it against a steady object (e.g. a pile of books) so that the picture remains stable.
  • You may want to mute your microphone when not speaking, to prevent others from hearing you type notes or any background noise, which can be very distracting on a call.
  • If the internet connection is slow, you may want to switch off your camera, pause other programs, mute the microphone and ask others to do the same. You may also try sitting closer to the router, or connecting your computer directly to it with an ethernet cable. If you share your internet connection with others, you may ask them to limit heavy internet use for the duration of your call.
  • It is very tempting to multitask, especially during group calls, but you may soon find that you are lost in the meeting, and others may notice it too.
  • If this situation is new to you, or you are using a new calling tool, give yourself a few extra minutes before the scheduled meeting to learn and test it, and to get familiar with options like turning the camera and microphone on and off.
  • If possible, prepare and test a backup communication plan in case you have trouble connecting with others, for example adding everyone to a Signal group so you can still chat by text or troubleshoot problems with the call. It can also help to have an alternative browser installed on your computer, or an alternative app on your phone, to try connecting with.

If you would like to organise a webinar or online training, you can use the group communication tools outlined above. Some best practices include:

  • Make sure that you know who is connected. If needed, check the identities of all participants by asking them to speak. Do not assume you know who is connected just by reading the assigned names.
  • Agree on ground rules: keeping cameras on or off, muting microphones when not speaking, how participants flag that they would like to speak, who will chair the meeting, who will take notes and where and how those notes will be written and distributed, whether it is ok to take screenshots of the video call, whether it is ok to record the call, etc.
  • Agree on a clear agenda and time schedule. If your webinar is longer than one hour, it is probably best to divide it into clear one-hour sessions separated by breaks agreed with participants. Plan for the possibility that not all participants will return after a break, and have alternative ways to remind them to return, such as their Signal/Wire/Delta Chat contacts.
  • It is easiest to use a meeting service that participants can join from a browser, without needing to register or install a special program, and that gives the webinar organiser the ability to mute participants’ microphones and switch off their cameras.
  • Prior to the call, check with all participants whether they have particular needs, such as whether they are deaf or hard of hearing, blind or visually impaired, or have any other conditions that would affect their participation in the call. With this in mind, ensure that the selected platform will accommodate these needs, and test it beforehand to be sure. Simple measures can also improve inclusion and participation in your calls, such as turning on cameras when possible to allow for lip-reading.
  • Encourage all participants to speak slowly and to avoid jargon where possible, as the working language of the call is most likely not everyone’s mother tongue. Naturally, there will be moments of silence and pauses; embrace them. They support understanding, help participants who are hard of hearing as well as interpreters, and give assistive technology time to pick up words correctly.

https://www.frontlinedefenders.org/en/resource-publication/guide-secure-group-chat-and-conferencing-tools

After NSO, an India-based hacking group now targets NGOs

June 10, 2020

A multi-year investigation by Citizen Lab has unearthed a hack-for-hire group from India that targeted journalists, advocacy groups, government officials, hedge funds, and human rights defenders.

A lot has been written about the NSO group and human rights defenders [see: https://humanrightsdefenders.blog/tag/nso-group/], now another case of cyber insecurity has come up:

Jay Jay – a freelance technology writer – posted an article in Teiss on 9 June 2020 stating that Citizen Lab revealed in a blog post published Tuesday that the hack-for-hire group’s identity was established after the research lab investigated a custom URL shortener that the group used to shorten the URLs of phishing websites before targeting specific individuals and organisations. Citizen Lab has named the group “Dark Basin”.

“Over the course of our multi-year investigation, we found that Dark Basin likely conducted commercial espionage on behalf of their clients against opponents involved in high profile public events, criminal cases, financial transactions, news stories, and advocacy,” the firm said.

It added that the hack-for-hire group targeted thousands of individuals and organisations in six continents, including senior politicians, government prosecutors, CEOs, journalists, and human rights defenders, and is linked to BellTroX InfoTech Services, an India-based technology company.

….The range of targets, which included two clusters of advocacy organisations in the United States working on climate change and net neutrality, made it clear to Citizen Lab that Dark Basin was not state-sponsored but a hack-for-hire operation.

…As further proof of Dark Basin’s links with BellTroX, researchers found that several BellTroX employees boasted on LinkedIn of capabilities like email penetration, exploitation, conducting cyber intelligence operations, pinging phones, and corporate espionage. BellTroX’s LinkedIn pages also received endorsements from individuals working in various fields of corporate intelligence and private investigation, including private investigators with prior roles in the FBI, police, military, and other branches of government.

The list of organisations targeted by Dark Basin over the past few years includes Rockefeller Family Fund, Greenpeace, Conservation Law Foundation, Union of Concerned Scientists, Oil Change International, Center for International Environmental Law, Climate Investigations Center, Public Citizen, and 350.org. The hack-for-hire group also targeted several environmentalists and individuals involved in the #ExxonKnew campaign that wanted Exxon to face trial for hiding facts about climate change for decades.

A separate investigation into Dark Basin by NortonLifeLock Labs, which named the group “Mercenary.Amanda”, revealed that the hack-for-hire group has executed persistent credential spearphishing against a variety of targets in several industries around the globe, going back to at least 2013…

https://www.teiss.co.uk/indian-hack-for-hire-group-phishing/

https://thewire.in/tech/spyware-rights-activists-lawyers-citizen-lab

https://scroll.in/latest/964803/nine-activists-most-of-them-working-to-release-bhima-koregaon-accused-targets-of-spyware-amnesty

Also: Hack-for-hire firms spoofing WHO accounts to target organisations worldwide

Annual Reports 2019: HURIDOCS harnessing the power of human rights information

December 28, 2019

The second annual report [for yesterday’s, see: https://humanrightsdefenders.blog/2019/12/27/annual-reports-2019-civicus-global-report/] comes from HURIDOCS which – before turning the page on 2019 – wants to share some highlights from the last several months:

Towards an ecosystem of interoperable human rights tools

Social media posts can contain critical evidence of abuses that will one day help deliver justice. That’s why legal advocacy group Global Legal Action Network (GLAN) and their partners are saving copies of online content that show attacks targeting civilians in Yemen. How? They’re using a new integration between Digital Evidence Vault and our Uwazi platform. Read more >>>

Using machine learning to help defenders find what they need

Machine learning could have an enormous impact on the accessibility of human rights information. How? By automating parts of the time-intensive process of adding documents to a collection. In collaboration with some of our partners and Google.org Fellows, we’re working on doing just that. Check it out >>>

How to research human rights law for advocacy

International law can be a powerful tool for local changemakers to advance protections for human rights. But there’s no central place for finding relevant legislation, commitments and precedents. So together with Advocacy Assembly, we created a free 30-minute course to help human rights defenders navigate the information landscape. Learn more >>>

A database to magnify personal stories and identify trends

Pakistan has one of the world’s largest death rows. At the same time, 85% of death sentences are overturned on appeal. Who are the people convicted? Juveniles, people with disabilities or mental illness, and those from economically disadvantaged backgrounds. We partnered with Justice Project Pakistan to launch a database to shine a light on the situation. Take a look >>>

Improvements to our info management platform Uwazi

We rolled out several new features to Uwazi. CSV import allows for the quick creation of collections without the need to manually input large amounts of data. The activity log gives a comprehensive overview of all additions, edits and deletions (or lack thereof). And two-factor verification offers an extra layer of protection. Speaking of security, we also had Uwazi audited by a third party and made improvements based on their findings. Explore the Uwazi portfolio >>>

A growing, moving team and a heartfelt ‘thank you’ to Bert

We welcomed several new members to our team: two project managers, a UX designer, two software developers, and a communications coordinator. And we’re currently seeking an info management intern (deadline: 20 December 2019). We gave a warm farewell to Project Manager Hyeong-sik Yoo and Software Developer Clément Habinshuti, and said “thank you” to Senior Documentalist Bert Verstappen, who retired after 32 incredible years.

(see also: https://humanrightsdefenders.blog/2019/09/27/bertxit-bert-verstappen-leaves-huridocs-after-32-years/)

Executive Director Friedhelm Weinberg goes on parental leave. For the first three months of 2020, while he is off, Director of Programmes Kristin Antin will step in.
