How AI Could Help the Fight for Accountability and Justice

This time last year, I was in Hong Kong, meeting with human rights activists and documenting the large pro-democracy demonstrations. The police were cracking down on protestors, using excessive force in the streets, and perpetrating abuse behind closed doors. An independent inquiry into police abuses was, and is, crucial – and I proposed just that.

Gathering evidence of police violence against peaceful protesters wasn’t difficult. On a typical Saturday evening on Nathan Road, thousands of young people would be marching and singing, and many, if not most, would be using their phones to film police deploying tear gas, batons, and other weapons throughout the night. At the same time, camera operators, both amateur and professional, captured footage that circulated around the world via news broadcasts and social media.

A generation ago, the problem was finding photographs or footage to expose human rights violations. Now the problem is the reverse; there’s so much visual evidence of potential abuses that the volume can be overwhelming. We’ve recently heard from human rights defenders in Nigeria and Belarus who have mountains of footage of protests and clashes.

“Protests that stretch over weeks and months, where you have multiple team members deployed, will result in a daunting amount of video to review,” says Rob Godden of Rights Exposure, who was part of a team videoing protests in Hong Kong.

Artificial intelligence, specifically its computer vision subfield, could offer solutions. It should be possible to train machine-learning algorithms to pick out relevant pieces of footage automatically.

Work on developing such a capability has been underway for some years. There’s substantial academic research on detecting fights or violent acts in videos, but one challenge lies in recognizing different types of violence in the context of law enforcement. AI researchers also note that complicated crowd scenes present another difficulty.

The technology is still emerging, but it is promising. Before long, computers should be able to process huge amounts of material quickly, weeding out irrelevant footage while recognizing police uniforms, weapons, and deployments. A more advanced application could recognize specific forms of use of force by police against protesters, allowing researchers to focus on the context of the interaction. Whether it’s a baton swung, a gun fired, or a water cannon unleashed, the moments that precede an incident provide information that can help researchers determine whether the footage reveals what it appears to.
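To make that first triage step concrete, here is a minimal sketch of how it might work: sample frames from a clip and run an off-the-shelf object detector, flagging moments crowded with people as a crude proxy for protest scenes worth human review. The pretrained model below only knows generic categories such as “person”; recognizing police uniforms, weapons, or specific uses of force would require fine-tuning on labeled footage. The function name, thresholds, and sampling rate are illustrative assumptions, not a description of any existing tool.

```python
import cv2                      # OpenCV, used here only to read video frames
import torch
import torchvision

# Off-the-shelf detector pretrained on COCO; it knows generic classes such as
# "person", but nothing about police uniforms or specific weapons.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

PERSON_CLASS_ID = 1             # COCO label index for "person"

def flag_crowd_frames(video_path, min_people=20, score_threshold=0.7,
                      sample_every_s=5.0):
    """Return timestamps (in seconds) of sampled frames containing many
    detected people: a crude stand-in for 'footage worth reviewing'."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * sample_every_s))
    flagged, frame_idx = [], 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            # OpenCV yields BGR uint8; the detector expects RGB floats in [0, 1].
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
            with torch.no_grad():
                detections = model([tensor])[0]
            people = sum(
                1
                for label, score in zip(detections["labels"], detections["scores"])
                if label.item() == PERSON_CLASS_ID and score.item() >= score_threshold
            )
            if people >= min_people:
                flagged.append(frame_idx / fps)
        frame_idx += 1

    cap.release()
    return flagged
```

Every timestamp a sketch like this returns would still need human review; the point is only to shrink the haystack.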

“Machine learning would be very useful in shortening the time needed to process footage, helping identify the most important events for use immediately, and facilitating retrieval in the future,” said Godden.

Being able to tag such footage automatically would be a godsend for human rights researchers and lawyers, allowing us to find and present evidence quickly. Across the world, including in the United States, it would help us successfully prosecute police and security forces that use illegal violence.
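As one hedged sketch of what automatic tagging could look like in practice, the snippet below stores machine-generated tags with timestamps in a small searchable index, so that a researcher could pull up, say, every high-confidence “baton” moment across hundreds of clips. The schema, tag names, and helper functions (record_event, find_events) are hypothetical illustrations, not an existing tool.

```python
import sqlite3

# A tiny index of machine-generated tags, kept in a single SQLite file so that
# flagged moments can be searched long after the footage was first processed.
conn = sqlite3.connect("footage_index.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS events (
           clip_path    TEXT,   -- source video file
           timestamp_s  REAL,   -- offset into the clip, in seconds
           tag          TEXT,   -- e.g. 'crowd', 'police_uniform', 'baton'
           confidence   REAL    -- model score, kept so humans can re-check
       )"""
)

def record_event(clip_path, timestamp_s, tag, confidence):
    """Store one automatically generated tag for later retrieval."""
    conn.execute(
        "INSERT INTO events VALUES (?, ?, ?, ?)",
        (clip_path, timestamp_s, tag, confidence),
    )
    conn.commit()

def find_events(tag, min_confidence=0.8):
    """Return flagged moments for a given tag, most confident first."""
    return conn.execute(
        "SELECT clip_path, timestamp_s, confidence FROM events "
        "WHERE tag = ? AND confidence >= ? ORDER BY confidence DESC",
        (tag, min_confidence),
    ).fetchall()
```

An index like this is only as trustworthy as the model behind it, which is why confidence scores are kept: humans would verify each tagged moment before it is presented in court.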

And the impact could be felt beyond the courtroom; as the videotaped police killing of George Floyd showed, certain footage can galvanize movements and spur political change.

Author: Brian Dooley

Published on December 11, 2020
