Human rights activists are seeking to use artificial intelligence (AI) to help them prove war crimes in court.

Alarmed by the country’s expanding civil war, Saudi Arabia launched an air campaign against Yemen in 2015 to stop what it saw as a dangerous rise of Shia power. Saudi authorities said the operation, launched alongside eight other mostly Sunni Arab states, was supposed to last only a few weeks. Nearly five years later, it still hasn’t ended.

According to some estimates, the coalition has carried out over 20,000 air strikes since then, many of them killing Yemeni civilians and destroying their property, reportedly in violation of international law. Human rights organizations have since worked to chronicle these apparent war crimes in the hope of ending them through legal means. But on-the-ground verification by journalists and activists, the gold standard, is frequently too risky to be practical. Instead, to better understand the violence, groups are increasingly crowdsourcing smartphone photos and videos, which they now submit to courts to complement eyewitness testimony.

However, as digital documentation of battle scenes has become more common, the time it takes to examine it has increased dramatically. Investigators who must search through and view the footage can be traumatized by its awful imagery. Now a machine-learning alternative is being developed as part of a project that will soon be tested in UK courts. It could serve as a model for making crowdsourced evidence more manageable and for helping human rights organizations tap more diverse sources of data.

The project, led by Swansea University in the United Kingdom together with a number of human rights organizations, is part of a larger effort to track alleged war crimes in Yemen and increase legal accountability. In 2017, Yemeni Archive began building a database of videos and images documenting the atrocities. Content was gathered from thousands of sources, including submissions from journalists and civilians as well as open-source footage from social-media platforms like YouTube and Facebook, and stored on a blockchain to prevent tampering.
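
The article doesn’t detail Yemeni Archive’s blockchain setup, but the underlying tamper-evidence idea can be sketched as a simple hash chain: each record stores a fingerprint of the file plus the hash of the previous record, so editing any earlier entry breaks every link after it. A minimal illustration in Python, with hypothetical record fields and sources:

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    """Content fingerprint: changes if even one byte of the file changes."""
    return hashlib.sha256(data).hexdigest()

def append_record(chain: list, file_bytes: bytes, source: str) -> dict:
    """Append a tamper-evident record linked to the previous one."""
    prev_hash = chain[-1]["record_hash"] if chain else "0" * 64
    record = {
        "content_hash": sha256_hex(file_bytes),
        "source": source,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    # Hash the record itself so altering any field breaks the chain.
    record["record_hash"] = sha256_hex(json.dumps(record, sort_keys=True).encode())
    chain.append(record)
    return record

def verify(chain: list) -> bool:
    """Recompute every link; returns False if any record was edited."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "record_hash"}
        if rec["prev_hash"] != prev:
            return False
        if sha256_hex(json.dumps(body, sort_keys=True).encode()) != rec["record_hash"]:
            return False
        prev = rec["record_hash"]
    return True

chain = []
append_record(chain, b"<video bytes>", "youtube")
append_record(chain, b"<photo bytes>", "journalist upload")
assert verify(chain)
```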

The investigators then began curating evidence of specific human rights violations into a separate database and mounting legal cases in various domestic and international courts, in collaboration with the Global Legal Action Network (GLAN), a nonprofit that pursues legal action against states and other powerful actors over such abuses.

“If things are coming through courtroom accountability processes, it’s not enough to show that this happened,” says Yvonne McDermott Rees, a professor at Swansea University and the initiative’s lead.

“You have to say, ‘Well, this is why it’s a war crime.’ That might be ‘You’ve used a weapon that’s illegal,’ or in the case of an air strike, ‘This targeted civilians’ or ‘This was a disproportionate attack.’”

In this case, the partners are focusing on the BLU-63, a cluster munition made in the United States. Cluster munitions, explosive weapons that scatter smaller bombs on impact, are banned in 108 countries, including the UK. If the partners can show in a UK court that they were used to commit war crimes, that will add to a growing body of evidence that the Saudi-led coalition has a history of breaking international law, and it will bolster the argument that the UK should stop selling weapons to Saudi Arabia or bring criminal charges against individuals involved in the sales.

The partners therefore decided to build a machine-learning system to detect every instance of the BLU-63 in the database. But because BLU-63s are banned, photographs of them are rare, and the researchers had too little real-world data to train their algorithm. As a workaround, the team produced a synthetic data set by rendering 3D models of the BLU-63.
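
The article doesn’t name the model architecture the team used. A common recipe when real examples are this scarce is to fine-tune a pretrained image classifier on the synthetic renders; here is a minimal sketch with PyTorch and torchvision, in which the folder layout, class names, and training settings are all assumptions:

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for a pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: synthetic/blu63/*.png and synthetic/negative/*.png
# (distractors like the green baseball serve as hard negatives).
train_set = datasets.ImageFolder("synthetic/", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Reuse a pretrained backbone; retrain only the final classification layer.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # blu63 vs. not-blu63

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Fine-tuning only the final layer keeps the data requirement low, since the pretrained backbone already provides general visual features.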

The partners worked with Adam Harvey, a computer-vision expert, to create the reconstructions from the few existing samples they had, including a photo of the munition preserved by the Imperial War Museum. Harvey built the model from the ground up, adding photorealistic texturing, various types of damage, and different decals. He then rendered it under a range of lighting conditions and scene settings to create hundreds of still images simulating how the munition might be found in the wild. To lower the false-positive rate, he also created synthetic data of objects that could be mistaken for the munition, such as a green baseball.
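
Harvey’s pipeline renders a full 3D model; as a simplified stand-in, the same idea of varying scale, orientation, position, and lighting can be illustrated with 2D cut-and-paste compositing, pasting a rendered cutout onto assorted background photos. All paths and parameter ranges below are hypothetical:

```python
import random
from pathlib import Path
from PIL import Image, ImageEnhance

def composite(fg_path: str, bg_path: str, out_path: str) -> None:
    """Paste a munition cutout onto a background with randomized
    scale, rotation, position, and brightness."""
    fg = Image.open(fg_path).convert("RGBA")   # cutout rendered with alpha
    bg = Image.open(bg_path).convert("RGB").resize((640, 480))

    scale = random.uniform(0.1, 0.4)           # apparent size in the scene
    w = int(bg.width * scale)
    h = int(fg.height * w / fg.width)          # assumes a roughly square cutout
    fg = fg.resize((w, h)).rotate(random.uniform(0, 360), expand=True)

    x = random.randint(0, max(0, bg.width - fg.width))
    y = random.randint(0, max(0, bg.height - fg.height))
    bg.paste(fg, (x, y), fg)                   # alpha-aware paste

    # Crude lighting variation: randomize overall brightness.
    bg = ImageEnhance.Brightness(bg).enhance(random.uniform(0.5, 1.5))
    bg.save(out_path)

# Hypothetical inputs: one cutout render, many background photos.
for i, bg in enumerate(Path("backgrounds/").glob("*.jpg")):
    composite("renders/blu63_cutout.png", str(bg), f"synthetic/blu63/{i:05d}.png")
```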

While Harvey is still producing more training examples (he estimates he’ll need over 2,000), the current system already performs well: human experts have confirmed that over 90% of the videos and photographs it pulls from the database contain BLU-63s. He is now building a more realistic validation data set by 3D-printing and painting replicas of the munition to look as real as possible. Once the system has been thoroughly tested, the team hopes to run it across the whole Yemeni Archive, which holds 5.9 billion video frames of material. According to Harvey, combing through that much data manually would take 2,750 days of continuous viewing, 24 hours a day. The machine-learning system, running on a standard workstation, would take about 30 days.
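
Those figures are consistent with a human watching the archive in real time versus a machine scanning thousands of frames per second. A back-of-the-envelope check, where the ~25 fps playback rate is an assumption and only the frame count and day totals come from the article:

```python
frames = 5.9e9                       # frames in the Yemeni Archive (from the article)

# Human review: watching nonstop at an assumed playback rate of ~25 fps.
human_days = frames / 25 / 86400     # ~2,731 days, close to the quoted 2,750

# A machine scan finishing in ~30 days implies this sustained inference rate:
machine_fps = frames / (30 * 86400)  # ~2,276 frames per second

print(f"human: {human_days:,.0f} days, machine: {machine_fps:,.0f} fps")
```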

Human experts would still need to review the footage after the system has filtered it, but the gain in efficiency is a game changer for human rights groups seeking to mount legal challenges. It’s not uncommon for these groups to collect and store huge volumes of video from eyewitnesses. Amnesty International, for example, holds on the order of 1 terabyte of video documenting possible violations in Myanmar, according to McDermott Rees. Machine-learning techniques could let organizations comb through such archives and demonstrate patterns of human rights violations on a previously unimaginable scale, making the evidence far harder for courts to dismiss.

“When you’re looking at, for example, the targeting of hospitals, having one video that shows a hospital being targeted is strong; it makes a case,” says Jeff Deutch, the lead researcher at Syrian Archive, the human rights group that launched Yemeni Archive. “But if you can show hundreds of videos of hundreds of incidents of hospitals being targeted, you can see that this is really a deliberate strategy of war. When things are seen as deliberate, it becomes more possible to identify intent. And intent might be something useful for legal cases in terms of accountability for war crimes.”

Evidence on this scale will be especially important when the Yemen partners bring their case. The Saudi-led air-strike coalition has previously disputed allegations of war crimes, and the UK government has accepted its account as the official record. In an earlier case brought by a different organization to stop the government from selling arms to Saudi Arabia, a UK court likewise dismissed the open-source video evidence as insufficiently credible. Although a higher court later rejected some of those criticisms on appeal, the partners hope the sheer volume of evidence will head off any such dispute this time. Open-source videos have already led to convictions in a Syrian context, McDermott Rees notes.

This isn’t the first time machine learning has been used to screen evidence in a human rights context. The E-Lamp system, a video-analysis toolbox for human rights work developed at Carnegie Mellon University, has been used to examine archives of the Syrian conflict. Harvey has also worked with some of his current collaborators to track down munitions used in Syria. The Yemen effort, however, will be one of the first in which such evidence is tested in court, and it could serve as a model for other human rights groups.

“Although this is an emerging field, it’s a tremendous opportunity,” says Sam Gregory, program director of the human rights nonprofit Witness and cochair of the Partnership on AI’s working group. “[It’s] also about leveling the playing field in access to AI and utilization of AI so as to turn both eyewitness evidence and perpetrator-shot footage into justice.”

Source: MIT Technology Review
