Uncanny Valley: Forensic Architecture

Forensic Architecture uses machine learning to research human rights violations around the world. One challenge it faces is ensuring the fairness of the technology it uses. Many machine-learning processes are inscrutable—often originating in military contexts and operating according to concealed mechanisms—which makes it difficult to determine exactly how they reach their outcomes.

The group uses synthetic images—digitally created photorealistic images—to train machine-learning classifiers, algorithmic processes that can identify types of objects, such as cats or bridges. It trains its classifiers to recognize objects used in human rights offenses. By conducting this training in a controlled process, Forensic Architecture hopes to increase accountability not only for human rights violators but also within the machine-learning systems it uses.

Research has shown that training classifiers on a wide range of synthetic images—depicting the object in question across many different environments—improves the algorithms’ ability to identify those objects in their natural environments. Forensic Architecture has taken this approach to an extreme, using fantastical, unreal patterns, backdrops, and deformations in its synthetic images.
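In the computer-vision literature, this wide-variation strategy is often called domain randomization. The sketch below illustrates the general idea only, not Forensic Architecture’s actual pipeline: it assumes Pillow and NumPy are installed, and the render file name is hypothetical. An object render is pasted, at a random rotation, scale, and position, onto a randomly generated backdrop.

```python
import random
import numpy as np
from PIL import Image

def random_background(size=(512, 512)):
    """Fantastical backdrop: pure colored noise, the kind of 'unreal'
    context that domain randomization deliberately embraces."""
    noise = np.random.randint(0, 256, (size[1], size[0], 3), dtype=np.uint8)
    return Image.fromarray(noise, mode="RGB")

def synthesize(object_render):
    """Paste an RGBA object render onto a random backdrop at a random
    rotation, scale, and position, yielding one synthetic training image."""
    bg = random_background()
    obj = object_render.rotate(random.uniform(0, 360), expand=True)
    scale = random.uniform(0.3, 1.0)
    obj = obj.resize((int(obj.width * scale), int(obj.height * scale)))
    x = random.randint(0, max(0, bg.width - obj.width))
    y = random.randint(0, max(0, bg.height - obj.height))
    bg.paste(obj, (x, y), mask=obj)  # the alpha channel keeps the silhouette
    return bg

# Hypothetical usage: 'render.png' stands in for one object render.
# canister = Image.open("render.png").convert("RGBA")
# training_image = synthesize(canister)
```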

This practice encouraged the group to consider analogous questions relating to the use of machine learning by social-media companies. If extreme variations have been shown to improve classifiers’ predictive performance, isn’t it in the best interest of social-media companies to encourage extreme variations of online political and social behaviors on their platforms? Wouldn’t such activity improve the predictive capacity of their machine-learning algorithms?

 

\ Model Zoo, 2020

Video; duration: 15 min.
Courtesy of the artist

Computer vision relies on classifiers—algorithmic processes that identify types of objects. The process of training a classifier to recognize an object involves feeding it thousands of images of that object in different conditions and contexts.
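As a generic sketch of what that training process can look like in code (an illustration under stated assumptions, not the group’s actual tooling), the following assumes PyTorch and torchvision, a hypothetical folder of labeled images, and a pretrained backbone that is fine-tuned rather than trained from scratch:

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Hypothetical dataset layout: one subfolder of images per object class.
tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("training_images/", transform=tf)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Fine-tune a pretrained backbone; replace its final layer to match
# the number of object classes in the dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```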

However, certain objects, such as particular kinds of banned munitions, are ill-suited to this process. There are often too few images of such objects available, and collecting and annotating those that do exist can be extremely labor-intensive.

Since 2018, Forensic Architecture has used synthetic images—photorealistic digital renderings based on 3-D models—to train classifiers to identify such munitions. Using automated processes to deploy those classifiers has the potential to save months of manual, human-directed research.
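Such automated deployment might look roughly like the sketch below, which scores every Nth frame of a video and records the timestamps where the classifier is confident it sees the target object. It assumes OpenCV and Pillow, plus a model and transform like those in the previous sketch; the class index and confidence threshold are illustrative assumptions.

```python
import cv2
import torch
from PIL import Image

def flag_frames(video_path, model, transform, threshold=0.9, stride=30):
    """Score every `stride`-th frame; return timestamps (in seconds) of
    frames where the classifier is confident it sees the target object
    (assumed here to be class index 0)."""
    hits = []
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is missing
    index = 0
    model.eval()
    with torch.no_grad():
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % stride == 0:
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                tensor = transform(Image.fromarray(rgb)).unsqueeze(0)
                probs = torch.softmax(model(tensor), dim=1)
                if probs[0, 0].item() >= threshold:
                    hits.append(index / fps)
            index += 1
    cap.release()
    return hits
```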

Model Zoo presents a collection of munitions and weapons alongside the classifiers trained to identify them—constituting a catalogue of some of the most horrific weapons used in conflict today.

 

\ Synthetic Images: Incremental Variation, 2020

Vinyl
Courtesy of the artist

The 37–40mm projectiles featured in this wallpaper are among the most common tear-gas munitions deployed against protesters around the world, including in Hong Kong, Chile, the United States, Venezuela, and Sudan. Forensic Architecture is developing techniques to automate the search for and identification of such projectiles within the huge number of videos posted online.

On this wall are one thousand renderings of such munitions, with different degrees of deformation, scratches, charring, and labels. Used to train machine-learning classifiers, they are part of a much larger archive of variations of this commonly found object.
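One plausible way to drive a thousand such renderings is to sample a recipe of degradation parameters for each one and hand it to the rendering software. The sketch below is an assumption: the parameter names are hypothetical, chosen to mirror the deformation, scratches, charring, and labels described above, and the actual rendering would happen in a 3-D package.

```python
import random

# Hypothetical degradation parameters, each drawn independently per render.
VARIATION_PARAMS = {
    "deformation": lambda: random.uniform(0.0, 1.0),  # crush/dent amount
    "scratches":   lambda: random.uniform(0.0, 1.0),  # surface wear
    "charring":    lambda: random.uniform(0.0, 1.0),  # burn darkening
    "label":       lambda: random.choice([None, "warning", "lot_number"]),
}

def sample_variations(n=1000, seed=0):
    """Draw n independent variation recipes; each recipe would drive
    one rendering of the munition model."""
    random.seed(seed)
    return [
        {name: draw() for name, draw in VARIATION_PARAMS.items()}
        for _ in range(n)
    ]

recipes = sample_variations()
print(recipes[0])  # e.g. {'deformation': 0.84, 'scratches': 0.76, ...}
```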

 

\ Synthetic Images: Extreme Objects, 2020

Fifteen 3-D prints in full-color Vero
Courtesy of the artist

Machine-learning classifiers trained on rendered images of 3-D models, known as “synthetic data,” have been shown to improve in accuracy when they are also trained on extreme variations of the modeled object. The projectile modeled here appears both in realistic forms and textured with random patterns and images. These extreme variations of the projectile’s appearance help the classifier better recognize its shape, contours, and edges. Such extreme objects also challenge the threshold of machine perception and object identifiability.
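As a rough illustration of how such extreme surface variations might be generated procedurally (an assumption-laden sketch, not the artists’ method), the function below builds textures from random sine interference patterns. Applied to the model’s surface before rendering, they vary appearance wildly while the silhouette stays fixed, pushing a classifier toward shape, contour, and edge cues.

```python
import numpy as np
from PIL import Image

def extreme_texture(size=512, seed=None):
    """Procedural 'unreal' texture: one random sine interference pattern
    per RGB channel, with random frequencies and phases."""
    rng = np.random.default_rng(seed)
    grid = np.mgrid[0:size, 0:size].astype(np.float32)
    y, x = grid
    channels = []
    for _ in range(3):
        fx, fy = rng.uniform(0.01, 0.2, size=2)
        phase = rng.uniform(0.0, 2.0 * np.pi)
        wave = np.sin(fx * x + fy * y + phase)            # values in [-1, 1]
        channels.append(((wave + 1.0) * 127.5).astype(np.uint8))
    return Image.fromarray(np.stack(channels, axis=-1), mode="RGB")

# Hypothetical usage: write texture files for a 3-D package to apply
# to the projectile model before rendering.
# for i in range(15):
#     extreme_texture(seed=i).save(f"extreme_texture_{i:02d}.png")
```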

 

\ Triple-Chaser, 2019

Video; duration: 10:35 min.
Courtesy of the artist and Praxis Films
Commissioned by the Whitney Museum of American Art, New York

The Safariland Group is one of the world’s major manufacturers of tear gas and other so-called “less-lethal” munitions. The company is owned by Warren B. Kanders, formerly the vice chair of the board of trustees of the Whitney Museum of American Art.

In response to its invitation to the 2019 Whitney Biennial—and in solidarity with museum staff members who had for months been protesting Kanders—Forensic Architecture began a project using machine-learning classifiers to identify Safariland tear gas in images found online.

Working with New York–based Praxis Films, the group presented a video explaining the machine-learning, synthetic-image-generation, and photorealistic-modeling techniques it used to detect Safariland canisters among the millions of images shared online by activists, protesters, and their allies.

Its research linked Safariland to human rights abuses committed by a wide spectrum of settler-colonial, liberal-democratic, and authoritarian states. The group’s work also exposed Kanders’s connection, through the US bullet manufacturer Sierra Bullets, to sniper violence committed by Israeli occupation forces against Palestinians in Gaza. On July 25, 2019, after months of protests and a growing artist boycott led by New York–based anti-colonial and anti-poverty activist groups, Kanders resigned from the Whitney’s board.
