Overview
An op-ed calling on digital forensics vendors to recognize technology’s role as both a beneficiary and a driver of inequities, and to work toward greater transparency with customers and the public at large.
Excerpt
#TechforGood has trended for some time, but the tech world’s attempts to solve social problems have resulted in what some call “technology solutionism”: a bias that assumes tech can fix everything, or, as professor and author Ruha Benjamin puts it, that “technology itself is a do-gooding field.”
Technology, however, is not neutral because the people who fund, build, and deploy it are not neutral. Perhaps the most notorious example of this is the finding that the algorithms underpinning facial recognition (a subset of artificial intelligence) are poor at identifying people of color.
These research outcomes were significant enough to lead Amazon, Microsoft, and IBM to put a hold on selling facial recognition technology to police. However, that research focused on deployment in public settings, such as identifying criminal suspects on the street. In digital forensics, where the tools are purported to support impartial justice, the problem is compounded.
Consider child exploitation detection. These algorithms rely on a blend of age estimation and nudity detection, but the methods used to train them are highly opaque. Training datasets are sanitized and hashed: because only certain organizations can legally “possess” the contraband images, there is no way to validate what the datasets contain, in other words, whether they include an appropriate demographic blend of ages and races, much less whether the resulting models estimate what they are supposed to.
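To make the auditability gap concrete, here is a minimal illustrative sketch (not from the op-ed) assuming the shared artifacts are cryptographic SHA-256 digests; real systems often use perceptual hashes instead, but the external-review problem is similar: an outside auditor who receives only hashes cannot inspect the underlying images at all.

```python
import hashlib

def hash_image(image_bytes: bytes) -> str:
    """Return a hex digest; the original pixels cannot be recovered from it."""
    return hashlib.sha256(image_bytes).hexdigest()

# What a vendor can legally share with outside reviewers: opaque digests only.
# (The byte strings below are placeholders standing in for raw image data.)
shared_with_auditors = [
    hash_image(b"<image 1 raw bytes>"),
    hash_image(b"<image 2 raw bytes>"),
]

# Holding only these strings, an auditor cannot determine the age, race, or
# even the subject matter of the underlying images, which is why the
# demographic balance of the training set cannot be independently verified.
for digest in shared_with_auditors:
    print(digest)
```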
Vendors, then, can start by committing to explore the ways in which tech is not neutral and to own their part in that, transparently.