Racism in Amazon’s Facial Recognition?
Reading the article by Thrift, Tickell et al. got me interested in the extent to which the media, and the public in general, tend to attribute heavily object-mediated processes to human agency. This seems particularly pertinent given the increasing prevalence of AI, which straddles the line between object and agent in a way that creates deep ambivalence.
One instance related to this theme is an article comparing the facial recognition technologies offered by tech giants such as Microsoft, IBM, and Amazon. When analyzing the faces of people with darker skin, Amazon’s technology in particular had trouble determining the person’s gender.
Article here: https://www.bbc.co.uk/news/technology-47117299
How do we respond to behaviours that, if displayed by people, would be judged discriminatory or offensive? Should companies be implicated as embodying those negative qualities on the basis of an object’s shortcomings, or should failures like this be treated as a technical hiccup? The fact that the article was written at all, and that Amazon characterized the claims of racism as ‘misleading’, seems to imply that both the BBC and Amazon buy into the notion that technologies point back to people, at least to some extent.
Another incident with Amazon involved an employee recruitment algorithm that tended to favour male candidates, though in that case some have claimed the system was trained on data submitted largely by men.
https://www.bbc.co.uk/news/technology-45809919
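The mechanism reported in that second article can be sketched with a toy model. Everything below is hypothetical (the résumé snippets, the scoring rule, the word weights); it is not Amazon’s actual system, only a minimal illustration of how a model trained on male-skewed historical data can end up penalizing female-coded words without anyone intending it to:

```python
from collections import Counter

# Hypothetical training data: past résumés labelled by hiring outcome.
# Because most past hires were men, male-coded terms dominate the
# "hired" pile purely as a historical artifact.
hired = [
    "captain men's chess club engineer",
    "men's rugby team software engineer",
    "engineer men's debate society",
]
rejected = [
    "captain women's chess club engineer",
]

def word_scores(accepted, declined):
    """Weight each word by how much more often it appears in accepted
    résumés than in declined ones (a crude frequency-based model)."""
    pos, neg = Counter(), Counter()
    for r in accepted:
        pos.update(r.split())
    for r in declined:
        neg.update(r.split())
    return {w: pos[w] - neg[w] for w in pos | neg}

def score(resume, weights):
    return sum(weights.get(w, 0) for w in resume.split())

weights = word_scores(hired, rejected)

# Two candidates with identical qualifications, differing only in
# which club they captained:
a = score("captain men's chess club engineer", weights)
b = score("captain women's chess club engineer", weights)
# a > b: the model downgrades "women's" only because past hires
# skewed male, not because of any judgment about merit.
```

The point of the sketch is that no line of this code mentions gender as a criterion; the bias arrives entirely through the training data, which is roughly what was claimed about the recruitment tool.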
Contributed by JosephBommarito on 04/02/2019