The Conversation | AI technologies — like police facial recognition — discriminate against people of colour

In this article published on The Conversation website, Jane Bailey, Jacquelyn Burkell and Valerie Steeves highlight the racist biases of artificial intelligence technologies such as facial recognition, which discriminates against people of colour.

Facial recognition technology that is trained on and tuned to Caucasian faces systematically misidentifies and mislabels racialized individuals: numerous studies report that facial recognition technology is “flawed and biased, with significantly higher error rates when used against people of colour.” This undermines the individuality and humanity of racialized persons who are more likely to be misidentified as criminal. The technology — and the identification errors it makes — reflects and further entrenches long-standing social divisions that are deeply entangled with racism, sexism, homophobia, settler-colonialism and other intersecting oppressions.

To read the full article

Available in English only.

This content was last updated on October 26, 2020 at 10:00 a.m.