Coded Bias and the Ethics of Facial Recognition 

The documentary Coded Bias examines facial recognition policies that have been implemented in various countries. These policies are controversial, largely because of the high error rates American facial recognition AIs exhibit when identifying the faces of people of color. The coders creating these algorithms are predominantly white and male, so the algorithms recognize white, male faces far more accurately than they recognize faces of color. In essence, the coders have coded their lived experience – in a field that is predominantly white and male – into the algorithms. Their unconscious bias is now ingrained in a resource that may be distributed nationwide.
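
To make that accuracy gap concrete, here is a minimal, hypothetical sketch of the kind of demographic audit featured in the documentary: run a model over labeled test images, then tally its misidentifications separately for each group and compare error rates. The group labels and results below are invented purely for illustration, not real benchmark data.

```python
from collections import defaultdict

# Each record: (demographic_group, whether the model identified the face correctly).
# These values are made up for illustration; a real audit would use a labeled
# benchmark of face images and the actual model's predictions.
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("lighter-skinned male", False),
    ("darker-skinned female", True), ("darker-skinned female", False),
    ("darker-skinned female", False), ("darker-skinned female", False),
]

totals = defaultdict(int)   # images evaluated per group
errors = defaultdict(int)   # misidentifications per group

for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# Compare error rates across groups; large gaps are the bias the film describes.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```

Even this toy tally shows the point: the same model can look "accurate overall" while failing one group far more often than another, which is why overall accuracy alone is a misleading metric.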

The documentary also showed that the UK has used facial recognition to assist its police force in routine policing. The UK systems suffer from the same problem: they recognize white faces far more reliably than they recognize faces of color. According to the film, the UK has since stopped using facial recognition software to monitor its citizens.

China, on the other hand, uses CCTV (closed-circuit television, or video surveillance) to monitor what its citizens purchase, from groceries to designer clothing, and what they browse online. This includes requiring citizens to confirm their identity via facial recognition before purchasing food or going online. China's policy is much stricter than the UK's, but interestingly, China has drawn relatively little backlash for it, in part because the government is so transparent about the surveillance.

American companies like Facebook, Google, and Amazon have the technological capacity to implement the same kind of facial recognition, as demonstrated when Facebook announced in 2021 that it would stop using facial recognition to file faces into its company database and proceeded to delete over a billion people's facial templates. The difference between America and China is that in the US, the organizations with the resources to implement mass data collection and facial recognition are private companies, whereas in China it is the government itself. But if private companies in the US have access to advanced facial recognition technology, Coded Bias argues, there is nothing stopping the US government from obtaining and using the same resources.

This raises a few ethical questions:

  1. Is it ethical to use facial recognition for surveillance and crime management when these AIs are inherently biased against faces of color?
  2. Is it ethical for large American tech companies – Facebook, Google, Amazon, etc. – to use facial recognition in their apps?
  3. Consider Apple: is it ethical to use facial recognition as the gateway to unlocking a device and authorizing purchases, given the high degree of inaccuracy on faces of color?
  4. Does Apple have a policy in place if a face is misidentified and a person's iPhone, iPad, or credit card information is stolen?

Coded Bias raises a lot of good ethical questions. If you're interested in learning more, it is available on Netflix, and there is also a version available for free on YouTube.

Thanks for reading! 