Facial recognition (FR) technologies have become an increasingly common form of AI in our day-to-day lives. Simply put, the term refers to technologies that can identify or group individuals from image frames, both still and video[1]. Used by governments, businesses, and consumers, examples of FR applications include CCTV surveillance, airport security screening, and device authentication.
In the 1960s, Woodrow Bledsoe pioneered various computer-based pattern recognition technologies, including one of the earliest FR systems. His “Man-Machine System for Pattern Recognition” relied on manually measured facial features to identify people in photographs[2]. While FR was then limited by a lack of computational power and algorithmic accuracy, the field has since seen enormous improvement.
With increasing academic focus and investment in R&D since the 1990s, FR has become more sophisticated, benefiting from advancements in image processing and feature extraction algorithms in the early 2000s[2].
Ultimately, the culmination of progress in computer vision, machine learning, and biometric authentication in the 2010s has made FR a commonly integrated and widely adopted technology across commercial and social applications.
Today, FR technologies play pivotal roles in society, used across sectors such as security, law enforcement, retail, and consumer electronics. The benefits are especially visible in security and law enforcement, including assistance in missing persons cases, enhanced surveillance and public safety, and crime prevention and deterrence[3].
However, as with all AI, there is a fine line between FR doing net societal good and net harm. Because it has the potential to exacerbate racial profiling, misidentification, and confirmation bias, we must also consider how to steer the future of FR in an assuredly ethical direction.
Of the commonly used biometric identification technologies (fingerprint, palm, iris, voice, and face), FR presents the most problems: it is the least accurate and the most prone to perpetuating bias[4]. That bias can arise from data bias, algorithmic bias, and societal or human biases.
***Data Bias:*** FR systems learn from training datasets; when certain demographic groups are underrepresented in those images, the resulting models are measurably less accurate for them.
***Algorithmic Bias:*** Even with balanced data, choices made in model design, feature extraction, and decision thresholds can systematically favor some groups over others.
***Societal Bias:*** The human prejudices and institutional practices surrounding a system’s deployment, such as who is surveilled and how matches are acted upon, can amplify whatever bias the technology itself carries.
In August 2020, Robert Williams was wrongly arrested on a shoplifting charge in Detroit[8]. Taken from his home in front of his wife and children, he was arrested on the basis of a partial facial recognition match. The match turned out to be a false positive, yet Williams was still held in jail overnight, for nearly 30 hours, despite having an alibi.
When he told police, “I hope y’all don’t think all black people look alike,” their response was simply, “The computer says it’s you”[8], rather than looking more closely at how the FR system had made the match or confirming that the CCTV footage actually matched Williams’ driver’s license photo.
As of September 2023, a total of seven wrongful arrests involving FR technologies had been publicly recorded, all of them black: six men and one woman[9]. While there have not yet been any wrongful convictions involving FR, these victims of wrongful arrest spent days in jail following the incorrect matches.
Tech companies providing FR software (mainly Apple, with regard to misidentification) and police departments using it incorrectly (especially those serving predominantly black cities in Louisiana, Maryland, Michigan, and New Jersey) have all faced lawsuits over misidentification and false arrests caused by FR technologies[9].
Overreliance on FR technologies therefore erodes not only public trust in AI capabilities, but also public trust in legal and law enforcement institutions.
As of March 2024, no concrete or quantifiable regulations exist surrounding algorithmic bias. However, because these technologies can further entrench systemic biases, governments recognize that the topic needs to be addressed[10].
In December 2023, the “Eliminating Bias in Algorithmic Systems Act of 2023” was introduced in the US Senate (following the “Algorithmic Accountability Act of 2022”), and related efforts, such as the blueprint for an “AI Bill of Rights” and discussions on algorithmic discrimination protections, continue to gain momentum[11].
While these are positive steps toward protecting society’s best interests and reducing harm in the long term, the accountability for steering FR innovation in an unbiased direction still falls on the companies producing these technologies and the corporations deploying them.
Given the lack of legal protection, victims of wrongful arrests involving FR have understandably lobbied for a complete ban on these technologies[8]. Nonetheless, FR does have the potential to benefit and enhance society, and even in its current state, adopting simple ‘best practice’ procedures could help reduce the impact of technological and algorithmic inaccuracy.
For example, requiring human checks and oversight to confirm any positive match before an AI’s result is trusted and acted upon would reduce the kind of unfairness experienced by Robert Williams.
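As a minimal sketch of what such a human-in-the-loop gate might look like (the threshold value, the `MatchResult` structure, and the function names are illustrative assumptions, not part of any cited system):

```python
# Illustrative human-in-the-loop gate for FR matches (hypothetical names and threshold).
from dataclasses import dataclass

@dataclass
class MatchResult:
    candidate_id: str
    score: float  # similarity score in [0, 1] produced by the FR model

REVIEW_THRESHOLD = 0.90  # assumed operating point; real deployments tune this per use case

def handle_match(result: MatchResult, review_queue: list) -> str:
    """Never act on an FR match automatically: route every positive match to a human."""
    if result.score < REVIEW_THRESHOLD:
        return "discard"               # too weak to even surface
    review_queue.append(result)        # a trained reviewer must confirm with independent evidence
    return "pending_human_review"      # no arrest or other action until a person signs off

# Usage: even a 0.97 score only enqueues the match for human confirmation.
queue: list = []
print(handle_match(MatchResult("subject_042", 0.97), queue))  # -> "pending_human_review"
```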
In terms of technological improvements, diversifying the data sets used to train these algorithms can help mitigate data biases[6]. Better transparency around, and understanding of, how complex algorithms arrive at their decisions will not only increase trust in the technologies but also reduce algorithmic biases.
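One hedged illustration of a data-side check is auditing a training set’s demographic composition and re-weighting underrepresented groups before training; the group labels and counts below are hypothetical, and inverse-frequency weighting is only one of several possible rebalancing strategies:

```python
# Sketch: audit demographic balance of a face-image training set and derive
# per-group sampling weights so underrepresented groups are not drowned out.
from collections import Counter

def group_weights(group_labels: list[str]) -> dict[str, float]:
    """Return inverse-frequency weights so each group contributes equally on average."""
    counts = Counter(group_labels)
    n_groups = len(counts)
    total = len(group_labels)
    return {g: total / (n_groups * c) for g, c in counts.items()}

# Hypothetical label distribution for illustration only.
labels = ["group_a"] * 700 + ["group_b"] * 200 + ["group_c"] * 100
weights = group_weights(labels)
print(weights)  # group_c images receive the largest weight, offsetting their scarcity
```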
IBM and Microsoft are two examples of tech giants that have announced roadmaps for reducing bias in their FR technologies[4]. These plans align with the expectations above: both companies intend to improve their training data, how that data is collected, and their testing cohorts, particularly with respect to underrepresented demographics and social groups.
However, this will also need to be coupled with improved training and development processes for the algorithms themselves, including prioritizing “Fairness-Aware Algorithms”[12], i.e., machine learning models that are built to respect human principles of fairness and actively mitigate bias.
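As a rough illustration of the kind of check a fairness-aware pipeline builds in, the sketch below compares false match rates across demographic groups and flags a gap above an assumed tolerance; the group names, evaluation data, and tolerance are hypothetical, and a fairness-aware training loop would go further by correcting such gaps rather than only reporting them:

```python
# Sketch: compare false match rates (FMR) across demographic groups.

def false_match_rate(predictions: list[bool], ground_truth: list[bool]) -> float:
    """Fraction of true non-matches the system incorrectly declared to be matches."""
    non_matches = [(p, t) for p, t in zip(predictions, ground_truth) if not t]
    if not non_matches:
        return 0.0
    return sum(p for p, _ in non_matches) / len(non_matches)

def fmr_gap(per_group: dict[str, tuple[list[bool], list[bool]]]) -> float:
    """Largest difference in FMR between any two demographic groups."""
    rates = {g: false_match_rate(p, t) for g, (p, t) in per_group.items()}
    print("per-group FMR:", rates)
    return max(rates.values()) - min(rates.values())

# Hypothetical evaluation data: (system predictions, ground-truth same-person labels).
data = {
    "group_a": ([True, False, False, False], [True, False, False, False]),
    "group_b": ([True, True, False, True],  [True, False, False, False]),
}
TOLERANCE = 0.05  # assumed acceptable gap between groups
if fmr_gap(data) > TOLERANCE:
    print("FMR disparity exceeds tolerance; the model or its training data needs rework")
```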