By Arden Benner
Using data compiled from 194 countries, the Facial Recognition World Map was created by Surfshark to visualize the ongoing spread of facial recognition technology. Perhaps most familiar as a way to unlock cellphones, facial recognition has also reshaped the security sector. Facial matching served as an accepted form of identity verification during the Australian bushfires, ATMs in Brazil scan customers' faces instead of requiring a PIN, and photos uploaded to FindMeBahamas helped reconnect displaced persons in the wake of Hurricane Dorian.
This isn’t to say that facial recognition has been universally welcomed with open arms. Belgium became the first country to make the technology illegal, in 2019, and the ethical debate rages on. In the United States, regulation is left to state and local governments; San Francisco became the first city in the country to bar local agencies (including transportation and law enforcement) from using facial recognition. China leads the world in both consumption and distribution of the technology. Chinese firms have already deployed it to sixteen countries outside Southeast Asia (including Venezuela, Germany, and Ecuador), and use within China is expected to account for 45% of the global facial recognition market by 2023.
Those who oppose the ongoing expansion generally fall into one of two camps: people who see it as an invasion of privacy and civil liberties, and people who believe the technology has not yet advanced enough to do what we are asking of it. On June 8th, IBM CEO Arvind Krishna announced the company’s withdrawal from the facial recognition market and called for a national dialogue on whether the technology should be used at all. In a statement of support for the Justice in Policing Act, he said: “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values…” IBM’s announcement comes in the wake of reports that law enforcement used facial recognition technology to identify participants in the Black Lives Matter/George Floyd solidarity protests across the United States, matching them against databases of mugshots, DMV photos, and even images from social media networks.
What should be made clear is that these systems are not infallible and can be as susceptible to bias as their human creators. The efficacy of a facial recognition system is heavily dependent on its algorithm and the data fed into it. A study by the National Institute of Standards and Technology that evaluated 189 software algorithms from 99 different developers found varying levels of accuracy and error between developers. Some of the broader findings included:
· Higher rates of false positives (incorrectly declaring two different people a match) for Asian and African American faces relative to Caucasian faces
· Similarly high false positive rates in U.S.-developed systems for Asian, African American (especially African American women), and Native American faces
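To make the metric behind these findings concrete, here is a minimal sketch (with entirely hypothetical data, not NIST's) of how a per-group false positive rate is computed: among pairs of images that truly show different people, how often does the system declare a match?

```python
# Hypothetical illustration of measuring false positive rates per demographic group.
# A "false positive" means the system declared two different people a match.

def false_positive_rate(results):
    """results: list of (same_person, predicted_match) boolean pairs."""
    # Only pairs of genuinely different people can produce false positives.
    impostor_pairs = [r for r in results if not r[0]]
    if not impostor_pairs:
        return 0.0
    false_positives = sum(1 for _, predicted in impostor_pairs if predicted)
    return false_positives / len(impostor_pairs)

# Invented evaluation results, grouped by demographic:
results_by_group = {
    "group_a": [(False, True), (False, False), (True, True), (False, False)],
    "group_b": [(False, True), (False, True), (True, True), (False, False)],
}

for group, results in results_by_group.items():
    print(group, round(false_positive_rate(results), 3))
```

A gap between groups on this number, at the same match threshold, is precisely the kind of disparity the NIST evaluation reported.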
Biases in the technology amplify the impacts of policing on those communities who are already disproportionately targeted by law enforcement.
As the technology behind facial recognition continues to advance, the guidelines governing it will become clearer. For now, countries around the world are figuring it out on their own, some embracing the technology and others banning it outright.