Many people worry about the use of facial recognition – especially for surveillance.
This is understandable.
When a system misidentifies a person, he or she could be wrongly detained by law enforcement.
And if a system is worse at identifying particular groups than others (perhaps because of insufficient training data), people belonging to these groups will be more likely to fall victim to wrongful identification.
There are fundamental questions of privacy, consent, and function creep:
- How can people be sure their facial data is not being shared?
- What protections are there for people who are of no interest to the police?
- Could ‘potential’ criminals be detained before committing a crime?
These concerns have long been recognized within the industry.
Six guiding principles for facial recognition
The Biometrics Institute was set up in 2001 to promote the responsible and ethical use of biometrics. “Often, legislation can’t keep up,” explains Chief Executive Isabelle Moeller. “The technology is moving so fast that it’s complicated to provide the right framework in time.”
With standards taking a long time to develop, many companies recognize the need to test their systems rigorously.
Meanwhile, bodies such as the Biometrics Institute are working hard to help organizations grapple with the big questions.
In 2019, the Institute updated its privacy guidelines to factor in the growth of artificial intelligence, drones, and more sophisticated facial recognition systems. It continues to build on the work being done by organizations across the board.
In 2018, Microsoft published a blog post on facial recognition.
It argued that regulation is needed to prevent tech companies alone having to choose between “social responsibility and market success.”
It shared six guiding principles:
1. Fairness
Facial recognition technology should treat all people fairly.
2. Transparency
Tech companies should document the capabilities and limitations of the technology.
3. Accountability
There should be an appropriate level of human control for uses that may affect people in meaningful ways.
4. Non-discrimination
Terms of service should prohibit unlawful discrimination.
5. Notice and consent
Companies should provide notice and secure consent when they deploy facial recognition.
6. Lawful surveillance
There should be safeguards for people’s democratic freedoms in law enforcement surveillance scenarios.
In some countries, there is already regulation that tackles some of these issues.
Facial data protection and regulations
The European Union introduced the General Data Protection Regulation (GDPR) on 25 May 2018. GDPR puts measures in place to limit how enterprises can gather, store, and share personal data.
It classifies facial data used to uniquely identify a person as a ‘special category’ of data, alongside data revealing racial or ethnic origin, genetic data, and other sensitive information. As such, it prohibits processing such data unless an exemption applies.
One of these exemptions is consent (specific, informed, and unambiguous).
However, the regulation does not define consent in great detail and has yet to be tested thoroughly.
In public places, where the tech is used for surveillance or targeted advertising, it is hard to see how consent can be explicit.
A venue might display a sign explaining that facial recognition will be used.
- But is consent ‘freely given’ in this scenario?
- Can anyone opt out?
- What if a person enters without seeing the sign?
Industry insiders believe that transparency is essential to the ethical development of facial recognition – and that regulation, such as GDPR, provides it.
ABI Research’s Dimitrios Pavlakis says: “Data protection is vital for face recognition; citizens need to know how their data is being used. Innovation and technological progress should not exclude responsibility.”
Frederic Trojani, chairman of the Secure Identity Alliance, agrees with Pavlakis: “The way biometric data will be used should be explicitly explained to the people. Regulations need to set clear rules on individual privacy and data protection. People need answers to questions such as: will my data be stored? For what reason? For how long? Do I have the right to erase it? GDPR is a good example of how to do this.”
The Secure Identity Alliance published a set of best practices and recommendations around civil liberties and facial recognition in June 2019.
It should also be noted that GDPR is not restricted to EU-based companies.
It covers the processing of personal data of people in the EU – and therefore applies to organizations based on other continents that handle such data.
In the US, the federal government has not introduced regulation on facial recognition. However, many states and cities are considering the issue.
Regulate or ban?
In May 2019, San Francisco’s Board of Supervisors voted eight to one to ban the use of the technology by local agencies, such as law enforcement.
The move was seen as largely symbolic, because the city’s police department does not currently deploy facial identification.
Nevertheless, industry insiders believe regulators should reserve judgment before outlawing the tech outright. Joseph Hoellerer, Senior Manager, Government Relations, at the Security Industry Association, says: “We view any moves to ban facial recognition as premature and problematic.”
“Lawmakers need to look at the issue holistically. They need the full picture before acting. The best way to characterize face recognition in law enforcement is to see it as one of many available tools. It should not be used as the sole basis for apprehending someone. But it should not be rejected either.”
Melissa Doval, CEO of Kairos, agrees that there are important ethical questions to address around facial recognition.
US-based Kairos offers technology that companies can use to apply facial identification to their own databases. Developers can use Kairos’s APIs to match faces against enrolled records or simply detect whether a face is present in an image.
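To make this concrete, here is a minimal Python sketch of how a developer might call a face-matching service of this kind. It is illustrative only: the base URL, endpoint paths, credential headers, payload fields, and the 0.8 confidence threshold are assumptions for the sketch, not Kairos’s documented API.

```python
import base64
import requests

# Illustrative sketch only: the base URL, endpoints, headers, and payload
# fields below are assumptions, not any vendor's documented interface.
API_BASE = "https://api.example-face-vendor.com"   # hypothetical service
HEADERS = {
    "app_id": "YOUR_APP_ID",    # placeholder credentials
    "app_key": "YOUR_APP_KEY",
}


def encode_image(path: str) -> str:
    """Read a local image file and return it as a base64 string for the JSON payload."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")


def detect_face(image_path: str) -> bool:
    """Ask the service whether the image contains at least one face."""
    payload = {"image": encode_image(image_path)}
    resp = requests.post(f"{API_BASE}/detect", json=payload, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return len(resp.json().get("faces", [])) > 0


def verify_face(image_path: str, subject_id: str, gallery: str, threshold: float = 0.8) -> bool:
    """Compare a probe image against an enrolled subject and accept the match
    only if the reported confidence clears the threshold."""
    payload = {
        "image": encode_image(image_path),
        "subject_id": subject_id,
        "gallery_name": gallery,
    }
    resp = requests.post(f"{API_BASE}/verify", json=payload, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json().get("confidence", 0.0) >= threshold


if __name__ == "__main__":
    # Typical matching/authentication flow: detect first, then verify against
    # a record the person has consented to enrol in.
    if detect_face("visitor.jpg"):
        matched = verify_face("visitor.jpg", subject_id="employee-42", gallery="staff")
        print("match" if matched else "no match")
```

Thresholding the match confidence, rather than treating any hit as a positive identification, reflects the point made elsewhere in this article that facial recognition should not be the sole basis for acting against someone.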
Used ethically, facial recognition can improve citizen security by helping the police detect and catch criminals faster.
It might also help prevent crime before it occurs.
But regulations need to be put in place to limit its use to well-identified and legitimate cases.
Completely banning this technology seems hasty, which is why cities such as London have established expert panels to examine the issues.
Doval believes any debate should consider factors beyond the technology alone. “These are human questions,” she says. “Facial recognition systems are not like connected cars, for example, where the vehicle itself might need to make a potentially life-or-death decision. Facial recognition merely feeds the information that a person acts on.”
She is confident that society will eventually settle on an ethical framework for the tech.
But in the meantime, Kairos is selective about its client base.
It turns customers away daily, working only with those that apply facial recognition to matching and authentication rather than surveillance.
More resources on facial recognition research, ethics and applications
Thales Facial Recognition Statement Paper (October 2021)
Thales addresses the main concerns around facial recognition, and highlights our vision for the ethical, socially accountable use of the technology.