By Edward Santow, University of Technology Sydney
Many smartphones unlock each time we glance at the screen, and some airports automate passport checks with face scans: it’s tempting to think we’re on the cusp of unlocking the full potential of facial recognition technology. Visions of cutting-edge law enforcement databases helping police pick out a criminal in a packed football stadium spring to mind, like a science-fiction crime film come to life.
But despite some innovations, facial recognition technology is far from ready to tackle these society-shaping challenges. That hasn’t stopped states from rushing towards surveillance. To understand what is holding the technology back, it is vital to understand how it works in the first place.
How does facial recognition work?
Facial recognition is a type of image identification technology. These technologies rely on many of the processes and techniques associated with artificial intelligence (AI). In particular, applications tend to use machine learning to classify subjects at speed and scale.
Any image recognition application starts with a ‘training’ process: a powerful computer is fed a large dataset of labelled pictures and learns to recognise the characteristics associated with a subject (for instance, a car) and discern it from non-subjects (e.g. a human or a dog). With a big enough set of diverse, labelled images, a computer will be able to tell the difference between different subgroups within the subject class (e.g. a Toyota Corolla from a Ford Focus).
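The train-on-labelled-examples idea can be sketched in a few lines of code. This is a deliberately simplified illustration, not how real facial recognition systems work: the two-number ‘features’ and the nearest-average classifier below are made up for the example, whereas production systems learn rich features with deep neural networks. The underlying principle, though, is the same: labelled examples in, a classifier out.

```python
# Toy illustration of 'training': learn the average feature vector for
# each label from labelled examples, then classify a new example by
# whichever label's average it sits closest to.

def train(examples):
    """examples: list of (feature_vector, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    # Average the features seen for each label.
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(model, features):
    """Return the label whose average features are nearest (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical two-number 'features' (say, size and roundness):
training_data = [
    ([9.0, 2.0], "car"), ([8.5, 2.5], "car"),
    ([1.0, 8.0], "dog"), ([1.5, 7.5], "dog"),
]
model = train(training_data)
print(classify(model, [8.8, 2.2]))  # prints "car"
```

With more, and more diverse, labelled examples, the same scheme could separate finer subgroups, which is the point the Corolla-versus-Focus example makes.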
The same idea applies to facial recognition: by feeding the computer enough photographs of people, it will learn how to tell them apart, effectively recognising and identifying humans at scale.
The effectiveness of any facial recognition application depends on the quality of this training process. The starting point is the dataset — that is, the stock of labelled pictures. The more pictures in the training dataset, the more accurate the facial recognition application tends to be.
But too often the training datasets contain a disproportionate number of images of white men. This means applications are often poorer at identifying anyone who is not a white man, such as women or people with dark skin. That inaccuracy intensifies where two such factors are present — for example, women with dark skin.
The labelling process is also crucial to the success rate of any facial recognition application. Each picture that the AI-powered computer learns from will be accompanied by a label, which states the subject’s name and other information about them. Any errors — and indeed any subjectivity — in this labelling process are then taken on by the computer, and will affect the resulting accuracy or objectivity.
The most common forms of facial recognition currently in use are one-to-one facial verification and one-to-many facial identification. Facial verification involves a computer checking whether a single headshot photograph matches a different headshot picture of the same person. It is particularly useful as a way of verifying whether an individual is who they claim to be, acting like a key or a password.
This technology is widely used to unlock smartphones, tablets and other such devices. It is also used by some countries to verify someone’s identity at border control. Provided the core human rights protections are followed, one-to-one facial verification has a relatively low risk profile in its current usage.
One-to-many facial identification is vastly different. Like facial verification, facial identification also matches a single headshot with a stored image of the subject. However, the matching headshot with facial identification will be stored somewhere in a larger database that has headshots of many others as well.
This makes the task of one-to-many facial identification much harder, and can be like finding a needle in a haystack. But it’s also more useful. Facial identification doesn’t just determine whether an individual is who they claim to be. It can answer a harder question: who is this person?
A unique problem that needs unique safeguards
The potential application of one-to-many facial identification is limited only by one’s imagination. We have already seen dangerous displays of how facial identification can unleash havoc.
In China, facial identification has been central in creating ‘social credit’ schemes that automatically detect and penalise citizens for petty offences such as jaywalking. More worryingly, it has been linked to systems of control and repression of certain ethnic groups, such as Uighur people in China’s Xinjiang region.
Even in liberal democracies, companies have used facial identification to make life-altering decisions, from recruiting someone for a job to approving home loan applications. The greatest risk comes when the state collaborates with technology companies for high-stakes surveillance, such as monitoring and identifying criminal suspects.
In policing, the accuracy of facial identification must be impeccable: a mistake leading to an innocent person being arrested would be a catastrophe. The current technology isn’t ready for that challenge: a 2018 trial by the London Metropolitan Police used facial recognition to identify 104 previously unknown people who were suspected of committing crimes. Of those 104 identifications, 102 were wrong, amounting to a false positive rate of approximately 98%.
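The arithmetic behind that trial figure is straightforward:

```python
# Reported figures from the 2018 London Metropolitan Police trial:
identifications = 104  # people flagged as suspects by the system
wrong = 102            # of those, later found to be misidentified

error_rate = wrong / identifications
print(f"{error_rate:.1%}")  # prints 98.1%
```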
The promise of facial recognition technology is enormous, and its potential to bring economic and other benefits is vast. But we must grapple with the risks it brings as well. As AI moves from the laboratory into the real world, the context in which facial recognition technology is used becomes ever more important.
This year, the Australian Human Rights Commission identified gaps in existing law that could allow the technology to be used in ways that result in an unjustified intrusion on human rights. The Commission called for a moratorium on high-risk use of this technology, at least until a stronger legal framework is introduced to protect against harm associated with misuse and overuse.
As governments and companies around the world press forward with the development and use of facial recognition, the need for clearer legal protections is growing more urgent. There is a dire need for open public debate about how this technology is and can be used, and the contextual risks associated with different types of facial recognition. Governments should listen to the community in setting clear boundaries regarding the permissible use of facial recognition.
Originally published under Creative Commons by 360info™.
Edward Santow is Industry Professor – Responsible Technology at the University of Technology Sydney (UTS). He was previously Australia’s Human Rights Commissioner (2016-2021). Mr Santow declared no conflict of interest in relation to this article.