The first facial recognition system (FRS) was developed in the 1960s by the Mormon bishop Woody Bledsoe, co-founder of Panoramic Research in Palo Alto, California. He wrote an algorithm that could take manually entered facial features of a person and search for matches in a given database.
In the ’80s and ’90s, thanks to technological improvements, the process of identifying a face in a picture and extracting its main characteristics became fully automatic. Since 2010, machine learning methods based on deep neural networks and fast graphics processing units have been used to boost the performance of the technology. Today, in ideal conditions such as a well-lit shop equipped with high-resolution cameras, facial recognition systems can identify a person with 99.8% accuracy.
That is why FRSs have become so popular and widespread: they are used in airports for passport control, to unlock smartphones, to pay in Amazon Go shops, to find lost people or kidnapped children, to track terrorists and thieves, to record people’s interests in shopping centres, and so on.
However, unlike other biometric data that uniquely identify people, such as fingerprints or DNA, there are no specific rules or laws, in Europe or in the USA, on the use and retention of facial images. This results in a chaotic situation, in which the police or a private company like a shopping centre can decide on its own when and where to use facial recognition technologies, and what to do with the images collected. They are not even obliged to inform people or obtain their consent.
In Sweden, a school was fined after it was found to be using facial recognition to keep attendance. In Britain, FRSs were used privately at King’s Cross between 2016 and 2018, and have been secretly trialled by police forces since 2015 at concerts, stadiums, protests and shopping centres.
Civil liberty groups have raised concerns about the potential for abuse if such a powerful technology is left unregulated. Indeed, in some authoritarian regimes facial recognition is already used to track people based on their religion, sex or race.
The technology is also very easy to obtain. Companies like Google, Amazon, IBM and Microsoft already sell their own versions of the product. The New York Times showed how it is possible, using publicly accessible cameras in parks and buying the facial recognition technology developed by Amazon for only £100, to easily recognise and track people around a city, simply by comparing their images with publicly accessible pictures on university websites.
Researchers have also found that facial recognition algorithms are still biased against women and people of colour, essentially because the neural networks are trained mainly on pictures of white men. This can result in misidentification, with members of minority groups more likely to be wrongly accused of crimes.
That is why some U.S. cities, like San Francisco, have decided to ban the technology, at least until it has been properly developed and regulated. San Francisco’s city supervisor also underlined that, even with a perfectly functional FRS, it is psychologically unhealthy for people to be constantly tracked and watched in real time.
Facial recognition is not the only biometric that is in development and unregulated. Skin texture can be analysed to better recognise partly covered faces and to distinguish twins, which cannot be done with simple facial recognition. Gait analysis, which categorises people based on their stride, is also being used, for example at Westquay in Southampton, as recently reported by Wessex Scene.
So far, no laws have been proposed to regulate the acquisition and storage of these emerging biometric data. Only the European Union, as the Financial Times reported at the end of August, has started working on a regulation that would give EU citizens explicit rights over their facial recognition data.