Apple’s iPhone X leads the smartphone market. When it came out, it brought many new and exciting features: the Super Retina display is beautiful, the removal of the home button intuitive, and the new camera stunning.
On top of this, it was the first iPhone to feature Face ID. For the first time, iPhone users could unlock their phone and even purchase items using just their face.
Millions of iPhone enthusiasts were desperate to try out this new feature, and some discovered it did not work as expected. Many took to social media soon after finding a common flaw in the technology: it struggled to recognise people of colour, and it also reportedly failed to differentiate between some Chinese users. Within weeks of the iPhone X’s release, Twitter was flooded not only with tweets about the new technology’s flaws but with videos clearly showing Face ID failing to recognise black people. On top of this, Apple received complaints directly about the issue. AsiaOne reported that one woman from Nanjing, China discovered that her colleague, who is not related to her, could unlock her phone. When she and her colleague took the phone to an Apple store, the same issue was replicated on a different iPhone X.
This begged a question for many: could this technology be racist? What was going on? Well, any bias a technology has is created originally by its programmers. Although these flaws were highly publicised, the iPhone X was not the first piece of technology that failed to distinguish between people of different races. Amazon’s facial recognition service Rekognition reportedly matched 28 members of the US Congress with police mugshots, a disproportionate number of them people of colour. One person noticed that Google Photos was confusing photos of their black friends with photos of gorillas. In that instance, Google were criticised for their response: instead of attempting to fix their flawed facial recognition technology, they simply blocked their algorithms from recognising gorillas altogether.
Many people have called for more ethnic diversity in technology firms to ensure that these errors don’t happen. This does make sense – if the customers are people of all different races, those responsible for creating the product should reflect this, particularly when that product has to distinguish between different facial features. MIT researchers Joy Buolamwini and Timnit Gebru found that one of the ways facial recognition technology is biased against people of colour is that they tend to be underrepresented in the data sets used to train it. For example, their study noted that researchers at one major US technology company claimed a 97% accuracy rate for a face-recognition system they had designed, yet this figure was based on a data set that was more than 77% male and more than 83% white. Put simply, facial recognition is often less accurate on darker skin tones because the system has had far less training on discerning darker faces than on distinguishing between lighter ones. Brian Brackeen, CEO of the AI startup Kairos, recently revealed that he rejected a contract between Kairos and the body camera manufacturer Axon, asserting that face recognition enhances the capabilities of the police, which in turn exacerbates the bias of policing.
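The arithmetic behind that 97% claim is worth pausing on. A minimal sketch, with entirely invented numbers chosen only to mirror the skew described above, shows how a high overall accuracy can hide a much lower accuracy for an underrepresented group:

```python
# Hypothetical illustration (all figures invented): when ~83% of a test set
# comes from one group, the headline accuracy is dominated by that group.

def accuracy(results):
    """Fraction of correct predictions in a list of (group, correct) pairs."""
    return sum(correct for _, correct in results) / len(results)

# Imagine a face-matching benchmark that is 83% group A (lighter-skinned)
# and 17% group B (darker-skinned).
results = (
    [("A", True)] * 80 + [("A", False)] * 3   # group A: 80 of 83 correct
    + [("B", True)] * 10 + [("B", False)] * 7  # group B: 10 of 17 correct
)

overall = accuracy(results)
group_a = accuracy([r for r in results if r[0] == "A"])
group_b = accuracy([r for r in results if r[0] == "B"])

print(f"overall: {overall:.0%}")  # prints "overall: 90%"
print(f"group A: {group_a:.0%}")  # prints "group A: 96%"
print(f"group B: {group_b:.0%}")  # prints "group B: 59%"
```

A system like this could honestly report 90% overall accuracy while failing roughly four in ten faces from the minority group – which is why Buolamwini and Gebru argue that accuracy must be reported per demographic group, not just in aggregate.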
Facial recognition technology carries forward any bias its programmers have unwittingly built into it. And, after all, they are only human.