Allowing facial recognition technology to spread without understanding its impact could have serious consequences.
In the last few years, facial recognition has gradually been introduced across a range of technologies.
Some of these are relatively modest and useful: thanks to facial recognition software you can unlock your smartphone just by looking at it, and log into your PC without a password. You can even use your face to withdraw cash from an ATM, and it is increasingly becoming a standard part of the journey through the airport.
And facial recognition is still getting smarter. Increasingly it's not just faces that can be recognised, but emotional states too, albeit with limited success so far. Soon it won't be too hard for a camera not only to recognise who you are, but also to make a pretty good guess at how you are feeling.
But one of the biggest potential applications of facial recognition on the near horizon is, of course, for law and order. It is already being used by private companies to deter persistent shoplifters and pickpockets. In the UK and other countries police have been testing facial recognition in a number of situations, with varying results.
There’s a bigger issue here, as the UK’s Information Commissioner Elizabeth Denham notes: “How far should we, as a society, consent to police forces reducing our privacy in order to keep us safe?”
She warns that when it comes to live facial recognition "never before have we seen technologies with the potential for such widespread invasiveness," and has called for police, government and tech companies to work together to eliminate bias in the algorithms used, particularly those associated with (…)