How to get around facial recognition

From colourful make-up to invisibility cloaks, attempts to fool facial recognition software are widespread, but can these really help to keep our biometric data private?

A decade ago, it was possible to attend a protest in relative anonymity. Unless a person was on a police database or famous enough to be identified in a photograph doing something dramatic, there would be little to link them to the event. That’s no longer the case.

Thanks to a proliferation of street cameras and rapid advances in facial recognition technology, private companies and the police have amassed face data or faceprints of millions of people worldwide. According to Big Brother Watch, a UK-based civil liberties campaign group, this facial biometric data is as sensitive as a fingerprint and has been largely harvested without public consent or knowledge.

Face off: The quest to mess with facial recognition technology

In response, designers and privacy activists have sought to make clothing and accessories that can thwart facial recognition technology. According to Garfield Benjamin, a post-doctoral researcher at Solent University who specialises in online privacy, they rely on two main techniques.

“Either they disrupt the shape of the face so that recognition software can’t recognise a face is there or can’t identify the specific face,” he says. “Or they confuse the algorithm with different patterns that make it seem like there are either hundreds or no faces present.”
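
As a rough illustration of the first approach (not code the designers themselves publish), the sketch below runs an off-the-shelf face detector over a photo and reports whether it still finds a face. It uses OpenCV’s bundled Haar-cascade model, and the image filename is a placeholder; the commercial systems these accessories are up against use far more sophisticated detectors.

```python
# Minimal sketch: does an off-the-shelf face detector still find a face in a
# photo of someone wearing a disruptive pattern or accessory?
# Uses OpenCV's bundled Haar-cascade model; the image path is a placeholder.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("wearer_with_pattern.jpg")  # placeholder filename
if img is None:
    raise SystemExit("Replace the placeholder path with a real image file.")

grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Each detection is a bounding box (x, y, w, h) where the cascade believes a face is.
faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)

if len(faces) == 0:
    print("No face found - the pattern has 'disrupted the shape of the face'.")
else:
    print(f"Detector still found {len(faces)} face(s): {faces.tolist()}")
```

A decades-old Haar cascade stands in here for the proprietary detectors described in the article, which are considerably harder to fool.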

At the University of Maryland, Tom Goldstein, associate professor in the Department of Computer Science, is working on the second technique. He’s created a so-called invisibility cloak, though in reality it looks more like an incredibly garish hoodie. The cloak, a research tool which is also sold online, works by fooling facial recognition software into thinking there isn’t a face above it.

In 2015, when Scott Urban, founder of Chicago-based privacy eyewear brand Reflectacles, saw facial recognition becoming “more popular and intrusive”, he set out to make glasses that would “allow the wearer to opt out of these systems”.

He created one model designed to block the 3D infrared facial scanning used by many security cameras by turning the lenses black, while another model reflects light to make it harder to capture a user’s face data from a phone picture.

Other anti-surveillance designs include a wearable face projector, which superimposes another face over that of the person wearing the device; a transparent mask with a series of curves that attempts to block facial recognition software while still showing the wearer’s facial expressions; balaclavas with a magnified pixel design; and scarves covered in a mash-up of faces.

The IRpair glasses by Reflectacles are designed to block 3D infrared facial scanning

The anti-spoofers strike back

Benjamin says the problem with all these techniques is that the companies making the facial recognition technology are always trying to improve their systems and overcome the tricks, often boasting in their promotional literature about the anti-spoofing mechanisms they are working on. “They want to show they’re thwarting the ‘rebels’ or ‘hackers’ and this has led to further developments in the technologies,” he says.

This was the case with CV Dazzle, which uses face paint to trick or dazzle computer vision by disrupting the expected contours, symmetry and dimensions of a face. The technique was invented by the American artist and activist Adam Harvey in the early 2010s and proved effective at confusing the software emerging at the time, though its creator has noted it doesn’t always fool present-day tech.

Yet it does still disrupt facial tagging on some social media platforms, according to Georgina Rowlands of The Dazzle Club, a UK-based privacy activist group inspired by Harvey. “We know the technique is still effective versus Facebook, Snapchat and Instagram’s algorithms,” says Rowlands, whose group leads monthly walks around London, adorned in their rather striking Bowie-esque face paint, to explore privacy and public space in the 21st century. “But we haven’t been able to access more advanced systems such as the Metropolitan Police’s, so we can’t say if it’s effective there.”

Awareness around facial recognition issues

But evading the tech is only part of the story for The Dazzle Club. It’s as much about raising awareness of the pervasiveness of facial recognition software. As another member of the group, Emily Roderick, says: “It’s about making that invisible technology visible and bringing out those discussions, especially as the Met Police are starting to deploy these cameras in the city.”

The real goal for many of these creators is regulation of facial recognition technology companies and those who use the faceprints, to protect the privacy rights of the individual. So whether someone is at a protest or simply walking down the street, they can trust that their face, and all the data contained within it, remains their own and theirs alone.

Black Lives Matter shaping the future of facial recognition

In the wake of the Black Lives Matter protests, IBM, Microsoft and Amazon announced they would no longer allow US police departments to access their facial recognition technology for at least a year.

The tech is arguably a tool of racial oppression. In 2018, Joy Buolamwini, a researcher at the MIT Media Lab, and Timnit Gebru, then a researcher at Microsoft Research, showed that some facial analysis algorithms misclassified the gender of Black women almost 35 per cent of the time, while nearly always getting it right for white men.

A further study by Joy Buolamwini and Deborah Raji demonstrated that Amazon’s Rekognition tool had major issues identifying the gender of darker-skinned individuals, but made almost no errors with lighter-skinned people.

Raji, who is a tech fellow at the AI Now Institute at New York University and an expert in computer vision bias, explains there are many ways in which facial recognition technology can be biased. “It could involve having a higher error rate for a minority group,” she says. “Or it could label members of a particular group with a problematic label, so for example predicting people of colour are angrier than white or other people.”
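
The first kind of bias Raji describes is usually measured by comparing error rates across demographic groups, much as the audits mentioned above did. Below is a minimal sketch of that comparison, using made-up toy records rather than data from any real study.

```python
# Minimal sketch of a per-group error-rate comparison.
# The records are invented for illustration - not data from any real audit.
from collections import defaultdict

# Each record: (demographic group, was the system's prediction correct?)
records = [
    ("darker-skinned women", False), ("darker-skinned women", False),
    ("darker-skinned women", True),  ("darker-skinned women", True),
    ("lighter-skinned men", True),   ("lighter-skinned men", True),
    ("lighter-skinned men", True),   ("lighter-skinned men", False),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in records:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# A large gap between groups is the "higher error rate for a minority group"
# that such audits flag.
for group, n in totals.items():
    print(f"{group}: error rate {errors[group] / n:.0%} ({errors[group]}/{n})")
```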

Algorithmic flaws, which can be caused by a poor and narrow dataset or be inherent in the algorithm’s design, can have major repercussions for an individual, Raji says. “Once you’re in the system, it’s very easy for the system to identify you in a variety of poses and angles, but the threat of being misidentified is quite large and, should that happen, you’re going to face real-world consequences.”

This was the case for Robert Julian-Borchak Williams, who was wrongly arrested in front of his children and detained for 30 hours due to a faulty facial recognition match. Even without such high-profile mistakes, several studies have found no compelling evidence that facial recognition technology is actually effective in policing.

The backlash against facial recognition software chimes with a public wariness about how far police institutions can be trusted, according to Raji. “Because of that, we’re thinking, should we be giving them this power to monitor and target people? Will they act responsibly with these tools?”

Raji says the decisions on how to use the tech must be discussed and regulated, especially since it was found to have been used by the Hong Kong government to track and identify protestors. “Even if they did build it to find missing children, they now have that power and could easily re-orientate it. There are no safeguards in place to assure a certain amount of community input, or elective or democratic decision-making, before they use the tech for each different purpose,” she says.