Face ID has become a popular way to unlock smartphones, and it likely won’t be long until it bleeds over into other applications. Unfortunately, that means we’ll also probably see more facial spoofing, which could mean attackers don’t need complex software to break into your account.
In how many places is there a picture of your face online? You probably have at least one social media account, and then there’s your employer’s website or your church’s directory — and likely several places you don’t know about. As a result, attackers can simply print off a picture of your face in an attempt to fool facial recognition software.
Why Is Facial Spoofing Bad?
Unfortunately, humans aren’t always great at identifying spoofed faces. And even when they are, they’re fairly slow. A recent study from ID R&D showed that even when humans can identify a facial spoof, computers can do so up to 10 times faster.
According to Alexey Khitrov, CEO of ID R&D, “What’s interesting is that the report showed that some of the most difficult attacks for the human eye are also some of the cheapest and easiest to do. The most difficult attack for the human eye is the simple photo [in front of the attacker’s face].”
When attacks don’t have to be high tech to be effective, the barrier to entry for attackers drops. And unless your facial recognition program has some way to detect liveness, pixelation, or lens glare, you’re leaving your customers open to attack.
Tell-Tale Signs of a Facial Spoof
When humans look for a facial spoof, they may look for a hand just barely in the frame holding up a photo, or the pixelated lines that sometimes occur when you take a picture of a screen using your smartphone.
While both are good indicators, they’re fairly easy to work around. The ID R&D report showed that the human error rate for identifying a spoof in which someone held up a printed photo was above 30%.
Anti-spoofing face recognition software with artificial intelligence (AI) can look for deeper indicators of fraud that might not be visible to the human eye. Depth of field and distortions within the image itself may not be noticeable for humans, but they’re clear signs to AI that a photo isn’t real.
Additionally, liveness detection, like eye blink tests, can block basic photo attacks if the identification system records a short video, rather than a single image.
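To illustrate the idea, here is a minimal sketch of a blink-based liveness check. It uses the eye aspect ratio (EAR) — the ratio of an eye’s vertical opening to its width, computed from six landmark points around the eye — which dips sharply when the eye closes. The landmark coordinates, threshold values, and function names below are illustrative assumptions, not any vendor’s actual implementation; a real system would obtain the landmarks from a face-landmark detector on each video frame.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) points ordered around the eye contour.

    Returns the ratio of the two vertical openings to the horizontal
    width; the value drops sharply when the eye closes.
    """
    p1, p2, p3, p4, p5, p6 = eye
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def contains_blink(ear_series, closed_thresh=0.2, min_closed_frames=2):
    """Flag a blink: EAR stays below the threshold for a few consecutive
    frames, then recovers. Threshold values here are illustrative."""
    closed = 0
    for ear in ear_series:
        if ear < closed_thresh:
            closed += 1
        elif closed >= min_closed_frames:
            return True  # eye closed for long enough, then reopened
        else:
            closed = 0
    return False

# A printed photo held up to the camera yields a flat EAR series;
# a live face produces a dip when the subject blinks.
photo_series = [0.31, 0.30, 0.32, 0.31, 0.30, 0.31]
live_series  = [0.31, 0.30, 0.12, 0.10, 0.29, 0.31]
print(contains_blink(photo_series))  # False — a static photo never "blinks"
print(contains_blink(live_series))   # True
```

The point of the sketch is simply that a single still image can never pass a check like this, which is why recording a short video rather than one frame defeats the cheapest photo attacks.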
Protections Against Facial Spoofing
Khitrov says the technology to protect against facial spoofing is available and luckily, doesn’t need human intervention to be effective. “The key is to make sure that the technology that you’re deploying to do the onboarding and authentication can answer two questions: Is this the right person? And then is this a person?” AI can not only do this more accurately than a human, but also much faster — making the process scalable.
“Although it’s bad news for humans’ capability to identify spoofs, it’s actually good news for the humans as consumers of all this technology, because we know that our biometric access can be protected in a very user-friendly way, very quickly, and without any additional steps in the user experience,” says Khitrov.
Businesses using a facial recognition system need to understand what the system is testing for and make sure it has some way to tell a human face from a spoof.
Facial Spoofing Could Become the Next Phishing
Like phishing, facial spoofing doesn’t necessarily require any complex technology, making it easily accessible to attackers. Because of that simplicity, we could see facial spoofing deployed on the same scale that we currently see with phishing. Unless we build protections into our facial recognition systems, these attacks are going to cause a lot of problems for businesses and consumers alike.
Businesses that currently use facial recognition software should test their system to see how well it handles simple spoofing attacks. And for those not yet using this software, consider how it could improve the user experience, while still keeping access secure.