Keeping up with the fraudsters: How to combat generative AI and document-based biometric attacks

May 17, 2023
Digital transformation has significantly expanded how individuals interact with banks and financial service firms. Technology has made onboarding from a consumer’s couch at home just as viable as their local in-person branch.
On one hand, the development of remote verification and authentication technologies has opened the door to online banking and other digital experiences that are simpler and more convenient for individuals. On the other hand, it has also increased exposure to fraud.
Additionally, sophisticated technology with the power to create synthetic online identities has spawned new challenges. Deepfakes, synthetic identity documents (IDs), and digital injection attacks have given bad actors the tools to wreak havoc at scale.
To keep up with fraudsters, financial service providers must strengthen their onboarding and authentication workflows with effective, accurate verification technology.
New unregulated technologies introduce synthetic identity compromises
While traditional identity theft is on the rise, accounting for $52 billion in losses and affecting 42 million adults in the US alone in 2021, banks also face a new and more complicated threat: synthetic identity fraud (SIF).
Whereas traditional identity fraud typically relies on stolen information, synthetic identity fraud involves creating a “person” who doesn’t exist – an entirely new identity assembled from a mix of stolen, fictitious, and manipulated personally identifiable information (PII).
How synthetic identities are created:
- Fraudsters create a detailed fake identity document. The underlying information can mix real, stolen, and fabricated data – for example, a stolen Social Security Number combined with an entirely falsified name or a slightly modified address.
- They then create synthetic imagery that matches the photo on the illegitimate identity document. They’ll use this combination to try to bypass an organization’s ID verification process.
Fraudsters usually build up some form of credit or banking history as part of the process, often maxing out their credit lines along the way. This makes it nearly impossible for banks or financial institutions to tell whether an individual is simply facing financial hardship (e.g., job loss) or is a bad actor committing fraud – until it’s too late.
This emerging variation of fraud is specifically designed to circumvent identity document and biometric verification technology – the current standard for digital identity verification – and is nearly impossible to catch via data checks alone (e.g., verifying an identity with a credit bureau). Combating this type of fraud requires a “one-two punch”: identity document verification and biometric technologies working together to stop these sophisticated threats.
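To make the limitation of data checks concrete, here is a toy sketch (all records, names, and checks are hypothetical, not any real bureau’s API) of how a synthetic identity built on a real, stolen SSN can slip past a naive lookup, while a cross-field consistency check catches the mismatch:

```python
# Hypothetical illustration: why field-by-field data checks alone can miss
# a synthetic identity. All data and checks here are invented for the example.

# A "bureau" of previously seen records, keyed by SSN.
BUREAU = {
    "123-45-6789": {"name": "Alice Real", "address": "12 Oak St"},
}

def naive_field_check(applicant: dict) -> bool:
    """Passes if the SSN exists in the bureau at all (a weak check)."""
    return applicant["ssn"] in BUREAU

def cross_field_check(applicant: dict) -> bool:
    """Also requires the name on file to match the SSN (a stronger check)."""
    record = BUREAU.get(applicant["ssn"])
    return record is not None and record["name"] == applicant["name"]

# A synthetic identity: a real, stolen SSN paired with a fabricated name.
synthetic = {"ssn": "123-45-6789", "name": "Bob Fake", "address": "99 Elm Ave"}

print(naive_field_check(synthetic))   # True  - the stolen SSN "verifies"
print(cross_field_check(synthetic))   # False - the mismatch is exposed
```

Even the stronger check only works when the attacker reuses a full real record; a fully fabricated identity with no record at all is exactly why document and biometric verification must back up the data layer.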
What is synthetic imagery? Why are synthetic threats, like generative AI and deepfakes, a growing concern?
As mentioned, one common way to make synthetic identities look real is to use synthetic imagery. Criminals can use technology to create realistic photos or videos that replace one person’s likeness with another, or even “create” people who don’t exist. Generative AI and deepfakes are hugely powerful tools for boosting the success of synthetic identity fraud.
A deepfake is a video recording that has been distorted, manipulated, or synthetically created using deep learning techniques to present an AI-generated representation of someone – like a digital mask. Some of the most sophisticated variations of deepfakes are nearly indistinguishable from a real face, including natural eye and mouth movement. The use of deepfake technology in synthetic identity fraud spans from presentation attacks to digital injection attacks – both of which attempt to circumvent facial verification.
Banks across the globe have rapidly adopted face verification as it emerged as the most secure way to protect an online identity – ahead of commoditized, weaker, and less convenient methods such as passwords and OTPs. Face verification has become intertwined with the digital banking experience: according to an iProov survey, 64% of global consumers who use mobile banking either already use face verification to access their accounts or would do so if they could.
That’s why it’s essential that any digital onboarding solution can robustly bind digital identities with real-world individuals. The Microblink-iProov solution confirms that a genuine human is verifying against their trusted identity document in real-time and that the document has not been tampered with. This thwarts synthetic identities during onboarding, before they enter the system.
Presentation attacks vs. digital injection attacks
There are a variety of presentation attacks that criminals can deploy to try and gain unlawful access to a user’s account or privileges. Alongside physically attempting to impersonate a genuine user, presentation attacks can also involve an artifact being held up to a user-facing camera. A bad actor could also create a deepfake and then show that video, via another screen, to the device completing facial verification.
Digital injection attacks leverage the same level of deepfake technology but involve the fraudster either rerouting the feed of verification video to a software-based camera, injecting a deepfake into the data stream of the application, or even leveraging an emulator to mimic a user device.
iProov’s recent threat intelligence report revealed that injection attacks were five times as frequent as persistent presentation attacks on the web throughout 2022. What’s more, liveness detection (i.e., techniques to determine whether the source of a biometric sample is a live human being or a fake representation) is relatively reliable at detecting traditional presentation attacks, making digital injection attacks the focus for the most adept fraudsters.
Deepfakes become even more dangerous when they are employed in digital injection attacks, as they can be scaled and automated very quickly to cause significant damage.
How to combat synthetic imagery and digital injection attacks
While most biometric technology involves some level of liveness detection to verify an individual’s identity, liveness detection alone cannot detect a digital injection attack. To combat the combination of deepfakes and digital injection attacks, financial service institutions need a robust, multifaceted approach – one that leverages the creation of a one-time biometric.
Microblink and iProov’s digital onboarding solution utilizes one-time biometric technology to ensure that anyone attempting to verify their identity is doing so in real-time and not using synthetic imagery.
How? By illuminating the individual’s face with a unique sequence of colors that cannot be replayed or manipulated synthetically. This assures a user is authenticating right now – it’s not a presentation attack using a photo or mask, and it’s also not a digital injection attack using a replay of a previous authentication or synthetic video such as a deepfake.
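Conceptually, this freshness guarantee works like a cryptographic challenge-response. The sketch below is not iProov’s actual algorithm – the color sequence, HMAC binding, and key are all illustrative assumptions – but it models the core idea: each session gets a fresh, unpredictable challenge, so a replayed recording of an earlier session cannot satisfy it.

```python
# Conceptual model of a one-time challenge-response, loosely inspired by the
# flashing-color idea described above. NOT a real product implementation.
import hmac
import hashlib
import secrets

COLORS = ["red", "green", "blue", "yellow"]

def issue_challenge(n: int = 8) -> list[str]:
    """Server picks a fresh, unpredictable color sequence per session."""
    return [secrets.choice(COLORS) for _ in range(n)]

def device_response(challenge: list[str], key: bytes) -> str:
    """A genuine live session binds the observed colors to the session
    (modeled here as an HMAC over the challenge)."""
    return hmac.new(key, "".join(challenge).encode(), hashlib.sha256).hexdigest()

def verify(challenge: list[str], response: str, key: bytes) -> bool:
    """Server recomputes the expected response and compares in constant time."""
    return hmac.compare_digest(response, device_response(challenge, key))

key = b"illustrative-session-key"
c1 = issue_challenge()
r1 = device_response(c1, key)   # live session responds to today's colors
print(verify(c1, r1, key))      # True

c2 = issue_challenge()          # a later session gets different colors
while c2 == c1:                 # (force a distinct challenge for the demo)
    c2 = issue_challenge()
print(verify(c2, r1, key))      # False - a replayed old response fails
```

The design point is that the challenge is consumed by a single session: capturing and replaying a previous verification, however perfect the video, answers the wrong challenge.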
Synthetic IDs add more complexity and risk
Before the explosion of online banking, traditional identity fraud was limited in scale with one person presenting one stolen identity at a time. The process was slow and the vigilance of internal employees was key to combating fraud and mitigating risks.
With synthetic IDs and the rise of deepfakes, fraudsters can scale the scope of their attempts, and do so at a faster pace than ever before.
Synthetic IDs are especially dangerous as they enable the production of countless “people” that a fraudster can impersonate. In one example from iProov’s report, some 200-300 attacks were launched globally from the same location within a 24-hour period in an indiscriminate attempt to bypass an organization’s security systems.
Attacks from threat actors are becoming more scalable and automated, and the synthetic imagery used to bolster fraudulent verifications is becoming more indiscernible from reality to the human eye. That’s why organizations require the most cutting-edge biometric and identity document verification technologies to combat threats.
Combating synthetic IDs with better ID scanning
With bad actors leveraging a combination of real and fraudulent information to create synthetic identity documents, a simple scan will no longer suffice.
That’s where AI-driven ID capture, extraction, and verification can excel. An AI-based approach can understand the full context of the identity document it’s scanning, performing data consistency and validation checks across the extracted information and systematically looking for visual defects or anomalies to provide a greater level of assurance.
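One well-documented example of such a consistency check is the check-digit scheme in a passport’s machine-readable zone (MRZ), defined in ICAO Doc 9303: each protected field carries a check digit computed with repeating weights 7, 3, 1, so a forger who alters a field without recomputing the digit is caught immediately. A minimal sketch:

```python
# ICAO Doc 9303 MRZ check digit: digits keep their value, letters map to
# A=10..Z=35, the filler '<' counts as 0; weights cycle 7, 3, 1.

def mrz_check_digit(field: str) -> int:
    """Compute the ICAO 9303 check digit for one MRZ field."""
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)
        elif ch.isalpha():
            value = ord(ch.upper()) - ord("A") + 10
        else:  # the filler character '<'
            value = 0
        total += value * weights[i % 3]
    return total % 10

# The sample document number from the ICAO 9303 specification.
print(mrz_check_digit("L898902C3"))  # 6
```

Checks like this are deterministic and cheap; the AI layer adds value on top of them by cross-validating the MRZ against the visually printed fields and the document’s security features.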
By taking a data-driven approach that combines non-forensic and forensic inspection of diverse identity documents, as well as liveness detection that creates a one-time biometric, your business can feel confident user identity documents are genuine.
Lastly, the flexibility and continuous learning of an AI-based solution ensure that it can extract and verify the vast majority of ID types across geographies – so you’re not sacrificing flexibility, end-user experience, or ease of use in favor of security and trust.
Risk reduction requires the right tech partners
The combination of synthetic identities, their accompanying fake IDs, and deepfakes is introducing more risk into digital onboarding processes for organizations of all sizes across industries.
Without a tech stack that protects against these increasingly sophisticated fraud methods, financial service providers risk lost revenue, eroded customer goodwill, and regulatory penalties.
Leveraging AI-based ID capture, extraction, and verification technologies like those offered by Microblink, alongside one-time biometric solutions like iProov’s, enables a more secure digital onboarding experience. Combining the assurance and flexibility of these proven technologies can help combat the growing danger of financial fraud.