Identy.io, a mobile biometric authentication provider, is betting that the next generation of fraud defense must assume one uncomfortable truth: traditional liveness checks are no longer enough. The company has announced new deepfake detection capabilities built directly into its facial capture solution, targeting the rapid rise of AI-generated identity fraud that is overwhelming banks, governments, and enterprises worldwide.
The move comes as deepfakes—once a novelty—have become a mainstream attack vector. According to industry estimates cited by Identy.io, generative AI–enabled fraud losses in the U.S. are expected to hit $40 billion by 2027, up from $12.3 billion just four years earlier. In Q1 2025 alone, deepfake-enabled fraud exceeded $200 million, with deepfakes now responsible for 40% of biometric fraud attempts globally.
That growth curve explains why vendors across fintech, payments, and digital identity are racing to rethink how trust is established when AI can convincingly impersonate a real person in real time.
Why Deepfakes Change the Rules of Biometric Security
For more than a decade, biometric systems relied on two pillars: presentation attack detection (PAD) and liveness detection. These methods were effective against older spoofing tactics—printed photos, video replays, or even 3D masks—because those attacks left physical clues behind. Screens reflected light differently. Photos lacked depth. Masks failed dynamic challenges like blinking or head movement.
Deepfakes undermine both pillars at once.
Modern attacks blend two techniques at once. First, they use AI-generated facial models mapped onto a live human operator, allowing the synthetic face to pass active liveness challenges naturally. Second, attackers bypass physical presentation entirely by injecting fake video feeds directly into the biometric capture pipeline using virtual cameras or software hooks.
In short, there is no “artifact” to detect, and the system is interacting with something that behaves like a real person. Traditional PAD wasn’t built for that reality.
Jesús Aragón, CEO and co-founder of Identy.io, frames the problem in broader terms. When visual evidence itself becomes unreliable, it doesn’t just enable fraud—it undermines public trust in digital interactions altogether. From financial onboarding to government services, identity verification becomes a weak link across the digital economy.
Defense-in-Depth for the AI Fraud Era
Identy.io’s response is a layered security architecture designed to assume compromise at multiple levels. Rather than relying on a single detection method, the company is combining several independent defenses that address both synthetic media creation and digital injection pathways.
At the core is AI-driven deepfake detection, which analyzes visual and temporal signals to identify artifacts and inconsistencies introduced by generative models. This layer focuses specifically on detecting AI-generated faces, not just physical spoofs.
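Identy.io has not published its model internals, but the idea of scoring "temporal signals" can be illustrated with a minimal sketch: genuine video tends to drift smoothly in a face-embedding space, while spliced or generated frames can jump abruptly between consecutive frames. All function names and the threshold below are hypothetical, not the vendor's actual method.

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def temporal_inconsistency(frame_embeddings):
    """Mean cosine distance between consecutive frame embeddings.

    Genuine capture drifts smoothly; generated or spliced frames can
    produce abrupt jumps in embedding space.
    """
    pairs = zip(frame_embeddings, frame_embeddings[1:])
    dists = [cosine_distance(a, b) for a, b in pairs]
    return sum(dists) / len(dists)

def looks_synthetic(frame_embeddings, threshold=0.15):
    """Flag a clip whose average frame-to-frame jump exceeds threshold."""
    return temporal_inconsistency(frame_embeddings) > threshold
```

A production detector would learn such signals from data rather than hand-set a threshold; the sketch only shows where temporal consistency fits in the decision.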
Complementing that is injection attack prevention, which verifies the integrity of the capture process itself. By validating the camera, device, and execution environment, the system aims to block synthetic video streams from ever entering the biometric pipeline—a critical countermeasure against virtual camera attacks.
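The company does not disclose how it validates the capture environment. As a hedged illustration only, one simple integrity signal is whether the reported capture device matches a known virtual-camera product; real systems combine many signals such as driver attestation, OS-level APIs, and hardware checks. The blocklist and function names below are hypothetical.

```python
# Hypothetical sketch of one capture-integrity signal: checking the
# reported device name against known virtual-camera products.
KNOWN_VIRTUAL_CAMERAS = {
    "obs virtual camera",
    "manycam virtual webcam",
    "snap camera",
    "xsplit vcam",
}

def is_suspicious_device(device_name: str) -> bool:
    """Flag capture devices whose name matches a known virtual camera."""
    name = device_name.strip().lower()
    return any(vc in name for vc in KNOWN_VIRTUAL_CAMERAS)

def allow_capture(device_name: str, signature_valid: bool) -> bool:
    """Admit a frame source only if the device looks physical and the
    capture app's code signature verified (a second, hypothetical signal)."""
    return signature_valid and not is_suspicious_device(device_name)
```

Name-based checks alone are trivially evaded by renaming a driver, which is exactly why the article's point stands: integrity must be established from multiple independent signals, not one.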
Traditional passive PAD remains in place as a baseline, continuing to defend against legacy spoofing methods like photos, screens, and masks. The key difference is that no single layer is trusted to stop everything. If one mechanism fails, others are designed to catch what slips through.
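The layered policy described above can be sketched as each layer holding an independent veto: a capture is accepted only if every check passes, so a bypass of one control is still caught by the others. The layer names here are illustrative, not Identy.io's actual component names.

```python
from dataclasses import dataclass

@dataclass
class LayerResult:
    name: str
    passed: bool

def layered_verdict(results):
    """Defense-in-depth policy: every layer holds an independent veto.

    A capture is accepted only if all layers pass; any single failure
    rejects it and reports which layers tripped.
    """
    failed = [r.name for r in results if not r.passed]
    return ("accept", []) if not failed else ("reject", failed)

verdict, reasons = layered_verdict([
    LayerResult("passive_pad", True),
    LayerResult("deepfake_detection", True),
    LayerResult("injection_prevention", False),
])
# Rejected: the injection layer failed even though the others passed.
```

The design choice is deliberate: ANDing independent checks trades a small false-rejection risk for resilience against an attacker who defeats any single control.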
This defense-in-depth approach mirrors a broader shift across cybersecurity, where single-point controls are increasingly viewed as brittle in the face of adaptive AI-driven threats.
Performance Without Penalizing Users
One of the persistent challenges in biometric security is the trade-off between fraud prevention and user experience. Aggressive detection systems often increase false rejections, frustrating legitimate users and driving abandonment.
Identy.io says its enhanced solution builds on an established performance baseline. In iBeta ISO 30107-3 testing, the company previously achieved 100% attack detection with 0% false rejections of bona fide users. The new deepfake capabilities are designed to extend that balance, addressing both synthetic content and delivery mechanisms without adding friction to the capture process.
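ISO/IEC 30107-3 formalizes these two claims as APCER (the fraction of attack presentations wrongly accepted) and BPCER (the fraction of bona fide presentations wrongly rejected); "100% attack detection with 0% false rejections" corresponds to APCER = 0 and BPCER = 0. A minimal sketch of both metrics, computed from labeled test outcomes:

```python
def apcer(attack_decisions):
    """Attack Presentation Classification Error Rate (ISO/IEC 30107-3):
    fraction of attack presentations incorrectly classified as bona fide."""
    accepted = sum(1 for d in attack_decisions if d == "accept")
    return accepted / len(attack_decisions)

def bpcer(bona_fide_decisions):
    """Bona Fide Presentation Classification Error Rate:
    fraction of genuine presentations incorrectly rejected."""
    rejected = sum(1 for d in bona_fide_decisions if d == "reject")
    return rejected / len(bona_fide_decisions)
```

The tension the article describes is visible in these two numbers: tightening a detector usually pushes APCER down while pushing BPCER up, which is why a result of zero on both is the benchmark vendors advertise.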
That matters because identity verification increasingly sits at the front door of digital services. Whether opening a bank account, accessing healthcare, or onboarding employees, high-friction authentication can translate directly into lost revenue or degraded service delivery.
Deepfakes Go Mainstream—and Cheap
Perhaps the most alarming aspect of the deepfake surge is how accessible the tools have become. What once required specialized hardware and expertise can now be done with free or low-cost software running on consumer PCs. Attackers no longer need advanced technical skills; they just need time and a target.
This democratization of attack capabilities is accelerating fraud volumes and lowering the barrier to entry for criminal operations. It also explains why identity vendors are under pressure to move faster than regulatory frameworks, which tend to lag technological change.
In that context, Identy.io’s announcement reflects a broader industry reckoning: AI fraud is not an edge case anymore—it’s the default threat model.
Implications for Financial Services and Government
For financial institutions, deepfake resistance is rapidly becoming a regulatory and reputational issue, not just a technical one. As losses mount, institutions that rely on outdated liveness systems may face higher fraud costs, compliance scrutiny, and erosion of customer trust.
Government agencies face a parallel challenge. Identity proofing underpins everything from digital IDs to benefits distribution, and deepfake-enabled impersonation threatens the integrity of those systems at scale.
By embedding deepfake detection directly into facial capture, Identy.io is positioning its platform as a future-ready component for organizations that can no longer afford incremental security upgrades.
The Road Ahead for Biometric Trust
The biometrics industry is entering a new phase—one where authenticity must be continuously verified, not assumed. Deepfake detection, injection prevention, and environment integrity checks are likely to become standard features rather than premium add-ons.
Identy.io’s latest release underscores that reality. As AI continues to blur the line between real and synthetic, identity systems must evolve from static checks into adaptive, layered defenses.
In an era where faces can be fabricated on demand, trust will belong to the platforms that can prove—repeatedly and reliably—that the person on the other side of the camera is who they claim to be.