According to the FBI, criminals are exploiting generative AI and deepfake technology to send AI-generated spam, create fake social media personas, and build AI-coded phishing websites to defraud companies and consumers alike. In 2025 alone, deepfake-enabled AI fraud resulted in over $200 million in losses.
Real-World Examples Highlight Rising Threat
Last year, Deloitte, a financial auditing and consulting firm, reported an instance in which a Hong Kong company sent US$25 million to a fraudster who used deepfake technology to impersonate the company’s chief financial officer.
According to the Bitget Anti-Scam Report, AI-related scams were directly tied to $4.6 billion in cryptocurrency losses in 2024.
CBS News also reported that AI-generated deepfake videos of Elon Musk were contributing to billions of dollars in fraud losses within the United States.
According to the report, unsuspecting people were shown videos of Elon Musk endorsing third-party financial services and cryptocurrency schemes. One victim opened a $10,000 account, only to discover that they’d been duped by a scam company that used deepfake videos of Musk to appear legitimate.
A New Industry Responds to the AI Threat
A disturbing pattern of deepfake AI fraud is emerging, and in response, a new industry is developing to identify and stop AI-enabled cybercrime.
Last month, a new AI fraud prevention startup, TruthScan, announced the launch of a suite of deepfake detection tools, designed to identify and prevent AI-related deepfake fraud.
Fighting Deepfakes with AI
“Deepfake-related fraud will cause a devastating loss of assets to both businesses and individuals this year,” says Christian Perry, CEO of TruthScan.
Before TruthScan, Perry directly oversaw and helped develop adversarial AI software called Undetectable AI, which specializes in researching AI-deepfake technology.
Perry says, “When we developed Undetectable AI, we were at the forefront of detecting AI-generated content, and researching and developing novel ways of bypassing AI detection systems.”
“Now we’re taking everything we’ve learned, all of the data we’ve collected, and plugging that into the TruthScan suite to help companies identify AI-generated IDs and documents, and detect attempts to spoof facial recognition systems.”
The Cybersecurity Community Takes Notice
The cybersecurity community is beginning to embrace AI as a critical security tool for combating the rise of AI-related fraud, a field that is still in its early stages and arguably underserved.
Benjamin Miller, a former U.S. diplomat, fraud prevention expert, and advisor at TruthScan, shares his concern that people still don’t grasp the inherent risk of AI-related fraud. “I think we could be looking at billions in fraud-related financial losses this year directly tied to AI cybercrime.”
How Deepfake Detection Works
How exactly does deepfake detection work? Perry explains that the TruthScan software analyzes image patterns, pixels, watermarks, and altered file data to create a unique fingerprint for digital images and media.
“On one hand, we’re always training on AI-generated media, but we’re also looking at things like image file tampering, pixel analysis, and indicators of provenance.”
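The fingerprinting idea Perry describes can be illustrated with a minimal sketch. This is not TruthScan’s actual implementation, just the general concept: a cryptographic hash of a file’s exact bytes changes completely when even a single byte is altered, which is one simple way tampering with file data can be flagged.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest serving as a unique fingerprint
    for a file's exact byte contents."""
    return hashlib.sha256(data).hexdigest()

# Simulate an original image file and a tampered copy
original = bytes(range(256)) * 4          # stand-in for image bytes
tampered = bytearray(original)
tampered[100] ^= 0xFF                     # flip one byte, as an editor might

print(fingerprint(original) == fingerprint(bytes(tampered)))  # prints False
```

Real detectors go much further, combining perceptual hashes, pixel-level statistics, and provenance metadata, but the principle is the same: give each file a verifiable fingerprint and flag anything that doesn’t match.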
A New Age of Fraudulent Documents
So, imagine you’re a cryptocurrency marketplace, and you have a know-your-customer (KYC) policy that requires users to upload their identification documents.
Previously, bypassing these systems required fraudsters to spend hours creating passable fake documents, or to steal images of a real person’s driver’s license. Now, with advances in generative image models, the job is far easier: fake passports, ID cards, and even birth certificates can be generated in minutes with the right tools.
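One standard automated defense a KYC system can run, independent of any vendor’s tooling, is validating the internal consistency of a submitted document. Machine-readable passport fields, for example, carry check digits defined by ICAO Doc 9303 (digits keep their value, letters map A=10 through Z=35, the `<` filler is 0, and weights cycle 7, 3, 1); an AI-generated fake may fail such a check.

```python
def mrz_check_digit(field: str) -> int:
    """Compute the ICAO Doc 9303 check digit for a passport MRZ field."""
    def value(ch: str) -> int:
        if ch.isdigit():
            return int(ch)
        if ch == "<":          # filler character counts as zero
            return 0
        return ord(ch) - ord("A") + 10

    weights = (7, 3, 1)        # repeating weight pattern per the spec
    total = sum(value(c) * weights[i % 3] for i, c in enumerate(field))
    return total % 10

# ICAO Doc 9303 sample passport number and its published check digit
print(mrz_check_digit("L898902C3"))  # prints 6
```

A checksum like this is cheap to verify and hard to get right by accident, which is exactly why layered document checks slow fraudsters down even before deepfake-specific detection is applied.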
One site offering AI-generated fake IDs that made headlines was OnlyFakes, a service that churned out fake IDs for $15 apiece.
Mission: Slow AI Fraud Down
Perry says he and the team at TruthScan think the implications here are obvious. “If it’s getting quicker and easier for criminals to do fraud with AI, we need to create systems that make doing fraud much slower and harder—ideally, next to impossible.”
Categories of Deepfake Media
Right now, there are various categories of deepfake media. The most common and dangerous are videos, photos, and audio.
Deepfake photos include fraudulent identification documents, forged cards or evidence, and even nonconsensual pornography. Deepfake video is on a similar track, but also covers fraudulent business calls, fake endorsements, blackmail, and misrepresentation.
Deepfaked audio lets phone scammers impersonate celebrities, company personnel, or even a victim’s family member, often in a way that’s nearly indistinguishable to the untrained ear.
When the founders of TruthScan launched Undetectable AI two years ago, they claimed to have honest intentions. The service allows users to rewrite AI-generated text so it reads more like something a human would write, and they judged the risk of abuse to be low.
Ironically, the founders experienced various fraud attacks, some of which involved attackers using AI. Now, they hope to apply everything they’ve learned to thwart would-be fraudsters. “We’re going to be offering the full TruthScan suite to enterprises, government agencies, and industries.”
The Road Ahead: Collaboration Is Key
Looking ahead, the battle against synthetic deception will hinge on collaboration as much as code. Regulators, enterprises, and security researchers must share threat intelligence in near‑real time, benchmark detection standards, and invest in ongoing staff education.
As Perry notes, the cost of complacency is measured not only in stolen funds but also in eroded public trust—a currency far harder to replace.
A new wave of financial crime is emerging, but so are the methods to detect it.