From cryptocurrency market scams to $25 million heists, AI-generated deepfakes have already wreaked havoc on the world.
Imagine getting a call that sounds exactly like your partner, asking to use your credit card in an emergency. The voice is shaky and frantic. Concerned, and a little confused, you pull out your wallet and read off your card numbers. For one Hong Kong-based multinational firm, this scenario became reality, except the voice being impersonated was their CFO's, and the amount they were asked to wire was $25 million. They sent it immediately.
Imagine running a cryptocurrency exchange with a KYC system to prevent ID theft and money laundering, only to find out that people were bypassing that system with AI-generated fake IDs.
Or waking up to your phone buzzing from people sharing naked photos of you on social media that you never took, because they’re AI-generated, yet perfect replicas.
All of these examples are based on real-life occurrences.
With AI deepfakes becoming smarter, faster to create, and more realistic-looking every day, one question remains: How do we stop deepfakes from being weaponized and wreaking havoc on the world?
Watermarking AI Content
Whether visible or embedded invisibly within the pixels, digital watermarks are one of the few proposed defenses against deepfakes. AI tools like Grok, Google's Veo 3, and even OpenAI's Sora all embed digital watermarks that reveal a video's origin.
Some advocates and policymakers have proposed requiring all forms of AI-generated content to include watermarks.
One problem with visible watermarks is that they can simply be cropped out of images and video. Text watermarks fare no better: they can be erased.
Wired reported that researchers “broke” all of the watermarks they tested, including text-based ones.
So, although watermarking can act as a sort of provenance stamp, the technology isn’t foolproof.
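To make the "hidden under the pixels" idea concrete, here is a minimal sketch of one classic invisible-watermarking technique, least-significant-bit (LSB) embedding, using a plain list of 8-bit pixel values in place of a real image. The function names and pixel values are illustrative, not any vendor's actual scheme, and the example also shows why such marks are fragile.

```python
# Minimal LSB watermarking sketch. Pixels are plain 8-bit integers;
# all names and values here are illustrative, not a real product's scheme.

def embed(pixels, bits):
    """Hide watermark bits in the least significant bit of each pixel."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the LSB with one mark bit
    return out

def extract(pixels, n):
    """Read back the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]

pixels = [200, 117, 54, 89, 33, 140, 76, 210]
mark = [1, 0, 1, 1]

stamped = embed(pixels, mark)
print(extract(stamped, 4))  # → [1, 0, 1, 1]: the mark survives a lossless copy

# But the mark is fragile: any lossy re-encoding or brightness shift
# perturbs the low bits and destroys it.
damaged = [p + 1 for p in stamped]  # simulate a tiny brightness change
print(extract(damaged, 4))          # no longer matches the original mark
```

The visual change from flipping one low bit per pixel is imperceptible, which is the appeal; the same property is why, as the researchers in the Wired report found, trivial transformations are enough to break it.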
Detecting Deepfake Content
AI detection is typically associated with AI-generated text, but detection technology may be more effective on multimedia content than on text.
For example, the startup TruthScan recently launched an AI deepfake detection and anti-AI fraud suite.
“Here’s the thing, right, when it comes to video and image content, we’re looking at thousands or even millions of data points, each pixel, the source of the file — I mean, there’s just so much to analyze,” says Christian Perry, CEO of TruthScan.
Perry says he launched the company after concluding that deepfake content was a new, and potentially more dangerous, analogue of the computer virus.
“At least when it comes to computer viruses, you have things like anti-virus software and a general knowledge about their existence.”
But when it comes to deepfakes, “there’s no defense against the most dangerous ones,” he explains.
TruthScan claims to be the first company to address all forms of AI-generated deepfake fraud using proprietary adversarial technology.
Using Blockchain Technology
One of the key elements that makes cryptocurrency transactions so promising is often overlooked: blockchain technology lets us create an immutable ledger, a record that is continuously verified across the network and tied to a specific wallet address.
Think of it this way: any piece of media, whether audio, video, or text, can be timestamped at its moment of creation or upload and condensed into a unique cryptographic signature, often referred to as a hash. That signature acts as a digital fingerprint for that specific piece of media. Any subsequent alteration to the original, no matter how minor, produces a completely different signature, immediately alerting anyone to the change.
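The fingerprint property is easy to demonstrate with an ordinary cryptographic hash. A minimal sketch using SHA-256 (one common choice; the article doesn't name a specific algorithm), with placeholder bytes standing in for a real media file:

```python
# Illustrating the "digital fingerprint" idea with SHA-256.
# The byte strings stand in for the raw contents of a media file.
import hashlib

original = b"frame-by-frame bytes of a video file"
altered  = b"frame-by-frame bytes of a video file."  # one byte appended

fp_original = hashlib.sha256(original).hexdigest()
fp_altered  = hashlib.sha256(altered).hexdigest()

print(fp_original)
print(fp_altered)
print(fp_original == fp_altered)  # False: a one-byte edit changes the whole hash
```

Because the hash is only 64 hex characters regardless of the file's size, it is cheap to store on a ledger, while any tampering with the underlying media is immediately detectable by recomputing it.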
When this inherent immutability and verifiable timestamping are combined with advanced AI media-detection algorithms, the potential applications become powerful and far-reaching.
AI can analyze vast amounts of media to identify patterns, authenticate content, and even detect deepfakes or manipulated media with increasing accuracy. Linking all these AI-driven insights directly to the blockchain’s immutable ledger could create a virtually tamper-proof system for verifying the authenticity and origin of digital content.
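The linking described above can be sketched as a toy hash-chained ledger: each entry commits to a media fingerprint, a label (say, a detector's verdict), a timestamp, and the hash of the previous entry, so rewriting any past record invalidates everything after it. This is an illustrative simplification, not a real blockchain with consensus or signatures.

```python
# Toy hash-chained ledger sketch: each entry commits to the previous
# entry's hash, so tampering with history breaks verification.
# Illustrative only; a real blockchain adds consensus, signatures, etc.
import hashlib
import json
import time

def entry_hash(entry):
    """Canonical SHA-256 hash of a ledger entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

ledger = []

def record(media_bytes, label):
    """Append a media fingerprint plus a label (e.g. a detector verdict)."""
    entry = {
        "media_hash": hashlib.sha256(media_bytes).hexdigest(),
        "label": label,
        "timestamp": time.time(),
        "prev": entry_hash(ledger[-1]) if ledger else None,
    }
    ledger.append(entry)
    return entry

def verify(ledger):
    """Check that every entry still points at the unmodified one before it."""
    for prev, cur in zip(ledger, ledger[1:]):
        if cur["prev"] != entry_hash(prev):
            return False
    return True

record(b"clip-001 raw bytes", "authentic")
record(b"clip-002 raw bytes", "suspected deepfake")
print(verify(ledger))                      # → True

ledger[0]["label"] = "authentic (edited)"  # tamper with history
print(verify(ledger))                      # → False
```

The tamper-evidence comes entirely from the `prev` pointer: changing any field of an old entry changes its hash, which no longer matches what the next entry recorded.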
This synergistic relationship holds immense promise for combating misinformation, protecting intellectual property, and establishing trust in an increasingly digital world.