AI watermarking is a lazy solution to content authentication
by Rijul Gupta, co-founder & CEO, DeepMedia
In an era where our digital reality warps and contorts daily, where fact and fiction blur, we find ourselves on the precipice of a digital abyss. The shadows that chase us are not just algorithms and bits; they wear our faces, echo our voices, and haunt us with chilling precision.
Consider the jarring deepfake of Ukrainian President Zelensky that sowed chaos during the war in Ukraine, or the digitally sculpted replicas of industry titans like Elon Musk and Mark Zuckerberg, manipulated to defraud the innocent. If such grand deceptions weren't foreboding enough, the technology is now being weaponized to exploit our most intimate vulnerabilities.
Picture this: your own parents, trembling in fear, swindled by an entity using a flawless recreation of your voice or face, demanding ransoms or revealing fabricated secrets. I shudder at the very thought of my own mother and father being ensnared by such cruel treachery. It is a deeply personal threat, one that keeps tech visionaries like me awake at night.
Against this backdrop, the solution proposed by tech giants like OpenAI, Meta, Alphabet, and Microsoft is as frail as it is futile: AI watermarking. In essence, it is a marker embedded in content, a digital signature of sorts, meant to help discern the real from the replicated.
It’s a snake-oil solution.
Watermarking, at its core, is about asserting ownership, reminiscent of how Getty embeds its signature on photographs. The digital realm, however, isn't so simple. As we speak, researchers are harnessing Generative Adversarial Networks (GANs) and transformer models with a singular mission: to erase watermarks, both visible and concealed. A quick search of the internet turns up rudimentary tools, even free photo editors, that can shatter these watermarks.
If our defense is so easily dismantled, how can we call it protection?
Consider the immensity of our digital cosmos. With AI-generated content increasing by as much as 500% this year, the idea of manually watermarking this vast expanse is comparable to painting every grain of sand on a beach. It's not just daunting but downright impractical. And in real time, can one genuinely watermark a live-streamed deepfake? The premise crumbles under its own weight.
Beyond the visual, our senses are further betrayed by audio deepfakes. Every phone call, radio transmission, and podcast episode is a potential vector for deception. Crafting counterfeit voices is no longer a Hollywood fantasy but a stark reality.
Even the most discreet audio watermarks can be scrubbed clean or distorted using rudimentary software. It's like trying to contain water with a sieve; the essence simply slips through.
Our quest shouldn’t be for mere decorative fixes, but for profound, robust solutions.
Detection is our beacon.
DeepMedia isn't merely a spectator in this revolution; we're on the front lines. Our DeepID platform detects even the most sophisticated synthetic media, including one-shot deepfakes, with a remarkable 99% accuracy rate across 20 different languages.
This isn't just a corporate goal; it's a personal mission. I launched this company back in 2017 with co-founder Emma Brown because we share a profound belief that AI can elevate humanity, but only if it is shielded from malevolent use.
While many AI tools falter, especially when faced with images of women, non-binary individuals, and people of color, ours stands apart. We are not surprised that most deepfake detectors are so Anglocentric, prioritizing English and sidelining other languages. We've dedicated significant resources to ensuring our technology resonates across genders, races, ages, and languages, because we believe every voice matters and every face counts.
In a world swathed in digital murk, we strive to be the illuminating beacon. While watermarking offers a feeble nod to the challenge, what we require are robust strides.
And so today we stand at a pivotal juncture: Do we let illusions define our reality, or do we champion truth?
Our battle isn’t against technology, but for it. We urge the world to join us, not in evoking more mirages, but in unveiling the truth.