
The AI Reality Check: It’s Not Skynet, It’s the Erosion of Truth

This is the second part of a five-part series examining the challenges of legislating AI. The first part is available here: The Illusion of Control: Why “Regulating AI” is a Political Fantasy

Beyond the Robots: Understanding AI’s True Immediate Threat

When we talk about Artificial Intelligence, our minds often leap to Hollywood blockbusters: sentient robots, apocalyptic scenarios, or hyper-intelligent machines plotting humanity’s downfall. We envision Skynet, HAL 9000, or menacing androids. And while these cinematic visions are certainly compelling, they distract us from a far more insidious, immediate, and pervasive threat that AI poses right now, in our daily lives: the quiet erosion of truth.

This isn’t a future problem; it’s a present reality. The real danger isn’t that AI will become self-aware and decide to destroy us. It’s that AI, in its current state, is fundamentally altering our perception of reality, making it increasingly difficult to discern what’s real, what’s fabricated, and what’s merely plausible.

The Public Misunderstanding: Why AI Still Has “Hands and Fingers”

One of the biggest obstacles to understanding AI’s current impact is a widespread misunderstanding of its capabilities. For many, the “AI” they encounter is still clumsy. They see AI-generated images with six fingers, distorted faces, or nonsensical text, and they confidently conclude: “Oh, AI isn’t that good. It’s obviously fake.”

AI-generated hand

This dismissal, while understandable given past limitations, is profoundly dangerous. It traps public perception in an outdated understanding of AI’s rapid evolution. Imagine looking at early digital photography and concluding that photos could never be perfectly realistic. That’s where we are with many people’s view of AI.

The “hands and fingers” problem is rapidly diminishing. AI models are improving at an astonishing rate, learning to render complex details with increasing fidelity. What was once a clear tell-tale sign of AI generation is quickly becoming a rare artifact, reserved for older models or less optimized workflows. The public’s continued focus on these diminishing imperfections creates a false sense of security, leading them to believe they can easily spot AI-generated content. They can’t, and soon, they won’t even be able to rely on the obvious tells.

The “Can’t Unsee” Effect: Deepfakes and the Psychological Impact

The erosion of truth isn’t just about images and text. It’s about a fundamental shift in our psychological landscape. We’re entering an era where convincing deepfakes of voices, videos, and even real-time conversations are becoming increasingly accessible and indistinguishable from reality.

Think about a phone call where the voice on the other end is perfectly cloned from a loved one, making a request or an emotional plea that is entirely fabricated. Or a video of a public figure saying something they never did, appearing utterly authentic. The power of these tools is immense, and terrifying.


In 2024, Starling Bank, together with Mortar Research, conducted a survey on voice-cloning scams. 28% of the participants believed they had been the target of a voice-cloning scam in the past year. While that is frightening enough, it is not nearly as scary as the fact that 48% of the participants weren’t even aware that something like this was possible. Only about 30% of the participants felt they knew what to look out for and how to protect themselves against this type of scam.

The psychological impact is profound. Once you’ve seen or heard a convincing deepfake, a seed of doubt is planted. Every subsequent piece of media, every news report, every online interaction, can be viewed through a new lens of suspicion. “Is this real? Or is it AI?” This “can’t unsee” effect creates a pervasive sense of distrust, not just in AI, but in the very fabric of shared reality. It undermines our ability to agree on what is true, fostering division and making it harder for society to function based on common facts.

The Emotional Aspects and Very Real Consequences of Deepfakes

This isn’t merely about confusion; it’s about the deep, searing emotional damage that arises when highly convincing fabrications strike at the heart of our most fundamental relationships and our personal identity. Imagine the immediate, gut-wrenching shock of seeing a video of a spouse in a fabricated intimate scenario, or a loved one making a hateful or incriminating statement they never uttered. Even if the intellect quickly registers it as “fake,” the primal part of the brain that processes visual and auditory information has already recorded the “event.” This is the core of the “can’t unsee” problem: what the senses perceive, the mind struggles to truly delete, leaving a corrosive residue of doubt and hurt.

The consequences stretch into every facet of life. Marriages can be shattered by a jealous ex circulating fabricated “evidence.” Careers can be destroyed when a coworker or a malicious actor engineers a video clip of you saying something egregious, sending it to your boss. Reputations are smeared, friendships are fractured, and individuals can face public shaming or sophisticated bullying based on entirely manufactured realities. The emotional fallout – the betrayal, the shame, the anger, the profound violation of trust – is real, even if the “event” itself never occurred. We humans are not biologically wired to easily disregard what our eyes and ears tell us, and AI is exploiting this ancient vulnerability with terrifying new precision.


One example of how deepfakes have been weaponized is the infamous Nth Room case, where deepfake videos were used to coerce young girls, many of them teenagers or underage, into performing actual sexual acts. These acts were recorded, sold or traded, and used to blackmail the girls into further humiliating acts. For those who want to dive deeper into what the Nth Room was, and the damage it did, the podcast Rotten Mango has several episodes that get to the bottom of how the phenomenon was created and how it operated.

While the Nth Room is an extreme case, it would serve us well to acknowledge how easily this could happen again, anywhere in the world. And while we cannot prevent someone from creating deepfakes of us or our loved ones, we can prepare ourselves through education. The vast majority of the population, in my opinion, needs to understand both how extremely realistic deepfakes can be and to mentally prepare for the possibility that this could happen to them or someone close to them.

The technology for it is already out there, and it will not go away.

If you haven’t read part 1 yet, you can find it here: Part 1

Published in AI, English, Tech