
AI Harm: We Have Laws for That

This is part 3 of a five-part series on the regulation and legislation of AI. The previous parts can be found here:

Last time, we bravely faced the unsettling truth that trying to control open-source AI is a bit like attempting to lasso a whisper, and we stared down the very real “crisis of reality” deepfakes are brewing. Now, let’s tackle another popular misconception floating around: the notion that AI has suddenly dropped a whole new species of crime onto our unsuspecting legal systems, leaving them utterly baffled and unprepared.

Let’s be honest right from the start: that’s mostly fiction.

See, AI, in its current state, is fundamentally a tool. It’s a remarkably sophisticated, lightning-fast, and occasionally unnerving tool, no doubt. But it’s still just a tool. And much like any other powerful instrument – from a printing press to a chainsaw, or even that free photo editor you accidentally downloaded – when someone decides to use it to do something truly heinous, chances are, we’ve already got a law on the books for that. We’re not talking about entirely new categories of sin here, just the same old bad actors, now equipped with shinier, faster toys.

“AI Crime”: Familiar Faces with a Digital Makeover

Let’s pick apart some of these so-called “AI crimes” that are making headlines. You’ll probably find they sound suspiciously like the garden-variety mischief and malice humanity has been perfecting for centuries.


Take defamation, for instance. Imagine a deepfake video meticulously crafted by AI, showing someone you know – perhaps a local politician, or just that annoying neighbor with the perpetually barking dog – saying or doing something utterly fabricated and reputation-destroying. Terrifying? Absolutely. A brand-new crime? Not a chance. This is classic defamation, specifically libel if it’s a recorded or written falsehood. It’s no different than someone painstakingly photoshopping a fake picture to ruin a reputation back in the day, or spreading false rumors via a fake newspaper clipping. The AI just automates the forgery; the intent to harm a reputation remains the same, and our existing laws on libel and slander are perfectly equipped to handle it.

Then there’s fraud and impersonation. AI is already churning out shockingly realistic phishing emails, voice scams where “your bank manager” sounds eerily like, well, your actual bank manager, or even crafting fake identities to siphon off your hard-earned cash. Is it clever? Undeniably. Is it new? Please. This is just good old-fashioned fraud, identity theft, and impersonation, albeit with a splash of sci-fi glamor. Our legal system has been chasing con artists and identity thieves since before they had electricity, let alone neural networks. The laws governing deception and theft are deeply entrenched and directly applicable.

And what about harassment and threats? AI can generate personalized, abusive content, orchestrate targeted harassment campaigns, or even deliver chilling, synthesized threats that land with disturbing accuracy. Again, the method is novel, but the vile intent and the very real psychological damage inflicted are anything but. Legal protections against harassment, cyberstalking, and making threats aren’t suddenly null and void because the nasty message was spat out by an algorithm. The internet has been giving us a crash course in digital abuse for decades; AI merely provides a faster, more efficient megaphone for the abusers.

Or consider copyright infringement, a particularly lively debate in the AI world. Generative AI models, trained on mountains of data – including, let’s face it, vast amounts of copyrighted material – can then spit out content that bears an uncanny resemblance to existing works. While the legal eagles are having a grand old time debating the nuances of “fair use” and “transformative works” (and I’m sure their billable hours are thriving), the fundamental concept of using someone else’s creative property without permission is as old as art itself. AI didn’t invent intellectual property theft; it simply made it a whole lot faster and more insidious.

And finally, the most egregious example: child exploitation. The ability of AI to generate abusive images or videos of children is truly sickening, a moral abyss. But let’s be absolutely clear: the creation, distribution, or possession of such material is already covered by some of the most stringent laws globally, with universally harsh penalties. AI provides a new, horrifying means for this crime, but it creates no legal vacuum. The laws are unequivocally there, and they are brutal for a reason.

The Actual Headaches: Who Pulled the Digital Trigger, and From Where?

So, if our legal toolbox is already pretty well-stocked, why does everyone keep wringing their hands about “unregulated AI harm” and demanding new laws? Because the real struggle isn’t about what the crime is, but the maddeningly complex puzzle of who committed it, and where they were when they did it.

Attribution, my friends, is the bane of the digital age. Try tracing that AI-generated deepfake back to the specific human who deliberately created or distributed it with malicious intent. It’s like trying to find the author of an anonymous hate letter in a world where everyone has suddenly developed perfect, untraceable penmanship, and they can mail that letter to a billion people instantly, from anywhere. This calls for highly sophisticated digital forensics, robust metadata, and ideally, some form of proof-of-origin technology like C2PA (Coalition for Content Provenance and Authenticity), which aims to cryptographically link content to its source. But getting global adoption for such solutions is, shall we say, a significant logistical headache.
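For the technically curious, here’s the core idea in miniature: the creator signs a hash of the content plus some origin metadata with a private key, and anyone holding the matching public key can later check that nothing was altered or forged. To be clear, this is a toy sketch of the general mechanism using Python’s third-party cryptography package, not the actual C2PA manifest format, and the metadata fields are invented for illustration.

```python
# Toy sketch of cryptographic content provenance (the general idea behind
# standards like C2PA; NOT the actual C2PA manifest format).
# Requires the third-party "cryptography" package.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_content(content: bytes, creator: str, key: Ed25519PrivateKey) -> dict:
    """Bind a content hash and origin metadata to a signature."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,  # invented metadata field, for illustration only
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": key.sign(payload).hex()}

def verify_content(content: bytes, record: dict, pub: Ed25519PublicKey) -> bool:
    """Check that the content still matches its signed manifest."""
    if hashlib.sha256(content).hexdigest() != record["manifest"]["sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # manifest was forged or tampered with

# Usage: the publisher signs at creation time...
key = Ed25519PrivateKey.generate()
record = sign_content(b"original video bytes", "newsroom@example.org", key)
# ...and anyone with the public key can verify (or catch a swap) later.
assert verify_content(b"original video bytes", record, key.public_key())
assert not verify_content(b"deepfaked video bytes", record, key.public_key())
```

The hard part, as noted above, isn’t the cryptography; it’s getting every camera, editing tool, and platform on Earth to participate.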

And then there’s the delightful tangle of cross-jurisdictional enforcement. Imagine an AI-powered scam originates from a malicious actor chilling out in Ulaanbaatar, targets an unsuspecting granny in Stockholm, and bounces through a series of anonymous servers somewhere in the Cayman Islands. Suddenly, you’re not just dialling the local police. You’re deep in the labyrinth of international law, extradition treaties, and grappling with the distinct possibility that the country where the perpetrator resides simply couldn’t care less about another nation’s regulatory zeal. It’s essentially like trying to catch a criminal who just keeps hopping borders, except now they can do it at the speed of light, making traditional law enforcement methods painfully slow and often comically ineffective.

Smart Moves, Not Panic Moves: Adapting the Legal System We Have

Instead of succumbing to the political theatre of “regulating AI” with broad, often ill-informed, and ultimately unenforceable legislation (which, let’s be honest, often just stifles legitimate innovation), we should be focusing on making our existing legal machinery smarter and more agile.

Our courts, for instance, need to get their heads around how to authenticate AI-generated evidence. Digital forensics, the analysis of cryptographic watermarks, and the ability to prove (or disprove) authenticity will become absolutely critical in legal proceedings. This isn’t about drafting entirely new statutes; it’s about equipping legal professionals with new techniques to apply existing laws effectively.
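What does “authenticating digital evidence” look like at its most basic? A cryptographic hash recorded at the moment of collection: change a single bit of the file afterwards and the hash no longer matches. A toy chain-of-custody check might look like the sketch below, using only Python’s standard library (the file name and recorded digest are placeholders):

```python
# Toy chain-of-custody integrity check: hash a piece of digital evidence
# and compare it against the digest recorded when it was collected.
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Stream the file through SHA-256 so large videos fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

RECORDED_AT_COLLECTION = "9f86d081884c7d65..."  # placeholder digest

evidence = Path("exhibit_a.mp4")  # hypothetical exhibit
if sha256_of_file(evidence) == RECORDED_AT_COLLECTION:
    print("Integrity holds: the file is bit-for-bit what was collected.")
else:
    print("Mismatch: the file was modified after collection.")
```

Real forensics layers much more on top of this, of course, but the point stands: the tools for proving authenticity already exist; the job is getting courts fluent in them.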

We also need to have honest, robust conversations about platform accountability. What’s the role of social media giants, hosting providers, and even AI model developers in policing AI-generated harmful content? This is a complex, ever-evolving debate, but the core of it should be about refining existing concepts of liability and responsibility, not inventing entirely new “AI platform police” laws from scratch.


And remember, legal precedent is a powerful force. As AI-driven cases inevitably start making their way through the courts, judges will naturally begin interpreting existing laws in the context of this new technology. It’s a slow, methodical churn, but it’s precisely how our legal system has always evolved to address challenges, from the advent of the telephone to the widespread adoption of the internet.

Crucially, laws against defamation, fraud, harassment, and exploitation aren’t really interested in the brand of hammer that was used. They are fundamentally interested in the malicious human intent and the harmful action that resulted. AI, as a tool, does not change that core principle. The law is designed to punish the actor, not the hammer they wielded.

Final Thoughts: Empower the System, Ditch the Fear-Mongering

Let’s cut through the fluff: the narrative that AI introduces a completely new, unregulated wild west of crime is, for the most part, a political fantasy designed to generate headlines. Most of the genuine harms AI can facilitate are already covered by laws specifically designed to curb human malice.

What we actually need is not more laws, but a more intelligent, agile, and globally coordinated application of the ones we already possess. This means:

  • Smarter application: Law enforcement agencies, legal professionals, and regulatory bodies desperately need better education and resources to understand AI’s true capabilities and to leverage digital forensics effectively. We need more “CSI: Deep Learning” and less “Old Man Yells at Cloud” (though that still makes a great article image!).
  • International cooperation: Because digital borders are about as effective as a colander for holding water. We desperately need better global teamwork to nab the bad guys who think they can hide across a thousand servers and a dozen jurisdictions.
  • Public awareness: Educating the general population on how to spot deepfakes, identify sophisticated phishing scams, and generally protect themselves from AI-driven manipulation is, frankly, far more effective than trying to police every single pixel generated by a rogue algorithm.

Ultimately, true harm always stems from malicious human actors, irrespective of the gleaming new tools they choose to employ. Our energy and resources should be spent holding those actors accountable within our established, adaptable legal frameworks, rather than chasing the phantom of “AI regulation” – an exercise in futility that will only distract from the very real and solvable challenges at hand.

If you like what you’ve read here, consider signing up for my newsletter and get updates directly in your inbox.
