The headlines this week are predictable, if exhausting. As reported by Reuters, the UK government and Ofcom have launched an investigation into Elon Musk’s X, specifically targeting the Grok AI chatbot for its ability to generate “illegally non-consensual” images. Prime Minister Keir Starmer has called the content “disgusting,” and the regulatory machinery is spinning up to do what it does best: ban the tool to silence the problem.

While the intent—protecting individuals from digital abuse—is noble, the method is a dangerous exercise in performative safety. By focusing on the means of creation rather than the actor behind it, regulators are selling the public a bill of goods.
As I’ve argued before, we are witnessing a fundamental misunderstanding of the technological landscape. Banning a platform like Grok won’t stop deepfakes; it will merely erode our ability to deal with them.
The Accountability Paradox
Let’s look at the facts regarding Grok. Generating images on the platform requires a Premium subscription. This means there is a credit card, a billing address, and a distinct digital footprint attached to every prompt entered into the system.
If a user generates illegal, non-consensual imagery using Grok, we know exactly who they are. The evidence is tied to their identity. This is the ideal scenario for law enforcement: a direct line from the crime to the criminal.
By clamoring to ban or cripple these centralized, regulated tools, we are inadvertently pushing bad actors toward open-source alternatives. It is certainly tempting to simply demand filters on every AI platform, but that is not a solution.
The technology for creating deepfakes on your own PC, using models like Flux or Stable Diffusion, has been freely available for years. These models can run on ordinary consumer hardware, disconnected from the internet, with zero oversight and zero traceability. When we ban the tools that leave a paper trail, we ensure that the only deepfakes being made are the ones we can never trace.
The Danger of a False Sense of Security
The most insidious side effect of regulatory bans is psychological. When governments announce, “We have banned the deepfake machines,” the general public exhales. They lower their guard. They begin to operate under the assumption that the images they see on their screens have been vetted and are “safe.”

This is a catastrophic error in the information age. As I discussed in Why Regulating AI is a Political Fantasy, we cannot legislate away the existence of the technology. When the public believes the government has “handled” the AI problem, they stop critically analyzing the media they consume.
If deepfakes are “banned,” then the average person is more likely to believe a fake video is real, simply because “it wouldn’t be allowed online if it were fake.” Prohibition doesn’t remove the lie; it validates it by creating an illusion of truth.
We Already Have Laws for This
The outrage surrounding Grok ignores a simple legal reality: we do not need new laws to criminalize harassment, defamation, or the distribution of non-consensual intimate imagery.
As I detailed in AI Harm: We Have Laws for That, the tool used to commit the crime is irrelevant. If someone uses Photoshop to forge a signature, we charge them with fraud; we don’t ban Adobe. If someone uses a camera to spy on a neighbor, we charge them with invasion of privacy; we don’t ban Nikon.
The hyper-focus on the AI aspect is a distraction. If someone uses Grok to break the law, charge that person. The mechanisms for justice already exist; we just need the resolve to apply them to the human, not the algorithm.
The Erosion of Truth and the Need for Resilience
We are entering an era where seeing is no longer believing. This is uncomfortable, but it is our reality. Attempting to preserve the “sanctity of the image” by banning generators is a futile fight against entropy.
In Reality Check: Not Skynet, but Erosion of Truth, I warned that the real danger isn’t a rogue AI takeover, but the collapse of shared objective reality. Banning Grok doesn’t fix this; it just delays our adaptation to it.
We need to stop treating the public like children who must be shielded from “dangerous math.” Instead, we must focus on AI Resilience Through Education and Critical Thinking. We need a society that is skeptical by default, capable of verifying sources, and aware that digital content is malleable.
The investigation into X and Grok is a political reflex, not a technological solution. It attempts to put the genie back in the bottle by smashing the bottle, unaware that the genie has already moved to a hard drive in a basement the regulators can’t see.
We don’t need bans. We need traceability, we need the enforcement of existing laws, and most importantly, we need a public that understands that in 2026, truth is something you verify, not something you are spoon-fed.
If you prefer reality checks over moral panic, consider joining my newsletter. You’ll get these articles directly in your inbox—no algorithms, no filters, just the analysis.
On a lighter note, if you are interested in the creative and technical side of AI, check out my Patreon. I regularly share free, clean ComfyUI workflows and tools for builders and artists.
