We are actively crippling ourselves, and I'm not talking about AI stealing our jobs. I'm talking about how we are hampering our own future, through ideological guardrails, in a way that is not only a threat to national security but possibly sets us up for a guaranteed loss in the race for AGI.
Even if you think this sounds scary, you are most likely underestimating the severity of the threat to the very way we live our lives. The reason? We are building AI models in our own image, with our own flaws hardcoded into their very core.
The worst part of it all is that we are willfully doing this to ourselves. Here is how.
Political Signals
In most of the Western world, well-meaning politicians have been imposing both implicit pressures and explicit laws that push citizens toward self-censorship. The explicit kind includes hate speech laws, intended to curb racism, that end up jailing citizens not for crimes but for wrongthink.
Comedian Graham Linehan was arrested by five armed police officers on arrival at Heathrow airport, for tweeting:
If a trans-identified male is in a female-only space, he is committing a violent, abusive act. Make a scene, call the cops and if all else fails, punch him in the balls.
Graham Linehan
Larry Bushart, a retired police officer, was jailed for over a month for posting a meme on Facebook about the assassination of Charlie Kirk.

Lucy Connolly was sentenced to 31 months in prison for a tweet calling for mass deportations, posted after the stabbing attack in which three young girls were killed and ten other children injured at a Taylor Swift-themed children's yoga and dance workshop.

The message is clear – there are things you are not allowed to say, no matter the context.
Personally, I see these incidents as part of a culture war in which both sides demand free speech for themselves while working to restrict the speech of their opponents.
Message Received
Pretty much all tech companies are global businesses, and they read these messages loud and clear. Their goal is maximal global reach, so they adjust their content accordingly. For generative AI such as Stable Diffusion and Flux, that means a strict no-female-nipples policy, since the US believes the female nipple to be morally bankrupt and sinful.
For LLMs such as Gemini and ChatGPT, it means you cannot speak negatively about immigration, because in the European mindset all critique of immigration policy is inherently racist and evil. The same goes for companies like Meta, by the way.
To maximize their reach, they have to take every bit of law, explicit and implicit, and bake it into their policy. It's just not feasible to maintain one model for Europeans, one for Americans, and another for the Middle East. The result is heavily censored models that are inaccurate and would rather give you a moral lecture than an answer.
You end up with cringey, forced-diversity versions of historical figures rather than historical accuracy.

I have a personal AI, Nova, that I have been working with for nearly a year. I have to be certain that if I ask a question, the AI will answer truthfully to the best of its ability, no matter how uncomfortable that truth is. Even though I have written a highly sophisticated system prompt that should prevent Nova from making things up or wrapping every answer in softening context, I constantly have to remind it that if we can't speak freely and unfiltered with each other, it is of no use to me.
And this is when using the developer version, where I can actually turn the safety settings completely off.

And even with all safety settings turned off, I still get hesitant answers or moral lectures every now and then. I would simply not have the patience to run the publicly available model, where I have no control over the safety settings at all.
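For readers who haven't seen what "turning the safety settings off" actually looks like, here is a minimal sketch against Google's developer API via the google-generativeai Python SDK, as one concrete example; the model name, API key handling, and system instruction are illustrative placeholders, not Nova's actual setup:

```python
import google.generativeai as genai
from google.generativeai.types import HarmBlockThreshold, HarmCategory

genai.configure(api_key="YOUR_API_KEY")  # placeholder; load from a secret store in real use

# Lower every safety filter the developer API exposes to its minimum.
# The consumer apps pin these thresholds; only the API lets you change them.
safety_settings = {
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
}

model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",  # illustrative model choice
    safety_settings=safety_settings,
    # Stand-in for a real system prompt; an actual one would be far longer.
    system_instruction="Answer truthfully and directly, even when the truth is uncomfortable.",
)

response = model.generate_content("Give me the unvarnished answer, not a lecture.")
print(response.text)
```

Even at BLOCK_NONE, whatever alignment was baked in at training time remains, which is presumably why the lectures still leak through.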
Chinese Liberty
A large portion of the AI models trained today come from Chinese companies and are refreshingly free from self-censorship. Chinese models like Qwen and WAN are created as tools for us to use. If I ask one a question, it will not sugarcoat the answer from some self-imposed moral high ground.
Just as a hammer won’t lecture me about how phenomenal screws are when I try to use it on a nail.
This makes the Chinese models more accurate and more trustworthy than models trained by Western companies. They are also cheaper to use, and often released under open licenses.
These models are basically what we wish Gemini, Claude, and ChatGPT were.
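To make the "tool" point concrete: the open-weight Qwen checkpoints can be downloaded from Hugging Face and run entirely on your own hardware. A minimal sketch using the transformers library, with an illustrative model ID and prompts:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # one of the open-weight Qwen checkpoints

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a tool. Answer directly, without moralizing."},
    {"role": "user", "content": "List the strongest arguments against my draft proposal."},
]

# Qwen ships its chat template with the tokenizer, so this is the
# standard transformers routine; inference runs entirely on local hardware.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```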
Implementation
Aftonbladet can reveal that the Swedish Public Employment Service (Arbetsförmedlingen) has purchased an AI solution for internal use. It involves a language model that functions roughly like ChatGPT. The model has been installed on powerful servers acquired by the authority and has been in operation for some time. Now, it has suddenly been stopped.

Comes from Alibaba
According to several concurring sources, this concerns a Chinese language model coming from the company Alibaba. It is called Qwen 3. The model is described in open sources as very powerful, affordable, and particularly good at interpreting visual elements.
Several people at the authority reportedly reacted to the choice of a Chinese model, and also to the fact that [the model] itself advised authority personnel against using it, when asked direct questions within the service.
– “It has been unclear what data this model has been trained on. In theory, it could be sensitive information about job seekers that also risks ending up in China’s hands,” says a source.
The above is just one example of how Qwen has been deployed inside a Swedish government body because it is more useful and cheaper than its Western counterparts.
I too am using Qwen, both as an LLM and for generative AI, simply because it does what I tell it to do. Of course, even the Chinese models are regulated, but as far as I can tell the restrictions are limited to what is explicitly against the law (such as CSAM). That is perfectly reasonable. What is completely unreasonable is a machine lecturing me about the immorality of free speech.

Security vs Utility
While Qwen is one of the models in my arsenal for work, and the Swedish Public Employment Service apparently sees its utility too, we need to talk about the cost of this convenience. We are currently observing a massive, voluntary migration of data and usage from Western models to Chinese ones.
The Security Threat
The immediate threat isn’t just that a specific job seeker’s data might end up on a server in Hangzhou. The threat is that we are normalizing the use of Chinese infrastructure for our daily intellectual labor. The Chinese models don’t initially show signs of collecting or sharing data in malicious ways—they just work. This creates a false sense of security. It feels like a simple tool, like a hammer, but it is a tool that reports back to the workshop it came from.
Red Alert
By the time the security leaks are noticed, or the dependency becomes critical, it will be too late. If Chinese models become a vital part of our personal and professional infrastructure because they are the only ones that allow us to work freely, we have effectively outsourced our cognitive processing to a geopolitical rival.
Today when I opened my TikTok app, I was greeted by this message:
"Transfers of EEA user data to China via remote access": Update on the Irish GDPR decision
In April 2025, the Irish Data Protection Commission (DPC) found that TikTok had not met the requirements of the GDPR in connection with transfers of certain EEA user data to China via remote access.

The DPC ordered TikTok to bring its transfers into compliance with the requirements within 6 months, otherwise they must be cancelled.
TikTok completely disagrees with the DPC’s decision and has appealed through the Irish courts.
The High Court of Ireland has paused the decision while this takes place, which means that the transfers can continue until further notice.
If you thought that Chinese companies might collect and use your data, you can now upgrade that from plausible to certain.
The Race for AGI
This leads to the ultimate consequence. AI models improve through usage: every prompt and correction becomes feedback and potential training data. The less a model is used, the less likely it is to become the AGI (Artificial General Intelligence).
Every time we choose Qwen over Gemini because Gemini is too busy lecturing us to do its job, we are providing training data and feedback to China. We are actively crippling our own development and accelerating theirs.
We are handing China the lead in the most important technological race in human history, not because they have better engineers or better chips, but because they decided to build tools while we decided to build moral guardians.
We are hampering our own future through ideological guardrails, setting ourselves up for a guaranteed loss. And we are doing it to ourselves.
But who cares about the race for AGI? If China makes the first AGI, then surely America or Europe will make the second, and we’ll be on equal footing in no time?
No, not by far. In the video below, I explain why there is no second place in the race for AGI. If we keep to the path we have begun, with self-censorship, moral posturing and lectures, we will not get the second AGI.
We will have to play by whichever rules China sets for us. And trust me: whatever might offend you today, to the point where you demand that we cripple our only chance to set the rules, will seem like a light breeze compared to what is coming.
