
The Illusion of Control: Why “Regulating AI” is a Political Fantasy

The calls for regulating AI get stronger as politicians and decision-makers become aware of AI’s true potential, but is it even possible to regulate something like Artificial Intelligence? In this five-part series I will highlight reasons why regulation and legislation regarding AI can, at best, affect some online services, and in a worst-case scenario have the opposite effect and make it even easier for malicious groups or individuals to weaponize AI.

Let’s be honest from the very start. When it comes to national legislation, politicians are years too late to legislate against almost anything that could potentially have warranted such regulations. That battle was lost the second Open Source AI became publicly available. Let’s have a look at a few legitimate concerns that not only politicians, but probably a large part of the population, would support legislating on.

  • Mandatory watermarks on all AI-generated media
    We have all seen fake images and videos posted and shared across social media and other parts of the internet, ranging from follower-farming to deepfakes of celebrities. (See the sketch after this list for why such watermarks are trivial to strip from locally generated media.)
  • Mandatory NSFW (Not Safe For Work) filters
    To prevent the creation of nonconsensual, sexually explicit images of real people.
  • Additional filters to prevent impersonation of influential people
    To make it harder for someone to create videos of influential people, politicians and decision-makers for propaganda purposes.
  • Mandatory watermarks in texts generated by LLMs
    To prevent, for example, students from cheating and, as a consequence, landing positions they really aren’t qualified for.
  • Various additional regulations
    There will be a plethora of things that politicians want to regulate that will either restrict AI from doing certain things, or restrict people from using AI in certain ways. Some of these regulations might be legitimate, while others would most certainly stem from politicians being uncomfortable with a population that is rapidly gaining more information as well as power.
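
To make the first point concrete: the simplest watermarking scheme embeds provenance information as metadata in the image file, and it is exactly as fragile as it sounds. Below is a hypothetical sketch using the Pillow library; the tag name and file names are made up. More robust pixel-domain watermarks exist, but they too can be degraded by anyone who controls the generation pipeline.

```python
# Hypothetical sketch: a metadata "watermark" and how easily it is removed.
# Requires the Pillow library (pip install Pillow); names are made up.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# A compliant online service embeds a provenance tag when saving the image.
meta = PngInfo()
meta.add_text("ai-provenance", "generated-by-model-x")  # hypothetical tag
img = Image.new("RGB", (512, 512))  # stand-in for a generated image
img.save("watermarked.png", pnginfo=meta)

# Anyone can strip the tag: re-saving without metadata drops the text chunk.
Image.open("watermarked.png").save("clean.png")

print(Image.open("watermarked.png").text)  # {'ai-provenance': 'generated-by-model-x'}
print(Image.open("clean.png").text)        # {}
```

A regulation can force a cooperating online service to add such a tag; it cannot force the millions of local installations to keep it.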

The Illusion Of Regulating AI

The bullet-point list (at least the first four points) shows completely reasonable grounds for regulation, so why can’t we just legislate to address those points?

The main reason is that even though you could potentially legislate on these very valid points, it would be impossible to enforce that legislation. Legislation of this type would at most affect online services that offer image and video generation on their websites. But even that would be uncertain and ineffective, simply because you have to take into account which national law applies to which online service.

You can’t expect a company operating from Hong Kong, and hosting its website and services there, to follow the national legislation of France, for example. This was recently demonstrated with Twitter/X and Elon Musk: what’s called “hate speech” in many European countries is called “free speech” in the USA. We can all have our opinions about which side is correct, but the fact still stands: a country or a region cannot enforce its own laws outside that country or region.

But you can ban Twitter/X, as well as AI services that don’t comply with your national laws, can’t you? Again, yes you can. And again, you can’t enforce it. People will still access these services whether they are banned or not, either with a VPN or some other service. And you can bet that if one option is blocked or removed, another will become available before the day is over.

Regulating AI: Political Fantasy versus Open Source Reality

Even if, for argument’s sake, we pretend that the above complications with enforcing legislation aren’t an issue, there is still one major reason why these types of legislation are basically pointless: the Open Source community. Most people have probably heard the term “Open Source” and have some vague idea of what it might be.

But most people probably also don’t understand how extremely large this community really is, or how it actually works. Up until a few years ago, when I first started getting interested in cryptocurrencies, and more specifically in creating smart contracts for various purposes, I had probably visited GitHub a couple of times, most likely because it showed up in the search results when searching for “free software”.

The open source way is a set of principles grounded in transparency and collaboration, and can be summarized as follows:

  • Transparency: The code is open and available to everyone.
  • Collaboration: The community monitors itself, and members volunteer to help improve the code.
  • Release early and often: An iterative approach leads to better solutions faster. You learn by doing, but also from making mistakes.
  • Inclusive meritocracy: Good ideas can come from anywhere, and the best ideas should win.
  • Community: Communities form naturally by groups of people with common interests.

So what does all this mean? Basically, it means that everything politicians (and others) are afraid might happen in the future unless they regulate AI is already possible and openly available to the general population.

Already back in October 2022, when Stable Diffusion 1.5 was released as an Open Source model, every person owning a home computer was given the ability to create very realistic images, including images made to look like a real, existing person. Because it was Open Source, the code spread rapidly through the community and was subsequently improved upon.
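
To show how low the barrier is, here is a minimal sketch of local image generation using Hugging Face’s diffusers library and a Stable Diffusion 1.5 checkpoint. It assumes the diffusers, transformers and torch packages plus a consumer GPU; the checkpoint name below is the commonly mirrored one, but any local copy of the weights works, entirely offline.

```python
# Minimal local image generation with open weights: no online service,
# no account, and no server-side filter in the loop.
# Assumes: pip install diffusers transformers torch, plus a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # or a path to any local copy of the weights
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photorealistic portrait of a person").images[0]
image.save("portrait.png")
```

The point is not the specific library. Once the weights are on disk, nothing stands between the user and the output: even the optional built-in safety checker can be switched off at load time by passing safety_checker=None.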

The model was fine-tuned by many different people, and for many different purposes. And if we’re being honest, a lot of the models were fine-tuned for the purpose of creating high-quality, realistic sexual images. Today there is no way of knowing how many fine-tuned Stable Diffusion 1.5 models (sexually explicit as well as non-sexual) are out there, but I would estimate that there are probably several thousand different models, and most likely thousands of copies of each of those, making it probable that millions of copies of this particular model are in circulation.

No legislation in the world will change that. There is literally nothing anyone can do that will either collectively destroy all these models or force their owners to destroy them. This fact alone should be enough to discourage any thought of legislation, or at least make it clear that such legislation could never be enforced.

Even though it was possible to create high-quality, realistic images using Stable Diffusion 1.5, it wasn’t always easy, and disfigurement was fairly common. It took only 8-9 months until the next model, SDXL, was released in July 2023, also Open Source and available to the general population. This model produced higher-quality output and was easier to control, thanks in part to its text encoding: OpenAI’s CLIP (Contrastive Language–Image Pre-training) ViT-L, which 1.5 already used, now paired with the larger OpenCLIP ViT-bigG.

Suddenly a model that was easier to control (meaning easier to steer the output), had higher-quality output, and was just as easy to fine-tune as the 1.5 model was made available to everyone. Without any real statistics, and going only by my own observations, a very large share of these models were also fine-tuned for sexual content. And I would estimate that there are even more fine-tuned SDXL models available than 1.5 models.
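
For the curious, the difference in controllability is visible right in the pipeline: SDXL carries two text encoders where 1.5 had one. A small sketch, again assuming the diffusers package and the publicly released SDXL base checkpoint:

```python
# SDXL in diffusers exposes two text encoders, which is part of why its
# prompts are easier to steer than Stable Diffusion 1.5's single encoder.
# Assumes: pip install diffusers transformers torch, plus a CUDA GPU.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # or any local copy
    torch_dtype=torch.float16,
).to("cuda")

print(type(pipe.text_encoder).__name__)    # CLIPTextModel (OpenAI CLIP ViT-L)
print(type(pipe.text_encoder_2).__name__)  # CLIPTextModelWithProjection (OpenCLIP ViT-bigG)

image = pipe("a photorealistic portrait of a person").images[0]
image.save("portrait_sdxl.png")
```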

Again, the models, the code and everything else are out there, and nothing can be done about it. These models are not affected by legislation about watermarks or NSFW filters; how could they be? It’s as if you own a blue coffee mug at home, and all of a sudden the whole world bans blue coffee mugs. Your mug won’t magically change color because someone legislated against it, and no country would have the resources to manually search every citizen’s home for blue coffee mugs. The same goes for these AI models, except that every model can be copied in a few seconds, moved from a computer to cloud storage, emailed to a friend, kept on a USB stick, or stored in any other way you can think of.

As long as a single one of these models is still available on a single person’s computer, they might as well all still be available, because of how easy they are to copy and share.
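
To underline the point: a model checkpoint is just a file, and a copy is byte-for-byte identical to the original. The sketch below is hypothetical (the file names are made up), but the mechanics really are this simple.

```python
# Copying a model checkpoint takes seconds and produces an identical file,
# which is why "recalling" a released model is impossible in practice.
# File paths here are hypothetical.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in 1 MiB chunks so even multi-GB checkpoints fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

src = Path("sd15-finetune.safetensors")          # hypothetical local checkpoint
dst = Path("/mnt/usb/sd15-finetune.safetensors")
shutil.copy2(src, dst)

assert sha256(src) == sha256(dst)  # the copy is indistinguishable from the original
```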

Just Stop Making AI Open Source Then

So let’s once again pretend that the complications above aren’t an issue: that legislation could somehow be enforced across national borders, and that everything you might want to legislate against weren’t already available to everyone. Couldn’t we then simply stop new AI from being released as Open Source?

The Open Source community is neither good nor bad, but rather good and bad. You have people who create high-quality deepfake code, but you also have those who create high-quality code to expose deepfakes. All in all, the Open Source community is one (big) reason why AI is evolving as quickly as it is. And for the Open Source community to work on these projects (which largely are completely unpaid), the code has to be open and available.

Stable Diffusion 3

About a year ago, on June 12, 2024, Stable Diffusion 3 had its official release, but already in April 2024 I was able to test it and write a review of it. With this model, Stability AI had abandoned Open Source, partly in favour of safety and security. In the name of safety and security, the code for the new model was no longer available, and fine-tuning was not possible. This, together with heavy filters weeding out any content that Stability AI had deemed unsafe, rendered the model nearly useless. Let me show you an example.

The images above were both created using Stable Diffusion 3, with the only difference that the prompt for the left image specifies “a male artist” while the right image specifies “a female artist”.

Stable Diffusion 3 vs SDXL

The left image above was created using the SDXL model, and the right image was created using Stable Diffusion 3. They were both created using the exact same prompt.

The conclusion seems to be that generating images with women in them is unsafe, which limits the use of the model a lot, even for people who never intended to create NSFW images. On top of the occasional blurred-out image depicting (fully clothed) women, the model had some kind of Mengele aspirations, which resulted in way too many images like the ones below.

Stable Diffusion 3

This, of course, resulted in pretty much no one wanting to use the model. Everyday users didn’t like it because of its extreme limitations, and the Open Source community saw no point in using it, since the code was closed and there was no way for them to tinker with it and really test what it was capable of. And even though there was no real way of fine-tuning the model, Stability AI’s license was so restrictive that CivitAI decided to ban the use of the model altogether on their website.

This hurt Stability AI a lot, and four months later (October 2024) they announced their updated model, Stable Diffusion 3.5. Stability AI had now opened up to the idea of some fine-tuning of their model, and they lifted some of the restrictions in the license. But while the 3.5 model was pretty good quality-wise, Stability AI had lost a lot of trust in the AI community.

The whole debacle with Stable Diffusion 3, paired with the fact that Black Forest Labs released their Flux model at almost the same time as Stable Diffusion 3.5, meant that a large part of the community picked Flux over Stable Diffusion 3.5.

The image above shows the statistics for Stable Diffusion 3.5 and Flux.1 Dev on CivitAI. Stable Diffusion 3.5 has 16,950 downloads and 17,308 on-site generations, while Flux.1 Dev has 178,547 downloads and 8,315,580 on-site generations, over roughly the same time period.

Stability AI went from completely dominating the market for generative AI to becoming a curiosity that might have some uses, all in just a few months. A big part of that loss can (in my opinion) be attributed to Stability AI rejecting the Open Source community and being overly zealous when it came to “safety and security”.

For any AI company to reject the Open Source community in a similar way in the future, they would need to have something so exceptional that no one else could achieve it in any way, or they risk the same fate as Stability AI. And that’s why the Open Source community will continue to be a large and important part of the development of AI.
