I have previously written about AI and how you can create your own AI images by installing Stable Diffusion on your computer. When you run the AI locally, you also avoid the censorship that most sites and apps impose. Many may think there is only one reason to avoid a censored image generator, but it is far from only those creating NSFW images who want to get around censorship.
Are Censored AI Good or Bad?
Censored AI is neither good nor bad; it all depends on the context. Since generative AI is available everywhere on the internet, even as apps on your phone, I think censorship still serves a purpose. No parent wants their child sitting around creating grossly pornographic images, whether by mistake or intentionally.
Censorship filters don’t always work as intended, however, and writing prompts in a language other than English can sometimes be interpreted as a deliberate attempt to bypass the filter. This can get you blocked from the AI site or app, in some cases permanently. The filters also often catch scantily clad figures, or images that could be interpreted as violent. And if you’re a serious user and creative artist, you certainly don’t want to second-guess every prompt just to stay clear of the filter.
Is Generative AI User-Friendly?
This is another question with no direct answer. If you google “create AI images” you will soon discover an endless number of websites and apps offering the service. Some are completely free, some are free with an optional paid tier, and many are funded by advertising. In other words, finding the right one is not easy.
Generally speaking, most websites and apps that offer the service are relatively easy to use. But if you don’t use their paid service, you’re forced to endure a lot of advertising, long queues and even censorship. What’s really of interest is whether the AI you’re running locally on your own computer is user-friendly or not.
In my post where I described how to install Stable Diffusion on your computer, you can see that that particular version runs entirely on text commands. For those who aren’t used to it, that’s not very user-friendly, and on top of that, it consumed a lot of resources and took a long time to generate images.
AI is now developing at a breathtaking pace, and a huge number of people volunteer to help with development and share their projects. This means the post I wrote five months ago is more or less completely outdated: there are tools that are faster, better and easier to use now. Calling all of this “AI” is a bit misleading, because what we call AI is basically a model that has been trained to do certain things, like creating images from text.
Using A UI to Create AI Images
What is a UI, someone might wonder? Here is a simple explanation:
UI can be simply explained as a series of visual elements that enable the user to interact with a product or system. A UI element can be a button, a screen, a sound or a page: something the user can press, see, hear or interact with in order to exchange information. UI design focuses on the look and feel of products and systems, how the surface looks and works, and aims to visually guide the user through the interface of a product or system.
— Xlent
There are now a number of different options for using UI to work with the AI models you have installed locally on your computer. I’m going to go through one of them, and it’s called ComfyUI. It can be installed and used on both Windows and Mac computers, but since I use Windows, that’s the one I’m going to go through.
Install ComfyUI
First of all, you need some kind of archiving program. You may already have WinRAR or WinZip on your computer, but I recommend installing 7-Zip.
You can download the program here –> Download 7-Zip
Once you have downloaded and installed 7-Zip, you need to download ComfyUI itself.
You can download ComfyUI here –> Download ComfyUI
The file is approximately 1.5 GB in size and may take a while to download depending on your internet connection.
The downloaded file is named ComfyUI_windows_portable_nvidia_cu118_or_cpu.7z. Right-click the file and select 7-Zip –> Extract Here, which creates a folder named ComfyUI_windows_portable. The extraction may take a while, and it is important that you do not interrupt it.

When 7-Zip is finished, you will have a bunch of files and folders in the folder called ComfyUI_windows_portable, and you can now move the entire folder to somewhere else on your computer.
The next step is to download one or more models (what we sometimes loosely refer to as AI). It is perfectly possible to copy the model from Stable Diffusion if you have previously installed the version I wrote about here.
You can also download the model here –> Download Stable Diffusion v. 1.4
After you have downloaded the model, you will have a file named sd-v1-4.ckpt. Move or copy that file to the checkpoints folder, which you will find under ComfyUI_windows_portable –> ComfyUI –> models –> checkpoints. This is also where you save other models that you download later.

Now you have all the basic files you need to run Stable Diffusion using ComfyUI.
To start, go to your ComfyUI_windows_portable folder, where you will find two different .bat files. If you have an Nvidia graphics card, double-click the file called run_nvidia_gpu; with any other graphics card, double-click run_cpu instead. Running the AI on the processor (run_cpu) is much slower than running it on the graphics card.
How to Use ComfyUI?
When you start ComfyUI for the first time, the program will open in your browser and look something like this.

ComfyUI is initially preset to create a test image, so if you don’t change any settings and press Queue Prompt, it will generate an image similar to this.

All images you create will be saved in the Output folder which you can find in ComfyUI_windows_portable –> ComfyUI –> output
The Different Parts of ComfyUI
I will go through the most basic parts of the workflow in ComfyUI; a deeper dive will follow in a separate post.
Load Checkpoint

Here you load the AI model itself (the sd-v1-4.ckpt file you downloaded earlier). The model I happen to have loaded here is Stable Diffusion version 3 (sd_v3). Different models have been trained for different things: for example, Anything v3 is specifically trained to create anime images, while Realistic Vision V2 is trained to create realistic images.
Text Encode (Prompt)

In the top text box, you enter keywords that describe what you want your image to contain. The bottom text box is a so-called negative prompt, where you enter what you do not want included in your image.
Empty Latent Image

Here you specify the width and height of the image you want to create, and batch_size sets how many images are created at a time. I recommend not changing the size itself, but upscaling the image later instead: larger images use more resources and take longer to create, and you will notice that you quickly generate a couple of hundred nearly identical images while trying to find the right settings.
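As a side note on why size matters so much: Stable Diffusion does not sample the full-size image directly but a smaller “latent” version of it. Here is a small Python sketch of the relationship; the downscale factor of 8 and the 4 channels are assumptions based on how SD 1.x latents typically work, not something ComfyUI itself exposes:

```python
# Rough sketch of how image size maps to latent size in Stable Diffusion.
# Assumption: a downscale factor of 8 and 4 latent channels (typical for
# SD 1.x) -- treat this as illustration, not ComfyUI's actual API.

def latent_shape(width, height, batch_size=1, factor=8, channels=4):
    """Return the (batch, channels, height, width) shape of the latent tensor."""
    return (batch_size, channels, height // factor, width // factor)

def latent_elements(width, height, batch_size=1):
    b, c, h, w = latent_shape(width, height, batch_size)
    return b * c * h * w

# A 512x512 image is sampled as a 64x64 latent:
print(latent_shape(512, 512))  # (1, 4, 64, 64)

# Doubling both sides quadruples the work, which is part of why
# larger images are so much slower:
print(latent_elements(1024, 1024) / latent_elements(512, 512))  # 4.0
```

This also explains why upscaling afterwards is cheaper than generating large images directly.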
KSampler

You could say that this is where you fine-tune the settings for your images.
The seed is a random number that lays the foundation for your image, so to speak. If you create an image that you think is okay but want to improve or change, you need to reuse the same seed. If we instead use exactly the same settings as for the first image but a different seed, we get something like this image.

Here I used the following seed: 610348621073340
You can copy my seed, and get almost exactly the same image yourself.
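To illustrate why a fixed seed reproduces an image: the seed initializes the random noise the sampler starts from, so the same seed with the same settings always gives the same starting point. This toy sketch uses Python’s random module, not ComfyUI’s actual noise generator:

```python
import random

# Toy illustration: the seed determines the "starting noise", so the
# same seed always reproduces the same starting point.

def starting_noise(seed, n=5):
    rng = random.Random(seed)  # seeded generator, fully deterministic
    return [round(rng.gauss(0, 1), 3) for _ in range(n)]

a = starting_noise(610348621073340)
b = starting_noise(610348621073340)
c = starting_noise(610348621073341)

print(a == b)  # True  -- same seed, identical noise
print(a == c)  # False -- a different seed gives different noise
```

The same principle holds in ComfyUI: same seed plus same settings means the sampler walks the same path to (almost) the same image.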
Control_after_generate has four different settings:
Fixed – the seed does not change between images you create. As mentioned, this is necessary if you want to create the same image multiple times.
Increment – After each image you create, your seed increases by 1.
Decrement – After each image you create, your seed decreases by 1.
Randomize – A completely new random seed is created for each image.
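The four settings above can be sketched in a few lines of Python. This mimics the described behavior; it is not ComfyUI’s actual implementation, and the upper bound for a random seed is an assumption:

```python
import random

# Sketch of the four control_after_generate behaviors: what happens
# to the seed after each generated image.

def next_seed(seed, mode):
    if mode == "fixed":
        return seed            # keep the same seed
    if mode == "increment":
        return seed + 1        # seed goes up by 1
    if mode == "decrement":
        return seed - 1        # seed goes down by 1
    if mode == "randomize":
        return random.randrange(0, 2**64)  # assumed range, for illustration
    raise ValueError(f"unknown mode: {mode}")

print(next_seed(100, "fixed"))      # 100
print(next_seed(100, "increment"))  # 101
print(next_seed(100, "decrement"))  # 99
```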
Steps can roughly be explained as how many times the image is altered before it is finished: 20 steps means the image is changed, or sampled, 20 times. It is easy to think that the higher this number, the better the image. That is not true; a lot depends on which model you use, and some models create very good images in just 10–15 steps while others need 100.
CFG (classifier-free guidance) is a scale that controls how closely your image should follow what you describe in the prompt. A higher value forces the image to match your text more strictly, while a lower value gives the model more freedom to deviate from it.
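Under the hood this is usually the classifier-free guidance formula: the model makes one prediction with your prompt and one without, and the CFG scale decides how far to push the result toward the prompted prediction. A minimal numeric sketch, where the short lists stand in for the model’s noise predictions:

```python
# Classifier-free guidance: blend the unconditional (no prompt) and
# conditional (with prompt) predictions according to the CFG scale.

def apply_cfg(uncond, cond, cfg_scale):
    """Push the unconditional prediction toward the prompted one."""
    return [u + cfg_scale * (c - u) for u, c in zip(uncond, cond)]

uncond = [0.0, 0.0]   # stand-in for the prediction without the prompt
cond = [1.0, -1.0]    # stand-in for the prediction with the prompt

print(apply_cfg(uncond, cond, 1.0))  # [1.0, -1.0] -- follow the prompt as-is
print(apply_cfg(uncond, cond, 8.0))  # [8.0, -8.0] -- exaggerate its influence
```

This is why very high CFG values can make images look overcooked: the prompt’s influence is amplified well past what the model actually predicted.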
Sampler_name is the algorithm used to alter, or sample, your image. Just as different checkpoints are good for different purposes, there is a multitude of samplers.
The scheduler is, just as it sounds, a kind of schedule that determines how the image should change. When you start creating a new image, everything is really just grainy noise (we never see this stage), and the goal is to filter the graininess out until a clear image emerges.
With each step, the image becomes less grainy, and when all steps are finished, the graininess should be gone. With more steps, less graininess is removed per step, you could basically say.
The sampler is the mathematical method that actually removes the graininess, and it relies on the scheduler to know how much to remove in each step. The image below shows two different schedulers run with the same sampler.

Even though both eventually reach zero, they remove different amounts of graininess per step.
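The idea can be sketched with two toy schedules: both start at the same noise level and end at zero, but remove different amounts per step. The numbers here are made up for illustration and are not ComfyUI’s actual “normal” or “karras” schedules:

```python
# Two toy noise schedules over the same number of steps. A linear
# schedule removes the same amount of graininess each step; an
# exponential one removes a lot early and less and less later.

def linear_schedule(start, steps):
    return [start * (1 - i / steps) for i in range(steps + 1)]

def exponential_schedule(start, steps, decay=0.5):
    return [start * decay**i for i in range(steps)] + [0.0]

lin = linear_schedule(10.0, 5)
exp = exponential_schedule(10.0, 5)

print([round(x, 3) for x in lin])  # [10.0, 8.0, 6.0, 4.0, 2.0, 0.0]
print(exp)                         # [10.0, 5.0, 2.5, 1.25, 0.625, 0.0]
```

Both curves end at zero, but the path there differs, which is exactly what the comparison image above shows.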
Denoise is not entirely easy to explain, but in short it controls how much of the starting noise is replaced: a higher value gives the model more room for creativity, while a lower value keeps more of the starting image, which mostly matters in image-to-image workflows. The best way to understand the difference is to try it out.
VAE Decode And Save Image

VAE stands for variational autoencoder; it encodes and decodes your images between pixel space and the latent space the model works in, and different VAEs can make the result slightly sharper or softer. Save Image saves the image to your output folder and shows a preview of it.
This post ended up much longer than I originally intended, and I’ve still only covered the basics. As I’ve written before, I’ll be making more posts (or at least one more) covering how to build your own workflows and upscale images.
If you liked this guide and other texts I’m writing here, please check out my Patreon for exclusive tips and nodes!
You might also want to subscribe to my weekly newsletter, and get news and tips straight in your inbox.