
Revisiting ComfyUI

ComfyUI was the first UI for Stable Diffusion that I tried way back when, but once I found Automatic1111 and Forge I never saw any point in going back to ComfyUI. My opinion is that Comfy overcomplicates things that are really easy in other UIs. If I can do the same things in a quicker and easier way in another UI, then why make life harder than it already is?

Comfy does have some advantages though: new releases become available almost instantly in Comfy. As an example, Stable Cascade was available for Comfy just a few days after its release, and as far as I know there’s still no decent integration for Automatic1111/Forge.

There are two reasons why I chose to revisit Comfy.

  1. I know so much more about generative AI today than I did back when I first tried it, and maybe that will help me understand Comfy better.

  2. I want to see what the options for text-to-video are in Comfy, as I have had a lot of issues with AnimateDiff on Forge.

ComfyUI Part 2: Comfier than ever?

Since I’ve already covered the installation process, I’m going to skip that in this post.

Once Comfy was installed I got the default workflow up and running.

And of course the Galaxy bottle that everyone generates the first time they start up Comfy.

Nice! At least I know that much is working.

Then I spent a good 20 minutes cursing because I couldn’t remember much else, least of all how and where to get nodes and workflows. Eventually I found ComfyUI Manager with the help of Google, which makes everything a little bit easier at least.
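For future reference (and future me): ComfyUI Manager is itself a custom node, installed by cloning its repository into ComfyUI’s custom_nodes folder. A rough sketch of the install, assuming a default ComfyUI location:

```python
# A minimal sketch of installing ComfyUI Manager by cloning it into
# the custom_nodes folder. Assumes ComfyUI lives in ~/ComfyUI --
# adjust the path to wherever your install actually is.
import subprocess
from pathlib import Path

custom_nodes = Path.home() / "ComfyUI" / "custom_nodes"
subprocess.run(
    ["git", "clone", "https://github.com/ltdrdata/ComfyUI-Manager.git"],
    cwd=custom_nodes,
    check=True,
)
# Restart ComfyUI afterwards and a Manager button appears in the menu.
```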

Browsing workflows at OpenArt I found one I wanted to try out, an 8K upscaler that my computer could supposedly handle. Once the workflow was downloaded and opened in Comfy it looked like this.

I spent another 20 minutes cursing while trying to find and install the missing nodes and models. And another 15 minutes cursing while trying to figure out why it still wasn’t working.

Eventually I got it working and I was able to actually try it out.

I decided to use an image I created a few days earlier to test the upscaler.

The result was not bad at all, so now I have at least one useful thing to use Comfy for.
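As a side note, a Comfy workflow is really just a JSON graph, and looking at it in API format made the upscaler much easier to understand for me. Here is a minimal model-based upscale graph sketched as a Python dict; the node class names are ComfyUI built-ins, but the file names are made-up examples, not the ones from the OpenArt workflow:

```python
# A minimal upscale graph in ComfyUI's API (JSON) format:
# load an image, load an upscale model, run it, save the result.
# References like ["1", 0] mean "output 0 of node 1".
workflow = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "test_image.png"}},          # example file name
    "2": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x-UltraSharp.pth"}},  # example model
    "3": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["2", 0], "image": ["1", 0]}},
    "4": {"class_type": "SaveImage",
          "inputs": {"images": ["3", 0], "filename_prefix": "upscaled"}},
}
```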

Animating with ComfyUI

Next I wanted to create videos from text and/or images and downloaded some workflows from OpenArt.

The first one I downloaded was BIG and had lots of red areas (missing nodes) in it, so I skipped that one. The second one was also big but had fewer red areas.

Despite solving error after error, new ones kept appearing, and I kept solving them too. Eventually the workflow got processed all the way to the last step, and then it kept giving me some bullshit error. That one I gave up trying to solve.

If Comfy could just make the error messages a bit more understandable, it would be a much better UI. Giving a bunch of bullshit text that doesn’t make any sense to normal people is just annoying. Tell me what’s missing or which module/model/node is causing the issue, and preferably give a suggestion on how to fix it!

Eventually I found this workflow, which is basically the same as prompt travel with AnimateDiff in Automatic1111/Forge.
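The idea behind prompt travel is simple: you key different prompts to different frame numbers and the animation interpolates between them. In many AnimateDiff workflows (for example via FizzNodes’ BatchPromptSchedule node) the schedule is just a string like the one below; the prompts here are made-up examples, not the ones from the sample workflow:

```python
# A hypothetical prompt-travel schedule: prompts keyed to frame numbers,
# with the frames in between interpolated.
prompt_schedule = '''
"0":  "a lush green forest, spring, sunlight",
"24": "the same forest in autumn, orange leaves",
"48": "the same forest in winter, covered in snow"
'''
```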

Fortunately I had everything I needed installed from trying to get the other workflows running, so I just had to press the queue button to see what would happen.
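Pressing the queue button just posts the workflow to ComfyUI’s local HTTP API, so the same thing can be scripted. A minimal sketch, assuming Comfy runs on its default address and the workflow was exported with “Save (API Format)”:

```python
# Minimal sketch: queue a workflow against a locally running ComfyUI.
# Assumes the default address (127.0.0.1:8188) and a workflow file
# exported from the UI with "Save (API Format)".
import json
import urllib.request

with open("workflow_api.json") as f:
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # includes a prompt_id for the queued job
```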

It turned out that the pre-defined sample prompt would take a long time to finish, about 70 minutes at first glance. But at least it was working, so I figured I’d just wait it out.

The result after a 70-minute wait was this. Sure, it’s a pretty cool video, or at least the concept is cool. But the quality isn’t very impressive, I think.

But then I noticed that it still wasn’t ready. It still needed to work for another 3.5 hours!

I have no idea what the final result would actually look like, as I refused to wait another 3.5 hours. For comparison I created this video with a method I call poor man’s animation, which I will explain further in my next post.

I’m going to keep using Forge as my main UI for Stable Diffusion, Fooocus for some specific inpainting/outpainting, and I’ll add Comfy for super upscaling and maybe one or two other things that aren’t overly complicated.
