SDXL Images — Forward to a Reset

AI Art with ComfyUI and Stable Diffusion SDXL — Day Zero Basics For an Automatic1111 User

Eric Richards
6 min read · Aug 1, 2023

--

So, with a little bit of effort it is possible to get ComfyUI up and running alongside your existing Automatic1111 install and to push out some images from the new SDXL model.

I’m going to discuss the few foibles I ran into getting it going, should you want to create your first SDXL image on your local machine.

I am running an NVIDIA RTX 2070 video card with 8GB of VRAM. It works with both the base and the refiner. If you have a card below that (my card is in the middle of the pack, with newer and newer cards pushing it down), it might be difficult to set up.

Okay, so what did I do?

Step Zero: Acquire the SDXL Models

You can find SDXL on both HuggingFace and CivitAI. As of this writing, you need to download two models:

  • The base model.
  • The refiner model.

Since SDXL 1.0 was released, there has been a point release for both of these models. I’m sure as time passes there will be additional releases.

You should always download safetensors files. Stay away from CKPT pickle files.

I put these models in my Automatic1111 models directory. We’ll configure ComfyUI to re-use the Automatic1111 files as much as possible.
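For reference, here’s roughly where they landed in my install (your file names may differ slightly depending on which point release you grab):

C:\Users\Eric\repos\Stable-Diffusion\stable-diffusion-webui\models\Stable-diffusion\sd_xl_base_1.0.safetensors
C:\Users\Eric\repos\Stable-Diffusion\stable-diffusion-webui\models\Stable-diffusion\sd_xl_refiner_1.0.safetensors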

Step One: Getting the Source and Environment For ComfyUI.

Since I come from a development background, I’m fine with grabbing source off of GitHub. So, from a Windows Git Bash command line, in the directory that also contains my Automatic1111 enlistment, I cloned the source code:

git clone https://github.com/comfyanonymous/ComfyUI.git

There is also a stand-alone Windows install that some of this applies to but I haven’t used it so… good luck on that path.

Step Two: Share with Automatic1111

Since I’ve downloaded bunches of models and embeddings and such for Automatic1111, I of course want to share those files with ComfyUI vs. copying them over into the ComfyUI directories. So:

  • Copy extra_model_paths.yaml.example to extra_model_paths.yaml
  • Edit extra_model_paths.yaml per the comments in the file.

When I edited this, not only did I have to put in the full Windows path to my Automatic1111 base directory, I also had to change all the forward slashes in the paths to backslashes. Until I did that, no models were recognized other than the hardcoded SD-1.5 model.
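To illustrate, the Automatic1111 section of my extra_model_paths.yaml ended up looking roughly like the sketch below (my paths; take the exact key names from the comments in your copy of the example file rather than from me):

a111:
    base_path: C:\Users\Eric\repos\Stable-Diffusion\stable-diffusion-webui\
    checkpoints: models\Stable-diffusion
    vae: models\VAE
    loras: models\Lora
    embeddings: embeddings
    hypernetworks: models\hypernetworks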

Step Three: Share the Python venv with Automatic1111

So: Python. PyTorch. Xformers. Automatic1111 is really well packaged and designed, and part of that is establishing a Python virtual environment for the libraries that it uses. I want to share this rather than install a pile of Python libraries globally on my desktop. So, before I run ComfyUI I need to enter Automatic1111’s nice virtual environment.

Within your Automatic1111 environment, given that you’ve been running it, there’s a directory called venv. Within this directory is the Scripts directory, containing the scripts that activate the virtual environment, depending on which command line interface you’re using. I’m using cmd.exe on Windows so I’m going to execute Activate.bat — there’s Activate.ps1 if you’re using PowerShell and just Activate if you’re using bash.

So… I execute Activate.bat (path unique to me):

C:\Users\Eric\repos\Stable-Diffusion\stable-diffusion-webui\venv\Scripts\activate.bat

…and I see that my command prompt is now within the venv of Automatic1111. This means I’m sharing all the PyTorch and Xformers and such that Automatic1111 is using. Which, for now at least, are the right versions.
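If you want to double-check that you really are in the shared environment, a quick one-liner (just a sanity-check sketch) prints which PyTorch you picked up and whether CUDA is visible:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"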

Step Four: Start ComfyUI Server

Within this venv, I can now execute:

python main.py
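If you’d rather not type the two steps every time, something like this hypothetical run_comfyui.bat bundles them (the paths reflect my install; adjust them to yours):

@echo off
rem Activate Automatic1111's venv so ComfyUI shares its PyTorch, Xformers, etc.
call C:\Users\Eric\repos\Stable-Diffusion\stable-diffusion-webui\venv\Scripts\activate.bat
rem Launch the ComfyUI server (it listens on http://localhost:8188/ by default).
cd /d C:\Users\Eric\repos\Stable-Diffusion\ComfyUI
python main.py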

Step Five: Initial Default Image Generation

There’s a bit of initial grinding and after that’s done, I can go to http://localhost:8188/

By default, what I see is a flow for an SD1.5 model to generate an image. Select “Queue Prompt” to run this flow.

The generated images can be found in the output directory within the ComfyUI directory.
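On my machine that means files like the one below (the prefix and numbering come from the workflow’s Save Image node, so yours may differ):

C:\Users\Eric\repos\Stable-Diffusion\ComfyUI\output\ComfyUI_00001_.png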

If this has worked, congratulations. You have run your first ComfyUI workflow. Now you probably want to run an SDXL flow. But how?

Step Six: Get a ComfyUI SDXL Flow

The easiest way to get your first SDXL flow is to grab an image created with the kind of flow you want and use the metadata embedded in the image to extract the flow. I love it!

Here’s a good example image and flow: SDXL Examples | ComfyUI_examples (comfyanonymous.github.io)

Save that PNG image locally. Then, drag that image into your ComfyUI web page. You should see the page transformed into a new (rather complex) flow. This is the flow and settings used to generate the image you just downloaded.

Step Seven: Fire Off SDXL! Do it. Do it!

Select that “Queue Prompt” to get your first SDXL 1024x1024 image generated. Now, the first one takes a while. Lots are being loaded and such. But it gets better. Okay, so my first generation took over 10 minutes:

Prompt executed in 619.35 seconds

My second generation was way faster! 30 seconds:

Prompt executed in 28.72 seconds

So there’s some overhead for the first image but after that it’s reasonable.

Step Eight: Make More Than One Image at a Time

There are two ways I see to make multiple images with one press of “Queue Prompt”:

  • Doesn’t work well for me: within the “Empty Latent Image” node, increase the batch size. I don’t have enough memory to do this.
  • Below the “Queue Prompt” button there is an “Extra options” checkbox. Select this and “Batch count” appears. Go ahead and increase this and your prompt will be run that many times.

Step Next: Learn To Write Prompts All Over Again

This is where I am now. Writing prompts for SDXL is going to be a whole new level of trial and error. I discovered very quickly that dumping my SD1.5 mega prompts in just gets a noisy mess. So I’m back to square one, seeing what works for other people and building from there. I don’t feel like I’ll be abandoning SD1.5 anytime soon, at least not until what I make in SDXL far exceeds what I see in SD1.5.

In addition to learning prompts, I have all of ComfyUI to learn. Folks have been at it for over four months so I feel like it will be a mature environment. My first goal is to bring in the randomness I was enjoying with wildcards and see how an overnight SDXL batch goes.

A Few Smatterings of Tips

If you restart the backend after changing configuration, you’ll need to reload the front-end web page, too. I made all sorts of proper fixes to configure the server, but since I hadn’t reloaded the front-end web page, it was still working from its own snapshot of stale data. So, if you change server options that require re-running main.py, reload your web page, too.

Run python main.py --help if you want to see all the command line options.
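A couple of flags looked immediately useful on my 8GB card (double-check them against your own --help output, since the options change over time): --lowvram to shrink the VRAM footprint, --listen to accept connections from other machines, and --port to serve on something other than 8188. For example:

python main.py --lowvram --port 8288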

Example images with embedded workflows? ComfyUI Examples | ComfyUI_examples (comfyanonymous.github.io) — click through for the image / workflow that you’re interested in and remember that you can drag & drop the image into the ComfyUI web page and it will load up the embedded workflow in that image. At least… it should.

One manual just aching for edits, but a start: ComfyUI Community Manual (blenderneko.github.io)

YouTube has quite a few videos about starting out with ComfyUI. I plead to all video content creators: it’s so awesome you’ve got a big huge screen! But if you’re going to screen record it, please boost your font-size so that I can actually see what’s going on!

“Queue Prompt” — there is a queue. You can tweak settings and then select “Queue Prompt” — tweak and iterate more and add more and more to the queue. You don’t have to wait until the current set of image generation is done before adding more work into the queue. That’s an interesting change for me to get used to.

--

Eric Richards

Technorati of Leisure. Ex-software leadership Microsoft (Office, Windows, HoloLens), Intel Supercomputers, and Axon. https://www.instagram.com/rufustheruse.art