Quickly Created AI Images Via Automatic1111 SD-Forge Tool Variant

SD-Forge for Stable Diffusion Is Incredibly Fast at Making AI Art Images

Eric Richards
5 min read · Feb 18, 2024

--

Where I am: for local, at-home Stable Diffusion image generation I’ve used Automatic1111 and ComfyUI. I was forced to switch to ComfyUI when SDXL came out because I just couldn’t run it in A1111 on my RTX 2070 Super GPU card. Eventually, with later A1111 updates, I could, and I switched back.

The good news: I could get back to that powerful prompting with the extensions I love. The bad news: image generation was so much slower than with ComfyUI. Eh, that’s what kicking big jobs off before bed is for.

But still I longed for that speed. ComfyUI has come a long way, right? Maybe it has new, mature nodes that bring the power of A1111’s prompting back; maybe I should reconsider…

Then: *bam* came news of SD-Forge. SD-Forge is a variant of A1111: it has the web UI and prompt box that I’m used to, plus a revamped back-end running behind the scenes that is SO FAST. Like ComfyUI fast. Without all the obscure nodes and workflows that look like incomprehensible plans for a foreign power’s nuclear power plant.

Yes, Lord, yes. Thank you for this speed.

Setting Up SD-Forge

I’ve switched to SD-Forge, and it would take quite a lot to get me to go back to my original A1111. Let me share real quick what I did to switch to SD-Forge with a minimal amount of pain.

The home page, as it is, for SD-Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge

First, I brought SD-Forge down via a git clone. There is an installer package option as well. I use git, so that’s what I did. Install as you like, and note where it gets installed.
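
If you go the git route, the clone itself is one command (assuming the repository still lives at its GitHub home; clone it into whatever parent directory you like):

git clone https://github.com/lllyasviel/stable-diffusion-webui-forge.git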

My next goal was to use my current Automatic1111 home as the golden source for models and textual embeddings. That way I only have to download them once into a single place, even if I’m not using A1111 as much.

Note: the following, where I set up links to the A1111 directories, is one way. Another way that involves less chicanery is to edit the webui-user.bat file, set the A1111_HOME variable to your A1111 home directory, and then add --ckpt-dir and friends to your COMMANDLINE_ARGS variable (a sketch follows). Do this before you invoke webui-user.bat for the first time. This will not get all the directories under /models: you can get checkpoints, LoRAs, and embeddings, but it won’t get upscalers, for instance. If that’s good enough, there you go.
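
Here is roughly what that looks like inside webui-user.bat, patterned on the example in the Forge README; treat the exact flag list as something to double-check against your Forge version, and the A1111 path as a placeholder for your own:

set A1111_HOME=C:\your\path\to\stable-diffusion-webui
set COMMANDLINE_ARGS=%COMMANDLINE_ARGS% ^
 --ckpt-dir %A1111_HOME%\models\Stable-diffusion ^
 --hypernetwork-dir %A1111_HOME%\models\hypernetworks ^
 --embeddings-dir %A1111_HOME%\embeddings ^
 --lora-dir %A1111_HOME%\models\Lora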

I did this via the Windows command line shell. I was running as administrator, though I don’t think that was needed. What I did at a high level was create directory junctions so that the model folder in the Forge installation pointed to the model folder in the Automatic1111 installation.

My steps, should you want to do something similar:

First, I change directory into the home of my old Automatic1111 installation (e.g., ……\stable-diffusion-webui) and set a variable to that directory:

set "SDHOME=%cd%"

Then, I change to the new Forge installation (e.g., ……\stable-diffusion-webui-forge) and set another variable to that directory:

set "FORGEHOME=%cd%"

Under the Forge home, rename the existing models and embeddings directories (or delete them):

ren models models.orig
ren embeddings embeddings.orig

Create directory junctions over to the old Automatic1111 installation:

mklink /J %forgehome%\models %sdhome%\models
mklink /J %forgehome%\embeddings %sdhome%\embeddings

At that point, you should be able to do the initial run of webui-user.bat to create the Python environment unique to Forge. Now, I screwed up the first time I did this: instead of creating these junctions back to A1111, I created symbolic links. Not good enough in this case. The directories didn’t appear to be there for Python (or cmd.exe, as it turns out). So that’s why I went with junctions.
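
For reference, a directory symbolic link is the /D form of mklink rather than the /J junction form above (a sketch of the variant that did not pan out for me); a plain dir on the Forge home shows whether each entry came out as a <JUNCTION> or a <SYMLINKD>:

mklink /D %forgehome%\models %sdhome%\models
dir %forgehome%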

I didn’t junction over the script extensions… I’m concerned about incompatible extensions screwing one install or the other up. So after I installed the dynamic prompts extension, I linked its wildcard directory over to A1111 so I only have one source of wildcards to tend to:

mklink /J %forgehome%\extensions\sd-dynamic-prompts\wildcards %sdhome%\extensions\sd-dynamic-prompts\wildcards
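
For context on how those wildcards get used later (the file below is hypothetical, just to show the shape): a prompt token like __rr_exp/00arrange__ tells dynamic prompts to pull one random line from wildcards\rr_exp\00arrange.txt for each generation:

wildcards\rr_exp\00arrange.txt (hypothetical):
a lone lighthouse on a storm-lashed coast
a cluttered inventor's workshop at midnight
a quiet forest shrine at dawn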

Regarding your output / outputs directory, I did try making a junction for that as well. It worked, kind of. There was a permission issue displaying thumbnails in SD-Forge after generation: all the thumbnails were broken. So I brought up Settings and just explicitly set where I wanted outputs to go instead of using a junction. Here’s an example of how I set the txt2img output directory, given the relative location of SD-Forge to my original A1111:

..\..\stable-diffusion-webui\outputs\txt2img-images
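
If you want the other image types landing back in A1111’s outputs as well, the same relative-path pattern works for the remaining output settings, assuming the stock A1111 folder names:

..\..\stable-diffusion-webui\outputs\img2img-images
..\..\stable-diffusion-webui\outputs\extras-images
..\..\stable-diffusion-webui\outputs\txt2img-grids
..\..\stable-diffusion-webui\outputs\img2img-grids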

Now, we’re ready to cook.

How Much Faster?

The UX is a bit different from what I’m used to, but mostly the same. At first some of the LoRAs weren’t showing up, but after a few UI restarts and some poking at the settings they all appeared. Let’s do a wildcard run with some of my organized LoRA friends.

The prompt:

__rr_exp/00arrange__, <lora:Jean-Baptiste Monge Style:0.9> Jean-Baptiste Monge Style page, <lora:InkArtXL_1.2:0.4> ink art

The negative:

((floating sword, turned back, face markings, jpeg, close-up, up close, head shot, perfect skin, long neck, mutated, deviantart, cartoon, 3D, poor quality, amateur, portrait, pose, boring, deformed, polydactyl, extra fingers, reversed hands, six fingers)) negativeXL_D

For the X/Y/Z plot script I make sure “Keep -1 for seeds” is checked and then use the following Prompt S/R lists for X and for Y (see the note after the lists for how the substitution plays out):

X:

<lora:Jean-Baptiste Monge Style:0.9> Jean-Baptiste Monge Style page, <lora:John Harris Style:0.9> John Harris Style page, <lora:Pascal Campion Style:0.9> Pascal Campion Style, <lora:Stephen Gammell Style:0.9> Stephen Gammell Style, <lora:Dave McKean Style:0.9> Dave McKean Style page, <lora:Euan Uglow Style:0.9> Euan Uglow Style page, <lora:Maxfield Parrish Style:0.9> Maxfield Parrish Style page

Y:

<lora:InkArtXL_1.2:0.4> ink art , <lora:Studio Ghibli Style:0.4> Studio Ghibli Style, <lora:sdxl_lora_rustako:0.4>, <lora:steampunk_xl_v2:0.4> Steampunk, <lora:ParchartXL-1.5:0.4> ink illustration, <lora:Mike Mignola Style:0.4> Mike Mignola Style page, <lora:linquivera:0.4> linquivera, <lora:Fractal_Vines:0.4> fractalvines, <lora:HKStyle:0.4> HKStyle
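
If Prompt S/R is new to you: the first entry in each list is the literal text the script searches for in the prompt, and each later entry gets swapped in for it, so this run produces one image for every artist-style LoRA crossed with every texture/medium LoRA. For example, the John Harris / Studio Ghibli cell effectively runs this prompt:

__rr_exp/00arrange__, <lora:John Harris Style:0.9> John Harris Style page, <lora:Studio Ghibli Style:0.4> Studio Ghibli Style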

I like the Restart sampler and I’m upping it to 47 steps vs. the 31 I was doing.

And my old card is now shooting out images way, way faster than it did under A1111. It’s consistent, too: under A1111 a large batch had wide variety, with some images taking a minute and others 15 minutes. Here, generation is easily five times faster, and given the consistency and the increased step count I’m using, it’s probably closer to 10 times faster. For me.

Also, img2img upscaling by 2x is super-fast for me now. Like I can just do a little work on the side while the upscaling is happening and it’s done.

And there is a lot more to SD-Forge than just being an A1111 face on a fast back-end image generator. I haven’t delved into any of that yet.

I’m sure there are some negatives but I haven’t hit them yet.

--

Eric Richards

Technorati of Leisure. Ex-software leadership Microsoft (Office, Windows, HoloLens), Intel Supercomputers, and Axon. https://www.instagram.com/rufustheruse.art