Pre CFG nodes

A set of nodes to prepare the noise predictions before the CFG function

All can be chained and repeated within the same workflow!

They are designed to be highly compatible with most other nodes.

The order in which they are chained matters; the best chaining order depends on your needs and is therefore to be determined by your own preferences.

All are to be used like any model patching node, right after the model loader.

Nodes:

Other nodes

There are now too many nodes for me to add a screenshot and detailed notes for each, but it would be a shame not to describe them:

Pre CFG automatic scale

image

mode:

Support empty uncond:

If you use the built-in node named ConditioningSetTimestepRange, you can stop generating a negative prediction earlier: route your negative conditioning through it and set it like this:

image

This roughly doubles your generation speed for the steps where there is no negative.

The only issue is that the CFG function will then weigh your positive prediction, multiplied by your CFG scale, against nothing, and you will get a black image.

"support_empty_uncond" therefore divides your positive prediction by your CFG scale and avoids this issue.

Doing this combination is similar to the "boost" feature of my original automatic CFG node. It can also let you avoid artifacts if you want to use the strict scaling.
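The arithmetic behind this can be sketched as follows. This is a simplified model of the CFG combination on toy arrays, not the node's actual code:

```python
import numpy as np

def cfg(cond, uncond, scale):
    # Standard classifier-free guidance combination.
    return uncond + scale * (cond - uncond)

cond = np.array([0.2, -0.4, 0.6])
scale = 8.0

# With an empty uncond (all zeros) the output is the positive
# prediction multiplied by the full CFG scale, blowing up the values:
blown_up = cfg(cond, np.zeros_like(cond), scale)
assert np.allclose(blown_up, scale * cond)

# Dividing the positive prediction by the scale beforehand cancels
# that multiplication, which is the idea behind support_empty_uncond:
rescaled = cfg(cond / scale, np.zeros_like(cond), scale)
assert np.allclose(rescaled, cond)
```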

If you want to use this option in a chained setup using this node multiple times, I recommend enabling it only once, on the last node in the chain.

Pre CFG perp-neg

image

Applies the already known perp-neg logic.

Code taken and adapted from ComfyAnon's implementation.

The context length (added after the screenshot of the node was taken) can be set to a higher value if you are using a TensorRT engine that requires a higher context length.

For more details, check my related node "Conditioning crop or fill", where I explain this a bit more.
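At its core, perp-neg is a vector projection: only the component of the negative guidance that is perpendicular to the positive guidance is kept. A minimal sketch of that projection on toy vectors (the actual implementation operates on latent tensors), assuming the hypothetical names `perp_component`, `pos_delta`, and `neg_delta`:

```python
import numpy as np

def perp_component(neg_delta, pos_delta):
    # Remove the projection of neg_delta onto pos_delta, keeping
    # only the component of neg_delta perpendicular to pos_delta.
    proj = (np.vdot(neg_delta, pos_delta)
            / np.vdot(pos_delta, pos_delta)) * pos_delta
    return neg_delta - proj

pos = np.array([1.0, 0.0])
neg = np.array([1.0, 1.0])
perp = perp_component(neg, pos)

# The result is orthogonal to the positive direction:
assert np.isclose(np.vdot(perp, pos), 0.0)
```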

Pre CFG sharpening (experimental)

image

Subtracts part of the previous step's prediction from the current step. This tends to make the images sharper and less saturated.

A negative value can be set.
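As a rough sketch of the idea, assuming a hypothetical `strength` parameter and simple subtraction (the node's real formula may differ):

```python
import numpy as np

def sharpen(current_pred, previous_pred, strength=0.1):
    # Subtract a fraction of the previous step's prediction from the
    # current one; a negative strength does the opposite.
    return current_pred - strength * previous_pred

cur = np.array([0.4, -0.2, 0.8])
prev = np.array([0.3, -0.1, 0.7])
out = sharpen(cur, prev, strength=0.5)
```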

Pre CFG exponentiation (experimental)

image

A value lower than one will simplify the end result and enhance the saturation / contrasts.

A value higher than one will do the opposite and if pushed too far will most likely make a mess.
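One common way to exponentiate a latent while keeping its signs intact is a sign-preserving power; this is my assumption of the underlying operation, not the node's confirmed code:

```python
import numpy as np

def exponentiate(pred, exponent):
    # Sign-preserving power: reshape magnitudes without flipping signs.
    # exponent < 1 flattens the value distribution (simpler, more
    # saturated result); exponent > 1 exaggerates it.
    return np.sign(pred) * np.abs(pred) ** exponent

x = np.array([-0.5, 0.25, 2.0])
y = exponentiate(x, 0.5)
assert np.all(np.sign(y) == np.sign(x))  # signs are preserved
```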

Gradient scaling:

Named like this because I initially wanted to test what would happen if, instead of a single CFG scale, I used a tensor shaped like the latent space with a gradual variation. So, not the kind of gradient used for backpropagation. Then I thought: why not try masks instead? And what if each value could participate so that the image matches an input image as closely as possible?

The result is an arithmetic scaling method which does not noticeably slow down the sampling while also scaling the intensity of the values like an "automatic cfg".
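The core idea of a latent-shaped scale tensor can be sketched as element-wise CFG, where every latent value gets its own guidance scale (a toy illustration, not the node's actual arithmetic):

```python
import numpy as np

# A 4x4 "latent" and a left-to-right gradient of per-value scales.
cond = np.ones((4, 4))
uncond = np.zeros((4, 4))
scales = np.repeat(np.linspace(4.0, 10.0, 4)[None, :], 4, axis=0)

# Element-wise CFG: the scalar scale is replaced by a tensor,
# so guidance strength varies across the latent.
out = uncond + scales * (cond - uncond)
assert out.shape == (4, 4)
```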

So here it is:

image

So, simply put:

Potential uses:

General light direction/composition influence (all same seed):

combined_image

Vignetting:

combined_v_images

Color influence:

combined_rgb_image

Pattern matching, here with a black and white spiral:

00347UI_00001_

A blue one with a lower scale:

00297UI_00001_

As you can notice, the details are pretty well done in general. Using an input latent as a guide also seems to help with the overall quality. I have only used a "freshly" encoded latent; I haven't tried looping back a latent resulting directly from sampling.

Text is a bit harder to enforce and may require more tweaking with the scales:

00133UI_00001_

Since it takes advantage of the "wiggle room" left by the CFG scale to make the generation match an image, it can hardly contradict what is being generated.

Here is an example using a black and red spiral. Since the base description is about black and white, I could only enforce the red by using destructive scales:

combined_side_by_side_image

Side use:

Note: