
ComfyUI Chapter 3: Workflow Analysis

mimicpc
08/06/2024


Today's session aims to help all readers become familiar with some basic applications of ComfyUI, including Hires fix, inpainting, embeddings, Lora, and ControlNet. These will greatly improve the efficiency of image generation in ComfyUI.


Hi-ResFix Workflow

Save the example image provided by the developer and drag it into ComfyUI to get the Hires fix (Latent) workflow.

https://comfyanonymous.github.io/ComfyUI_examples/2_pass_txt2img/

Let's analyse the workflow. The txt2img part is the same as the classic workflow: one Load Checkpoint node, one positive prompt node, one negative prompt node, and one KSampler. The Empty Latent Image node decides the size of the generated image.

Click "Queue Prompt" to send the prompt to the first KSampler, which generates a low-resolution image. The latent output is then passed to the Upscale Latent node, and from there to the second KSampler for a second sampling pass at a lower denoise value. The final step is VAE Decode, after which the image is ready to be saved.
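The same two-pass graph can be sketched in ComfyUI's API ("prompt") JSON format, where each node id maps to a class type and its inputs, and links are `[node_id, output_index]` pairs. This is a minimal sketch: the checkpoint filename, prompts, sizes, and node ids are placeholder assumptions, and exact input names can vary between ComfyUI versions.

```python
import json

# Two-pass "Hires fix" graph in ComfyUI API format (placeholder values).
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_v1-5.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",   # positive prompt
          "inputs": {"text": "a scenic mountain lake", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",   # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    # First pass: full-denoise sample at low resolution.
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    # Upscale the latent, then resample it at a lower denoise value.
    "6": {"class_type": "LatentUpscale",
          "inputs": {"samples": ["5", 0], "upscale_method": "nearest-exact",
                     "width": 1024, "height": 1024, "crop": "disabled"}},
    "7": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["6", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.5}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["7", 0], "vae": ["1", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "hiresfix"}},
}
print(len(workflow), "nodes")
print(json.dumps(workflow["6"], indent=2))
```

Submitting such a graph is a POST of `{"prompt": workflow}` to the `/prompt` endpoint of a running ComfyUI server; you can also export any workflow in this format via "Save (API Format)".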


An ESRGAN upscaler is another upscaling option. Save the non-latent upscaling workflow image and drag it into ComfyUI.

https://comfyanonymous.github.io/ComfyUI_examples/2_pass_txt2img/

The structure of this workflow is more complicated than the latent Hires fix workflow. First, the initial KSampler's latent output is decoded by the VAE into an image. This image is sent to the Upscale Image (using Model) node, which uses a model loaded by Load Upscale Model, one designed specifically for image upscaling, such as ESRGAN. After that, the image goes through another Upscale Image node to adjust it to the final size. Finally, the image is encoded back into latent space by VAE Encode and sent to the last KSampler. Once this pass is complete and the result is decoded, the final image is ready.
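The middle of that chain can be sketched as a fragment in ComfyUI's API JSON format. This assumes nodes "1" to "5" (checkpoint, prompts, first KSampler, and a VAE Decode of its output) already exist as in a standard txt2img graph; the ESRGAN model filename, node ids, and sizes are placeholders.

```python
# Fragment of the non-latent (model-based) upscale chain, ComfyUI API format.
upscale_chain = {
    "10": {"class_type": "UpscaleModelLoader",
           "inputs": {"model_name": "RealESRGAN_x4plus.pth"}},
    "11": {"class_type": "ImageUpscaleWithModel",  # upscale with the ESRGAN model
           "inputs": {"upscale_model": ["10", 0], "image": ["5", 0]}},
    "12": {"class_type": "ImageScale",             # resize to the final target size
           "inputs": {"image": ["11", 0], "upscale_method": "bilinear",
                      "width": 1024, "height": 1024, "crop": "disabled"}},
    "13": {"class_type": "VAEEncode",              # back into latent space
           "inputs": {"pixels": ["12", 0], "vae": ["1", 2]}},
    "14": {"class_type": "KSampler",               # second, low-denoise pass
           "inputs": {"model": ["1", 0], "positive": ["2", 0],
                      "negative": ["3", 0], "latent_image": ["13", 0],
                      "seed": 42, "steps": 20, "cfg": 7.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "denoise": 0.5}},
}
for node_id, node in upscale_chain.items():
    print(node_id, node["class_type"])
```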


Inpainting Workflow

Save the example image provided by the developer and drag it into ComfyUI to get the inpainting workflow.

https://comfyanonymous.github.io/ComfyUI_examples/inpaint/

The inpainting workflow is straightforward. First, upload an image using the Load Image node. Then, use a prompt to describe the changes you want to make, and the image will be ready for inpainting. However, you might wonder where to apply the mask on the image. The mask function in ComfyUI is somewhat hidden. To access it, right-click on the uploaded image and select "Open in MaskEditor". This opens a separate interface where you can draw the mask. Remember to click "Save to node" once you're done. It's recommended to set the denoise value in the KSampler to 0.8. Finally, click "Queue Prompt" to see the final image.
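In ComfyUI's API JSON format, this core of the inpainting graph can be sketched as follows. The Load Image node's second output is the mask (the one drawn in the Mask Editor), and it feeds the VAE Encode (for Inpainting) node. Node ids, the image filename, and the surrounding checkpoint/prompt nodes ("1" to "3") are placeholder assumptions.

```python
# Core of an inpainting graph in ComfyUI API format (placeholder values).
inpaint = {
    "10": {"class_type": "LoadImage",           # outputs: 0 = image, 1 = mask
           "inputs": {"image": "photo.png"}},
    "11": {"class_type": "VAEEncodeForInpaint",
           "inputs": {"pixels": ["10", 0], "mask": ["10", 1],
                      "vae": ["1", 2], "grow_mask_by": 6}},
    "12": {"class_type": "KSampler",
           "inputs": {"model": ["1", 0], "positive": ["2", 0],
                      "negative": ["3", 0], "latent_image": ["11", 0],
                      "seed": 42, "steps": 20, "cfg": 7.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "denoise": 0.8}},   # 0.8 as recommended above
}
print(inpaint["12"]["inputs"]["denoise"])
```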



To make a lighter repaint that stays close to the original image, you need to slightly adjust the inpainting workflow. First, delete the VAE Encode (for Inpainting) node. Next, add a node named Set Latent Noise Mask; double-click on an empty space to open the search function and find it. After adding the Set Latent Noise Mask node, reconnect the nodes, inserting a plain VAE Encode node in front of it. Once everything is connected, click "Queue Prompt" to generate the final image.
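The rewired portion can be sketched in ComfyUI's API JSON format like this: a plain VAE Encode keeps the original image content in the latent, and Set Latent Noise Mask restricts where noise (and thus change) is applied. Node ids, filenames, and the denoise value are placeholder assumptions.

```python
# Repaint variant: VAE Encode + Set Latent Noise Mask (ComfyUI API format).
repaint = {
    "10": {"class_type": "LoadImage",            # outputs: 0 = image, 1 = mask
           "inputs": {"image": "photo.png"}},
    "11": {"class_type": "VAEEncode",            # plain encode keeps original content
           "inputs": {"pixels": ["10", 0], "vae": ["1", 2]}},
    "12": {"class_type": "SetLatentNoiseMask",   # noise only inside the mask
           "inputs": {"samples": ["11", 0], "mask": ["10", 1]}},
    "13": {"class_type": "KSampler",
           "inputs": {"model": ["1", 0], "positive": ["2", 0],
                      "negative": ["3", 0], "latent_image": ["12", 0],
                      "seed": 42, "steps": 20, "cfg": 7.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "denoise": 0.6}},
}
print(repaint["12"]["class_type"])
```

Because the unmasked region is never re-noised, the result stays much closer to the original than the full inpainting encode.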


Embeddings & Lora Workflow

Using embeddings in ComfyUI is straightforward: simply type "embedding:" followed by the embedding's filename in the prompt node, and it will be applied automatically.

For using Lora in ComfyUI, there's a Lora loader available. However, managing multiple Loras can get messy. To handle various Loras efficiently, it's crucial to use custom nodes. You can search for a stack solution that suits your needs. As an example, let's use the Lora stacker in the Efficiency Nodes Pack. Drag a line from lora_stack and click on Lora stacker. You can add as many Loras as you need by adjusting the lora_count.
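The Lora Stacker comes from the Efficiency Nodes custom pack, so its exact inputs are not shown here. With only built-in nodes, the equivalent of stacking is chaining LoraLoader nodes, each consuming the previous one's model and clip outputs. A sketch in ComfyUI's API JSON format, with placeholder Lora filenames and an assumed Checkpoint Loader at node "1":

```python
# Stacking two Loras by chaining built-in LoraLoader nodes (ComfyUI API format).
loras = {
    "20": {"class_type": "LoraLoader",
           "inputs": {"model": ["1", 0], "clip": ["1", 1],
                      "lora_name": "style_a.safetensors",
                      "strength_model": 0.8, "strength_clip": 0.8}},
    "21": {"class_type": "LoraLoader",   # second Lora consumes the first's outputs
           "inputs": {"model": ["20", 0], "clip": ["20", 1],
                      "lora_name": "detail_b.safetensors",
                      "strength_model": 0.5, "strength_clip": 0.5}},
}
# Downstream nodes (CLIP Text Encode, KSampler) would read from node "21".
print(loras["21"]["inputs"]["model"])
```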


ControlNet Workflow

Save the example image provided by the developer and drag it into ComfyUI to get the ControlNet workflow.

https://comfyanonymous.github.io/ComfyUI_examples/controlnet/

The Apply ControlNet node is the core of the structure; it is responsible for receiving the preprocessed image information. The strength parameter controls the weight of the ControlNet.

Users need to supply a preprocessed image in ComfyUI. To do this, first download and install a preprocessor custom node pack. After installation, search for "openpose" to find the openpose preprocessor node. Connect it between the Load Image node and the Apply ControlNet node. Once connected, the workflow will be ready to use.
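The Apply ControlNet hookup can be sketched in ComfyUI's API JSON format. Here node "31" stands in for an already-preprocessed pose image loaded from disk (preprocessor custom nodes vary, so none is shown); the ControlNet filename, node ids, and the assumed positive prompt node "2" are placeholders.

```python
# Single ControlNet applied to the positive conditioning (ComfyUI API format).
controlnet = {
    "30": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "control_openpose.safetensors"}},
    "31": {"class_type": "LoadImage",            # an already-preprocessed pose map
           "inputs": {"image": "pose_map.png"}},
    "32": {"class_type": "ControlNetApply",
           "inputs": {"conditioning": ["2", 0],  # positive prompt node
                      "control_net": ["30", 0],
                      "image": ["31", 0],
                      "strength": 0.9}},         # weight of the ControlNet
}
# The KSampler's positive input would then read from node "32".
print(controlnet["32"]["inputs"]["strength"])
```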



To apply multiple ControlNets, follow these steps:

  1. Create as many Apply ControlNet nodes as you need.
  2. Search for and add the appropriate preprocessors.
  3. Create ControlNet Model Loader nodes based on the number of preprocessors.
  4. Ensure each Apply ControlNet node is paired with a preprocessor and a model loader.
  5. Connect each Apply ControlNet node to the prompt node in sequence.
  6. Finally, connect the prompt node to the K Sampler.
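The chaining in steps 5 and 6 can be sketched in ComfyUI's API JSON format: each Apply ControlNet node takes the previous node's conditioning output, and the last one feeds the KSampler. ControlNet filenames and node ids are placeholders; nodes "40" and "41" stand in for two assumed preprocessed-image loaders, and "2" for the positive prompt node.

```python
# Two ControlNets chained in sequence (ComfyUI API format, placeholder values).
multi_cn = {
    "30": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "control_openpose.safetensors"}},
    "31": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "control_depth.safetensors"}},
    "32": {"class_type": "ControlNetApply",   # first ControlNet reads the prompt
           "inputs": {"conditioning": ["2", 0], "control_net": ["30", 0],
                      "image": ["40", 0], "strength": 0.9}},
    "33": {"class_type": "ControlNetApply",   # second chains off the first
           "inputs": {"conditioning": ["32", 0], "control_net": ["31", 0],
                      "image": ["41", 0], "strength": 0.7}},
}
# The KSampler's positive input would read from node "33", the end of the chain.
print(multi_cn["33"]["inputs"]["conditioning"])
```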


Now that you've learned the basics of using ComfyUI, join us to explore more about Stable Diffusion.


Share link: https://home.mimicpc.com/app-image-share?key=aeca81c92a57488bbf7192b5fed0df7d

Share link: https://home.mimicpc.com/app-image-share?key=c62d33bf437f497b8e6ddcbec75c90cf

Share link: https://home.mimicpc.com/app-image-share?key=b272718780054c2db9454e4401a579a3
