The Concept of Models
If you are a designer, artist, or work in another creative role that requires generating large numbers of images in different styles, you should learn how to add models to your AI workflow.
Have you ever seen amazing AI-generated images on Civitai, copied the same prompts to try on your computer, and ended up with completely different results? Don't worry: it's not your fault, the prompts' fault, or the AI's fault. The issue lies with the model. The model determines the style of the image, so different models produce different images even from the same prompt, and using various models can yield a wide range of images from the same starting point.
Example prompt: 1girl, solo, looking at viewer, blue eyes, animal ears, jewelry, closed mouth, white shirt, upper body, pink hair, flower, white hair, multicolored hair, earrings, outdoors, sky, day, cloud, cat ears, blue sky, lips, animal ear fluff, petals, animal, sunlight, cat, extra ears, sun, stud earrings, black cat
Learn about Models
Models are normally stored in the folder named "Models" inside the "StableDiffusion" folder. When you download new models, simply copy them into this folder and Stable Diffusion will be able to use them.
These large models are called "checkpoints," typically distributed in .ckpt or .safetensors format and ranging from 1 to 7 GB in size. A checkpoint stores the trained weights the AI draws on to generate images. Training a large model requires significant computing power, so, much as players save their progress after playing a game for a long time, a checkpoint is saved when the computation reaches a crucial point, allowing training to continue from there. This is the origin of the name. Most models are continuously trained and refined starting from these checkpoints.
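Because checkpoints are just large files with recognizable extensions, you can inventory them with a few lines of Python. This is a minimal sketch; the `MODELS_DIR` path is a placeholder you would change to your own installation's models folder.

```python
from pathlib import Path

# Placeholder path; point this at your own Stable Diffusion models folder.
MODELS_DIR = Path("StableDiffusion/Models")

def list_checkpoints(models_dir):
    """Return (filename, size_in_GB) for each checkpoint file found."""
    checkpoints = []
    for path in sorted(Path(models_dir).glob("*")):
        # Checkpoints are distributed as .ckpt or .safetensors files.
        if path.suffix in (".ckpt", ".safetensors"):
            size_gb = path.stat().st_size / 1024**3
            checkpoints.append((path.name, round(size_gb, 2)))
    return checkpoints

if __name__ == "__main__":
    for name, size in list_checkpoints(MODELS_DIR):
        print(f"{name}: {size} GB")
```

Running this before launching the WebUI is a quick way to confirm that a newly downloaded model actually landed in the right folder.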
As previously mentioned, smaller models like Lora, embeddings, and hypernetworks will be discussed in more detail later. There's also a section called "VAE" under "Models." VAE stands for Variational Autoencoder, and in practice it acts much like a color filter. Nowadays, creators typically bake the VAE into the model itself. If not, users need to configure a VAE correctly; otherwise, the generated images will look washed out and grayish.
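One common convention (used by the AUTOMATIC1111 WebUI; treated as an assumption here) is that a standalone VAE file placed next to a checkpoint with a matching base name, e.g. `model.vae.pt` beside `model.ckpt`, gets picked up automatically. A small sketch of that lookup:

```python
from pathlib import Path

def find_matching_vae(checkpoint_path):
    """Look for a VAE file sharing the checkpoint's base name.

    Assumed convention: 'model.ckpt' pairs with 'model.vae.pt' or
    'model.vae.safetensors' in the same folder.
    """
    ckpt = Path(checkpoint_path)
    for ext in (".vae.pt", ".vae.safetensors"):
        candidate = ckpt.with_name(ckpt.stem + ext)
        if candidate.exists():
            return candidate
    # No paired VAE file; the model may have one baked in instead.
    return None
```

If this returns `None` and your images come out gray, the model likely needs a VAE selected manually in the WebUI settings.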
Methods to Download Models
Stable Diffusion has officially released open-source models like 1.4 and 2.0. However, images generated with these models often lack detail and exhibit a single style. As a result, most users prefer personal models, which are models trained and published by individuals.
There are two main websites for downloading these models: Hugging Face and Civitai.
- Hugging Face (https://huggingface.co/models): Hugging Face is a professional website focused on deep learning and AI. It covers a wide range of AI-related topics, not just AI-generated images, making it potentially challenging for beginners to navigate.
- Civitai (https://civitai.com/): Civitai is the most popular AI model-sharing website. In addition to models, it features a wealth of impressive example works, making it a favorite among users.
How to Use a Model
You can directly select an image you like on Civitai. When you click on the image, the model and Lora used will be displayed on the right-hand side, along with the prompts. Simply download the model and Lora, copy the prompts, and you're ready to try it yourself.
Classification and Recommendation of Models
There are three main types of models:
1. **Anime Style (2D / Manga Style)**: These models are designed to create anime- and manga-style characters. Recommended models include AbyssOrangeMix, Counterfeit, Anything, and Dreamlike Diffusion.
2. **Realistic Style**: These models produce images that closely resemble real life. Recommended models for this style are Deliberate, Realistic Vision, and LOFI.
3. **2.5D Style**: These models fall between anime and realistic styles, resembling the visuals found in games and 3D animation. Recommended models include NeverEnding Dream and Protogen.
How to Download Auto1111
For Users Familiar with Python and Git:
- Download and Install via Git:
- If you have a Python environment set up and are comfortable with Git, clone the AUTOMATIC1111 Stable Diffusion WebUI repository from GitHub using the following command:
- `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`
- Navigate into the cloned directory (`stable-diffusion-webui`) and proceed with installation as per the provided documentation.
For Beginners or Those Using an Integration Pack:
- Download and Install Using an Integration Pack:
- Visit the official Stable Diffusion website or an integration pack provider such as AUTOMATIC1111: https://github.com/AUTOMATIC1111/stable-diffusion-webui
- Download the integration package (typically a zip file).
- Use decompression software such as Bandizip or WinRAR to extract the downloaded zip file.
- Alternatively, use a cloud service vendor, which generally provides Stable Diffusion pre-built and set up.
- Setting Up Stable Diffusion:
- Create a new folder on your computer for Stable Diffusion. Ensure the folder path contains only English characters and has sufficient local disk storage.
- Unzip the contents of the downloaded zip file into this new folder.
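The folder requirements above (an English-only path and enough disk space) can be verified programmatically. This is an illustrative sketch; the 10 GB default threshold is an arbitrary assumption, not an official requirement.

```python
import shutil
from pathlib import Path

def check_install_folder(folder, min_free_gb=10):
    """Check the install path is ASCII-only and the disk has free space.

    The 10 GB default is an illustrative threshold, not an official one.
    Returns a list of problems; an empty list means the folder looks fine.
    """
    path = Path(folder).resolve()
    problems = []
    # Non-English characters in the path can break the launch scripts.
    if not str(path).isascii():
        problems.append("path contains non-English (non-ASCII) characters")
    # Check free space on the drive holding the folder.
    free_gb = shutil.disk_usage(path.anchor).free / 1024**3
    if free_gb < min_free_gb:
        problems.append(f"only {free_gb:.1f} GB free on disk")
    return problems
```

Running this once before unzipping can save a confusing failed first launch.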
- Running Stable Diffusion:
- Locate and double-click the `run.bat` file within the Stable Diffusion folder.
- Wait for the application to load; this may take a moment.
- The Stable Diffusion WebUI homepage should automatically open in your default web browser.
- Operating Stable Diffusion:
- Keep the command-line interface (CLI) window open while using the Stable Diffusion WebUI.
- Interact with the WebUI through your browser to generate AI images or perform other tasks.
- Close the command-line window only when you're finished working in the browser.
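If the browser tab doesn't open, you can check whether the WebUI's local server is actually running. The sketch below assumes AUTOMATIC1111's default address of 127.0.0.1 on port 7860; your setup may use a different port.

```python
import socket

def webui_is_up(host="127.0.0.1", port=7860, timeout=1.0):
    """Return True if something is listening on the WebUI's address.

    Port 7860 is AUTOMATIC1111's default; adjust if yours differs.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused or timed out: the server isn't reachable.
        return False
```

If this returns `False` while the command-line window is still printing startup messages, the server simply hasn't finished loading yet; give it a moment and retry.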