# About ControlNet-LLLite

__This is an extremely experimental implementation and may change significantly in the future.__

The Japanese version is available [here](./train_lllite_README-ja.md).

## Overview

ControlNet-LLLite is a lightweight version of [ControlNet](https://github.com/lllyasviel/ControlNet). It is a "LoRA Like Lite" that is inspired by LoRA and has a lightweight structure. Currently, only SDXL is supported.

## Sample weight file and inference

A sample weight file is available here: https://huggingface.co/kohya-ss/controlnet-lllite

A custom node for ComfyUI is available: https://github.com/kohya-ss/ControlNet-LLLite-ComfyUI

Sample images are at the end of this page.

## Model structure

A single LLLite module consists of a conditioning image embedding that maps a conditioning image to a latent space, and a small network with a structure similar to LoRA. The LLLite module is added to the U-Net's Linear and Conv layers in the same way as LoRA. Please refer to the source code for details.

Due to the limitations of the inference environment, only CrossAttention (attn1 q/k/v, attn2 q) is currently added.

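To make the structure concrete, the following is a rough, illustrative sketch of the idea in PyTorch. It is not the actual implementation: the class and argument names are hypothetical, and the real code handles shapes, module placement, and the conditioning image embedding differently. See the source code for details.

```python
# Illustrative sketch only; names and details are hypothetical, not the real code.
import torch
import torch.nn as nn

class LLLiteModuleSketch(nn.Module):
    def __init__(self, org_linear: nn.Linear, cond_emb_dim: int = 32, rank: int = 64):
        super().__init__()
        self.org_linear = org_linear  # original (frozen) U-Net Linear
        # small LoRA-like network that also sees the conditioning image embedding
        self.down = nn.Linear(org_linear.in_features + cond_emb_dim, rank, bias=False)
        self.up = nn.Linear(rank, org_linear.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # starts as a no-op, like LoRA

    def forward(self, x: torch.Tensor, cond_emb: torch.Tensor) -> torch.Tensor:
        # cond_emb: conditioning image mapped to the latent space and broadcast
        # to the token dimension of x (shape handling omitted for brevity)
        delta = self.up(self.down(torch.cat([x, cond_emb], dim=-1)))
        return self.org_linear(x) + delta
```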

## Model training

### Preparing the dataset

In addition to the normal DreamBooth method dataset, store the conditioning images in the directory specified by `conditioning_data_dir`. Each conditioning image must have the same basename as its training image. Conditioning images are automatically resized to the same size as the training images and do not require caption files.

(The finetuning method dataset is not supported.)

```toml
[[datasets.subsets]]
image_dir = "path/to/image/dir"
caption_extension = ".txt"
conditioning_data_dir = "path/to/conditioning/image/dir"
```

At the moment, random_crop cannot be used.

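Because training and conditioning images are paired by basename, a quick check such as the following can catch missing pairs before training. This is an optional, illustrative snippet; the paths are placeholders and the extension patterns may need adjusting to your data.

```python
# Optional sanity check (illustrative): every training image should have a
# conditioning image with the same basename. Paths are placeholders.
import glob
import os

IMAGE_DIR = "path/to/image/dir"
COND_DIR = "path/to/conditioning/image/dir"

for img_file in glob.glob(os.path.join(IMAGE_DIR, "*.png")):
    base = os.path.splitext(os.path.basename(img_file))[0]
    if not glob.glob(os.path.join(COND_DIR, base + ".*")):
        print(f"Missing conditioning image for: {img_file}")
```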

For the training data, it is easiest to use a synthetic dataset: images generated by the original model serve as training images, and processed versions of them serve as conditioning images (the quality of such a dataset may be an issue). Specific methods for synthesizing a dataset are described below.

Note that if you use images whose art style differs from that of the original model, the model has to learn not only the control but also the art style. ControlNet-LLLite has a small capacity, so it is not suitable for learning art styles. In such cases, increase the number of dimensions as described below.

### Training

Run `sdxl_train_control_net_lllite.py`. You can specify the dimension of the conditioning image embedding with `--cond_emb_dim` and the rank of the LoRA-like module with `--network_dim`. Other options are the same as for `sdxl_train_network.py`, but `--network_module` is not required.

Since a large amount of memory is used during training, please enable memory-saving options such as caching and gradient checkpointing. It is also effective to use BFloat16 with the `--full_bf16` option (requires an RTX 30 series or later GPU). Training has been confirmed to work with 24GB of VRAM.

For the Canny sample, the dimension of the conditioning image embedding is 32 and the rank of the LoRA-like module is 64. Adjust these according to the features of the conditioning image you are targeting.

(Canny is probably a relatively difficult case. For depth and similar conditions, reducing the values to about half may work better.)

The following is an example of a .toml configuration.

```toml
pretrained_model_name_or_path = "/path/to/model_trained_on.safetensors"
max_train_epochs = 12
max_data_loader_n_workers = 4
persistent_data_loader_workers = true
seed = 42
gradient_checkpointing = true
mixed_precision = "bf16"
save_precision = "bf16"
full_bf16 = true
optimizer_type = "adamw8bit"
learning_rate = 2e-4
xformers = true
output_dir = "/path/to/output/dir"
output_name = "output_name"
save_every_n_epochs = 1
save_model_as = "safetensors"
vae_batch_size = 4
cache_latents = true
cache_latents_to_disk = true
cache_text_encoder_outputs = true
cache_text_encoder_outputs_to_disk = true
network_dim = 64
cond_emb_dim = 32
dataset_config = "/path/to/dataset.toml"
```

### Inference

To generate images with a script, run `sdxl_gen_img.py`. You can specify the LLLite model file with `--control_net_lllite_models`; the dimension is obtained automatically from the model file.

Specify the conditioning image to use for inference with `--guide_image_path`. No preprocessing is performed, so for Canny, pass an image that has already been processed with Canny (white lines on a black background). `--control_net_preps`, `--control_net_weights`, and `--control_net_ratios` are not supported.

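For example, a Canny guide image for `--guide_image_path` can be prepared with OpenCV, in the same way as the dataset script shown later. This is only an illustrative snippet; the file names and thresholds are placeholders.

```python
# Illustrative: turn a source image into a Canny guide image
# (white lines on a black background). File names are placeholders.
import cv2

img = cv2.imread("source.png")
canny = cv2.Canny(img, 100, 200)  # thresholds are just an example
cv2.imwrite("guide_canny.png", canny)
```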

## How to synthesize a dataset

### Generating training images

Generate training images with the base model, using Web UI, ComfyUI, or a similar tool. The image size should be the default size of the model (e.g., 1024x1024). You can also use bucketing; in that case, generate at arbitrary resolutions.

The captions and other settings used when generating these images should be the same as those you will use when generating images with the trained ControlNet-LLLite model.

Save the generated images in an arbitrary directory, and specify this directory in the dataset configuration file.

You can also generate them with `sdxl_gen_img.py` in this repository. For example, run as follows:

```dos
python sdxl_gen_img.py --ckpt path/to/model.safetensors --n_iter 1 --scale 10 --steps 36 --outdir path/to/output/dir --xformers --W 1024 --H 1024 --original_width 2048 --original_height 2048 --bf16 --sampler ddim --batch_size 4 --vae_batch_size 2 --images_per_prompt 512 --max_embeddings_multiples 1 --prompt "{portrait|digital art|anime screen cap|detailed illustration} of 1girl, {standing|sitting|walking|running|dancing} on {classroom|street|town|beach|indoors|outdoors}, {looking at viewer|looking away|looking at another}, {in|wearing} {shirt and skirt|school uniform|casual wear} { |, dynamic pose}, (solo), teen age, {0-1$$smile,|blush,|kind smile,|expression less,|happy,|sadness,} {0-1$$upper body,|full body,|cowboy shot,|face focus,} trending on pixiv, {0-2$$depth of fields,|8k wallpaper,|highly detailed,|pov,} {0-1$$summer, |winter, |spring, |autumn, } beautiful face { |, from below|, from above|, from side|, from behind|, from back} --n nsfw, bad face, lowres, low quality, worst quality, low effort, watermark, signature, ugly, poorly drawn"
```

This setting is for 24GB of VRAM. Adjust `--batch_size` and `--vae_batch_size` according to your VRAM size.

The images are generated randomly using wildcards in `--prompt`. Adjust as necessary.

### Processing images

Use an external program to process the generated images. Save the processed images in an arbitrary directory. These will be the conditioning images.

For example, you can use the following script to process the images with Canny.

```python
import glob
import os
import random
import cv2
import numpy as np

IMAGES_DIR = "path/to/generated/images"
CANNY_DIR = "path/to/canny/images"

os.makedirs(CANNY_DIR, exist_ok=True)
img_files = glob.glob(IMAGES_DIR + "/*.png")
for img_file in img_files:
    can_file = CANNY_DIR + "/" + os.path.basename(img_file)
    if os.path.exists(can_file):
        print("Skip: " + img_file)
        continue

    print(img_file)

    img = cv2.imread(img_file)

    # random threshold
    # while True:
    #     threshold1 = random.randint(0, 127)
    #     threshold2 = random.randint(128, 255)
    #     if threshold2 - threshold1 > 80:
    #         break

    # fixed threshold
    threshold1 = 100
    threshold2 = 200

    img = cv2.Canny(img, threshold1, threshold2)

    cv2.imwrite(can_file, img)
```

### Creating caption files

Create a caption file for each image with the same basename as the training image. It is fine to use the same caption as the one used when generating the image.

If you generated the images with `sdxl_gen_img.py`, you can use the following script to create the caption files (`*.txt`) from the metadata in the generated images.

```python
import glob
import os
from PIL import Image

IMAGES_DIR = "path/to/generated/images"

img_files = glob.glob(IMAGES_DIR + "/*.png")
for img_file in img_files:
    cap_file = img_file.replace(".png", ".txt")
    if os.path.exists(cap_file):
        print(f"Skip: {img_file}")
        continue
    print(img_file)

    img = Image.open(img_file)
    prompt = img.text["prompt"] if "prompt" in img.text else ""
    if prompt == "":
        print(f"Prompt not found in {img_file}")

    with open(cap_file, "w") as f:
        f.write(prompt + "\n")
```

### Creating a dataset configuration file

You can specify the conditioning image directory with the command line arguments of `sdxl_train_control_net_lllite.py`. If you want to use a `.toml` file instead, specify the conditioning image directory in `conditioning_data_dir`.

```toml
[general]
flip_aug = false
color_aug = false
resolution = [1024,1024]

[[datasets]]
batch_size = 8
enable_bucket = false

[[datasets.subsets]]
image_dir = "path/to/generated/image/dir"
caption_extension = ".txt"
conditioning_data_dir = "path/to/canny/image/dir"
```

## Credit

I would like to thank lllyasviel, the author of ControlNet, furusu, who provided me with advice on implementation and helped me solve problems, and ddPn08, who implemented the ControlNet dataset.

## Sample

Canny
![kohya_ss_girl_standing_at_classroom_smiling_to_the_viewer_class_78976b3e-0d4d-4ea0-b8e3-053ae493abbc](https://github.com/kohya-ss/sd-scripts/assets/52813779/37e9a736-649b-4c0f-ab26-880a1bf319b5)

![im_20230820104253_000_1](https://github.com/kohya-ss/sd-scripts/assets/52813779/c8896900-ab86-4120-932f-6e2ae17b77c0)

![im_20230820104302_000_1](https://github.com/kohya-ss/sd-scripts/assets/52813779/b12457a0-ee3c-450e-ba9a-b712d0fe86bb)

![im_20230820104310_000_1](https://github.com/kohya-ss/sd-scripts/assets/52813779/8845b8d9-804a-44ac-9618-113a28eac8a1)