ACCC1380 committed on
Commit 99f5c2e · 1 Parent(s): ec5fd16

Upload lora-scripts/sd-scripts/library/original_unet.py with huggingface_hub

lora-scripts/sd-scripts/library/original_unet.py ADDED
@@ -0,0 +1,1919 @@
1
+ # Take only the parts needed for Stable Diffusion from Diffusers 0.10.2
2
+ # Unnecessary parts, such as unused conditional branches, have been removed
3
+ # Most of the code is copied from Diffusers
4
+ # As a constraint, the model's state_dict must be in the same format as that of Diffusers 0.10.2
5
+
6
+ # Copy from Diffusers 0.10.2 for Stable Diffusion. Most of the code is copied from Diffusers.
7
+ # Unnecessary parts are deleted by condition branching.
8
+ # As a constraint, the state_dict of the model must be in the same format as that of Diffusers 0.10.2
9
+
10
+ """
11
+ Differences between v1.5 and v2.1:
12
+ - attention_head_dim: int vs. list[int]
13
+ - cross_attention_dim: 768 vs. 1024
14
+ - use_linear_projection: absent (=False, v1.5) vs. present (True, v2.1)
15
+ - upcast_attention: False (v1.5) vs. True (v2.1)
16
+ - (the following can probably be ignored)
17
+ - sample_size: 64 vs. 96
18
+ - dual_cross_attention: absent vs. present
19
+ - num_class_embeds: absent vs. present
20
+ - only_cross_attention: absent vs. present
21
+
22
+ v1.5
23
+ {
24
+ "_class_name": "UNet2DConditionModel",
25
+ "_diffusers_version": "0.6.0",
26
+ "act_fn": "silu",
27
+ "attention_head_dim": 8,
28
+ "block_out_channels": [
29
+ 320,
30
+ 640,
31
+ 1280,
32
+ 1280
33
+ ],
34
+ "center_input_sample": false,
35
+ "cross_attention_dim": 768,
36
+ "down_block_types": [
37
+ "CrossAttnDownBlock2D",
38
+ "CrossAttnDownBlock2D",
39
+ "CrossAttnDownBlock2D",
40
+ "DownBlock2D"
41
+ ],
42
+ "downsample_padding": 1,
43
+ "flip_sin_to_cos": true,
44
+ "freq_shift": 0,
45
+ "in_channels": 4,
46
+ "layers_per_block": 2,
47
+ "mid_block_scale_factor": 1,
48
+ "norm_eps": 1e-05,
49
+ "norm_num_groups": 32,
50
+ "out_channels": 4,
51
+ "sample_size": 64,
52
+ "up_block_types": [
53
+ "UpBlock2D",
54
+ "CrossAttnUpBlock2D",
55
+ "CrossAttnUpBlock2D",
56
+ "CrossAttnUpBlock2D"
57
+ ]
58
+ }
59
+
60
+ v2.1
61
+ {
62
+ "_class_name": "UNet2DConditionModel",
63
+ "_diffusers_version": "0.10.0.dev0",
64
+ "act_fn": "silu",
65
+ "attention_head_dim": [
66
+ 5,
67
+ 10,
68
+ 20,
69
+ 20
70
+ ],
71
+ "block_out_channels": [
72
+ 320,
73
+ 640,
74
+ 1280,
75
+ 1280
76
+ ],
77
+ "center_input_sample": false,
78
+ "cross_attention_dim": 1024,
79
+ "down_block_types": [
80
+ "CrossAttnDownBlock2D",
81
+ "CrossAttnDownBlock2D",
82
+ "CrossAttnDownBlock2D",
83
+ "DownBlock2D"
84
+ ],
85
+ "downsample_padding": 1,
86
+ "dual_cross_attention": false,
87
+ "flip_sin_to_cos": true,
88
+ "freq_shift": 0,
89
+ "in_channels": 4,
90
+ "layers_per_block": 2,
91
+ "mid_block_scale_factor": 1,
92
+ "norm_eps": 1e-05,
93
+ "norm_num_groups": 32,
94
+ "num_class_embeds": null,
95
+ "only_cross_attention": false,
96
+ "out_channels": 4,
97
+ "sample_size": 96,
98
+ "up_block_types": [
99
+ "UpBlock2D",
100
+ "CrossAttnUpBlock2D",
101
+ "CrossAttnUpBlock2D",
102
+ "CrossAttnUpBlock2D"
103
+ ],
104
+ "use_linear_projection": true,
105
+ "upcast_attention": true
106
+ }
107
+ """
108
+
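+ # A minimal usage sketch: given the UNet2DConditionModel defined at the bottom of
+ # this file, the two configs above roughly correspond to constructor calls like:
+ #
+ #   unet_v15 = UNet2DConditionModel(
+ #       sample_size=64, attention_head_dim=8, cross_attention_dim=768,
+ #       use_linear_projection=False, upcast_attention=False,
+ #   )
+ #   unet_v21 = UNet2DConditionModel(
+ #       sample_size=96, attention_head_dim=[5, 10, 20, 20], cross_attention_dim=1024,
+ #       use_linear_projection=True, upcast_attention=True,
+ #   )
+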
109
+ import math
110
+ from types import SimpleNamespace
111
+ from typing import Dict, Optional, Tuple, Union
112
+ import torch
113
+ from torch import nn
114
+ from torch.nn import functional as F
115
+ from einops import rearrange
116
+ from library.utils import setup_logging
117
+ setup_logging()
118
+ import logging
119
+ logger = logging.getLogger(__name__)
120
+
121
+ BLOCK_OUT_CHANNELS: Tuple[int] = (320, 640, 1280, 1280)
122
+ TIMESTEP_INPUT_DIM = BLOCK_OUT_CHANNELS[0]
123
+ TIME_EMBED_DIM = BLOCK_OUT_CHANNELS[0] * 4
124
+ IN_CHANNELS: int = 4
125
+ OUT_CHANNELS: int = 4
126
+ LAYERS_PER_BLOCK: int = 2
127
+ LAYERS_PER_BLOCK_UP: int = LAYERS_PER_BLOCK + 1
128
+ TIME_EMBED_FLIP_SIN_TO_COS: bool = True
129
+ TIME_EMBED_FREQ_SHIFT: int = 0
130
+ NORM_GROUPS: int = 32
131
+ NORM_EPS: float = 1e-5
132
+ TRANSFORMER_NORM_NUM_GROUPS = 32
133
+
134
+ DOWN_BLOCK_TYPES = ["CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D"]
135
+ UP_BLOCK_TYPES = ["UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D"]
136
+
137
+
138
+ # region memory efficient attention
139
+
140
+ # CrossAttention using FlashAttention
141
+ # based on https://github.com/lucidrains/memory-efficient-attention-pytorch/blob/main/memory_efficient_attention_pytorch/flash_attention.py
142
+ # LICENSE MIT https://github.com/lucidrains/memory-efficient-attention-pytorch/blob/main/LICENSE
143
+
144
+ # constants
145
+
146
+ EPSILON = 1e-6
147
+
148
+ # helper functions
149
+
150
+
151
+ def exists(val):
152
+ return val is not None
153
+
154
+
155
+ def default(val, d):
156
+ return val if exists(val) else d
157
+
158
+
159
+ # flash attention forwards and backwards
160
+
161
+ # https://arxiv.org/abs/2205.14135
162
+
163
+
164
+ class FlashAttentionFunction(torch.autograd.Function):
165
+ @staticmethod
166
+ @torch.no_grad()
167
+ def forward(ctx, q, k, v, mask, causal, q_bucket_size, k_bucket_size):
168
+ """Algorithm 2 in the paper"""
169
+
170
+ device = q.device
171
+ dtype = q.dtype
172
+ max_neg_value = -torch.finfo(q.dtype).max
173
+ qk_len_diff = max(k.shape[-2] - q.shape[-2], 0)
174
+
175
+ o = torch.zeros_like(q)
176
+ all_row_sums = torch.zeros((*q.shape[:-1], 1), dtype=dtype, device=device)
177
+ all_row_maxes = torch.full((*q.shape[:-1], 1), max_neg_value, dtype=dtype, device=device)
178
+
179
+ scale = q.shape[-1] ** -0.5
180
+
181
+ if not exists(mask):
182
+ mask = (None,) * math.ceil(q.shape[-2] / q_bucket_size)
183
+ else:
184
+ mask = rearrange(mask, "b n -> b 1 1 n")
185
+ mask = mask.split(q_bucket_size, dim=-1)
186
+
187
+ row_splits = zip(
188
+ q.split(q_bucket_size, dim=-2),
189
+ o.split(q_bucket_size, dim=-2),
190
+ mask,
191
+ all_row_sums.split(q_bucket_size, dim=-2),
192
+ all_row_maxes.split(q_bucket_size, dim=-2),
193
+ )
194
+
195
+ for ind, (qc, oc, row_mask, row_sums, row_maxes) in enumerate(row_splits):
196
+ q_start_index = ind * q_bucket_size - qk_len_diff
197
+
198
+ col_splits = zip(
199
+ k.split(k_bucket_size, dim=-2),
200
+ v.split(k_bucket_size, dim=-2),
201
+ )
202
+
203
+ for k_ind, (kc, vc) in enumerate(col_splits):
204
+ k_start_index = k_ind * k_bucket_size
205
+
206
+ attn_weights = torch.einsum("... i d, ... j d -> ... i j", qc, kc) * scale
207
+
208
+ if exists(row_mask):
209
+ attn_weights.masked_fill_(~row_mask, max_neg_value)
210
+
211
+ if causal and q_start_index < (k_start_index + k_bucket_size - 1):
212
+ causal_mask = torch.ones((qc.shape[-2], kc.shape[-2]), dtype=torch.bool, device=device).triu(
213
+ q_start_index - k_start_index + 1
214
+ )
215
+ attn_weights.masked_fill_(causal_mask, max_neg_value)
216
+
217
+ block_row_maxes = attn_weights.amax(dim=-1, keepdims=True)
218
+ attn_weights -= block_row_maxes
219
+ exp_weights = torch.exp(attn_weights)
220
+
221
+ if exists(row_mask):
222
+ exp_weights.masked_fill_(~row_mask, 0.0)
223
+
224
+ block_row_sums = exp_weights.sum(dim=-1, keepdims=True).clamp(min=EPSILON)
225
+
226
+ new_row_maxes = torch.maximum(block_row_maxes, row_maxes)
227
+
228
+ exp_values = torch.einsum("... i j, ... j d -> ... i d", exp_weights, vc)
229
+
230
+ exp_row_max_diff = torch.exp(row_maxes - new_row_maxes)
231
+ exp_block_row_max_diff = torch.exp(block_row_maxes - new_row_maxes)
232
+
233
+ new_row_sums = exp_row_max_diff * row_sums + exp_block_row_max_diff * block_row_sums
234
+
235
+ oc.mul_((row_sums / new_row_sums) * exp_row_max_diff).add_((exp_block_row_max_diff / new_row_sums) * exp_values)
236
+
237
+ row_maxes.copy_(new_row_maxes)
238
+ row_sums.copy_(new_row_sums)
239
+
240
+ ctx.args = (causal, scale, mask, q_bucket_size, k_bucket_size)
241
+ ctx.save_for_backward(q, k, v, o, all_row_sums, all_row_maxes)
242
+
243
+ return o
244
+
245
+ @staticmethod
246
+ @torch.no_grad()
247
+ def backward(ctx, do):
248
+ """Algorithm 4 in the paper"""
249
+
250
+ causal, scale, mask, q_bucket_size, k_bucket_size = ctx.args
251
+ q, k, v, o, l, m = ctx.saved_tensors
252
+
253
+ device = q.device
254
+
255
+ max_neg_value = -torch.finfo(q.dtype).max
256
+ qk_len_diff = max(k.shape[-2] - q.shape[-2], 0)
257
+
258
+ dq = torch.zeros_like(q)
259
+ dk = torch.zeros_like(k)
260
+ dv = torch.zeros_like(v)
261
+
262
+ row_splits = zip(
263
+ q.split(q_bucket_size, dim=-2),
264
+ o.split(q_bucket_size, dim=-2),
265
+ do.split(q_bucket_size, dim=-2),
266
+ mask,
267
+ l.split(q_bucket_size, dim=-2),
268
+ m.split(q_bucket_size, dim=-2),
269
+ dq.split(q_bucket_size, dim=-2),
270
+ )
271
+
272
+ for ind, (qc, oc, doc, row_mask, lc, mc, dqc) in enumerate(row_splits):
273
+ q_start_index = ind * q_bucket_size - qk_len_diff
274
+
275
+ col_splits = zip(
276
+ k.split(k_bucket_size, dim=-2),
277
+ v.split(k_bucket_size, dim=-2),
278
+ dk.split(k_bucket_size, dim=-2),
279
+ dv.split(k_bucket_size, dim=-2),
280
+ )
281
+
282
+ for k_ind, (kc, vc, dkc, dvc) in enumerate(col_splits):
283
+ k_start_index = k_ind * k_bucket_size
284
+
285
+ attn_weights = torch.einsum("... i d, ... j d -> ... i j", qc, kc) * scale
286
+
287
+ if causal and q_start_index < (k_start_index + k_bucket_size - 1):
288
+ causal_mask = torch.ones((qc.shape[-2], kc.shape[-2]), dtype=torch.bool, device=device).triu(
289
+ q_start_index - k_start_index + 1
290
+ )
291
+ attn_weights.masked_fill_(causal_mask, max_neg_value)
292
+
293
+ exp_attn_weights = torch.exp(attn_weights - mc)
294
+
295
+ if exists(row_mask):
296
+ exp_attn_weights.masked_fill_(~row_mask, 0.0)
297
+
298
+ p = exp_attn_weights / lc
299
+
300
+ dv_chunk = torch.einsum("... i j, ... i d -> ... j d", p, doc)
301
+ dp = torch.einsum("... i d, ... j d -> ... i j", doc, vc)
302
+
303
+ D = (doc * oc).sum(dim=-1, keepdims=True)
304
+ ds = p * scale * (dp - D)
305
+
306
+ dq_chunk = torch.einsum("... i j, ... j d -> ... i d", ds, kc)
307
+ dk_chunk = torch.einsum("... i j, ... i d -> ... j d", ds, qc)
308
+
309
+ dqc.add_(dq_chunk)
310
+ dkc.add_(dk_chunk)
311
+ dvc.add_(dv_chunk)
312
+
313
+ return dq, dk, dv, None, None, None, None
314
+
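+ # Usage sketch: FlashAttentionFunction expects q, k, v shaped (batch, heads, seq_len, dim_head),
+ # as prepared in forward_memory_efficient_mem_eff below. A call looks roughly like
+ #
+ #   out = FlashAttentionFunction.apply(q, k, v, None, False, 512, 1024)
+ #
+ # where None is the optional mask, False disables the causal mask, and 512 / 1024 are the
+ # q / k bucket sizes used for the chunked computation.
+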
315
+
316
+ # endregion
317
+
318
+
319
+ def get_parameter_dtype(parameter: torch.nn.Module):
320
+ return next(parameter.parameters()).dtype
321
+
322
+
323
+ def get_parameter_device(parameter: torch.nn.Module):
324
+ return next(parameter.parameters()).device
325
+
326
+
327
+ def get_timestep_embedding(
328
+ timesteps: torch.Tensor,
329
+ embedding_dim: int,
330
+ flip_sin_to_cos: bool = False,
331
+ downscale_freq_shift: float = 1,
332
+ scale: float = 1,
333
+ max_period: int = 10000,
334
+ ):
335
+ """
336
+ This matches the implementation in Denoising Diffusion Probabilistic Models: Create sinusoidal timestep embeddings.
337
+
338
+ :param timesteps: a 1-D Tensor of N indices, one per batch element.
339
+ These may be fractional.
340
+ :param embedding_dim: the dimension of the output. :param max_period: controls the minimum frequency of the
341
+ embeddings. :return: an [N x dim] Tensor of positional embeddings.
342
+ """
343
+ assert len(timesteps.shape) == 1, "Timesteps should be a 1d-array"
344
+
345
+ half_dim = embedding_dim // 2
346
+ exponent = -math.log(max_period) * torch.arange(start=0, end=half_dim, dtype=torch.float32, device=timesteps.device)
347
+ exponent = exponent / (half_dim - downscale_freq_shift)
348
+
349
+ emb = torch.exp(exponent)
350
+ emb = timesteps[:, None].float() * emb[None, :]
351
+
352
+ # scale embeddings
353
+ emb = scale * emb
354
+
355
+ # concat sine and cosine embeddings
356
+ emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=-1)
357
+
358
+ # flip sine and cosine embeddings
359
+ if flip_sin_to_cos:
360
+ emb = torch.cat([emb[:, half_dim:], emb[:, :half_dim]], dim=-1)
361
+
362
+ # zero pad
363
+ if embedding_dim % 2 == 1:
364
+ emb = torch.nn.functional.pad(emb, (0, 1, 0, 0))
365
+ return emb
366
+
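+ # Shape example: for a batch of two timesteps and the block-0 channel count used by this UNet,
+ #
+ #   emb = get_timestep_embedding(torch.tensor([10, 500]), 320,
+ #                                flip_sin_to_cos=True, downscale_freq_shift=0)
+ #
+ # returns a tensor of shape [2, 320] (the sin and cos halves concatenated).
+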
367
+
368
+ # Deep Shrink: this function is intentionally not shared with other modules, to minimize dependencies.
369
+ def resize_like(x, target, mode="bicubic", align_corners=False):
370
+ org_dtype = x.dtype
371
+ if org_dtype == torch.bfloat16:
372
+ x = x.to(torch.float32)
373
+
374
+ if x.shape[-2:] != target.shape[-2:]:
375
+ if mode == "nearest":
376
+ x = F.interpolate(x, size=target.shape[-2:], mode=mode)
377
+ else:
378
+ x = F.interpolate(x, size=target.shape[-2:], mode=mode, align_corners=align_corners)
379
+
380
+ if org_dtype == torch.bfloat16:
381
+ x = x.to(org_dtype)
382
+ return x
383
+
384
+
385
+ class SampleOutput:
386
+ def __init__(self, sample):
387
+ self.sample = sample
388
+
389
+
390
+ class TimestepEmbedding(nn.Module):
391
+ def __init__(self, in_channels: int, time_embed_dim: int, act_fn: str = "silu", out_dim: int = None):
392
+ super().__init__()
393
+
394
+ self.linear_1 = nn.Linear(in_channels, time_embed_dim)
395
+ self.act = None
396
+ if act_fn == "silu":
397
+ self.act = nn.SiLU()
398
+ elif act_fn == "mish":
399
+ self.act = nn.Mish()
400
+
401
+ if out_dim is not None:
402
+ time_embed_dim_out = out_dim
403
+ else:
404
+ time_embed_dim_out = time_embed_dim
405
+ self.linear_2 = nn.Linear(time_embed_dim, time_embed_dim_out)
406
+
407
+ def forward(self, sample):
408
+ sample = self.linear_1(sample)
409
+
410
+ if self.act is not None:
411
+ sample = self.act(sample)
412
+
413
+ sample = self.linear_2(sample)
414
+ return sample
415
+
416
+
417
+ class Timesteps(nn.Module):
418
+ def __init__(self, num_channels: int, flip_sin_to_cos: bool, downscale_freq_shift: float):
419
+ super().__init__()
420
+ self.num_channels = num_channels
421
+ self.flip_sin_to_cos = flip_sin_to_cos
422
+ self.downscale_freq_shift = downscale_freq_shift
423
+
424
+ def forward(self, timesteps):
425
+ t_emb = get_timestep_embedding(
426
+ timesteps,
427
+ self.num_channels,
428
+ flip_sin_to_cos=self.flip_sin_to_cos,
429
+ downscale_freq_shift=self.downscale_freq_shift,
430
+ )
431
+ return t_emb
432
+
433
+
434
+ class ResnetBlock2D(nn.Module):
435
+ def __init__(
436
+ self,
437
+ in_channels,
438
+ out_channels,
439
+ ):
440
+ super().__init__()
441
+ self.in_channels = in_channels
442
+ self.out_channels = out_channels
443
+
444
+ self.norm1 = torch.nn.GroupNorm(num_groups=NORM_GROUPS, num_channels=in_channels, eps=NORM_EPS, affine=True)
445
+
446
+ self.conv1 = torch.nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
447
+
448
+ self.time_emb_proj = torch.nn.Linear(TIME_EMBED_DIM, out_channels)
449
+
450
+ self.norm2 = torch.nn.GroupNorm(num_groups=NORM_GROUPS, num_channels=out_channels, eps=NORM_EPS, affine=True)
451
+ self.conv2 = torch.nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1)
452
+
453
+ # if non_linearity == "swish":
454
+ self.nonlinearity = lambda x: F.silu(x)
455
+
456
+ self.use_in_shortcut = self.in_channels != self.out_channels
457
+
458
+ self.conv_shortcut = None
459
+ if self.use_in_shortcut:
460
+ self.conv_shortcut = torch.nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0)
461
+
462
+ def forward(self, input_tensor, temb):
463
+ hidden_states = input_tensor
464
+
465
+ hidden_states = self.norm1(hidden_states)
466
+ hidden_states = self.nonlinearity(hidden_states)
467
+
468
+ hidden_states = self.conv1(hidden_states)
469
+
470
+ temb = self.time_emb_proj(self.nonlinearity(temb))[:, :, None, None]
471
+ hidden_states = hidden_states + temb
472
+
473
+ hidden_states = self.norm2(hidden_states)
474
+ hidden_states = self.nonlinearity(hidden_states)
475
+
476
+ hidden_states = self.conv2(hidden_states)
477
+
478
+ if self.conv_shortcut is not None:
479
+ input_tensor = self.conv_shortcut(input_tensor)
480
+
481
+ output_tensor = input_tensor + hidden_states
482
+
483
+ return output_tensor
484
+
485
+
486
+ class DownBlock2D(nn.Module):
487
+ def __init__(
488
+ self,
489
+ in_channels: int,
490
+ out_channels: int,
491
+ add_downsample=True,
492
+ ):
493
+ super().__init__()
494
+
495
+ self.has_cross_attention = False
496
+ resnets = []
497
+
498
+ for i in range(LAYERS_PER_BLOCK):
499
+ in_channels = in_channels if i == 0 else out_channels
500
+ resnets.append(
501
+ ResnetBlock2D(
502
+ in_channels=in_channels,
503
+ out_channels=out_channels,
504
+ )
505
+ )
506
+ self.resnets = nn.ModuleList(resnets)
507
+
508
+ if add_downsample:
509
+ self.downsamplers = nn.ModuleList([Downsample2D(out_channels, out_channels=out_channels)])  # ModuleList so the downsampler parameters are registered
510
+ else:
511
+ self.downsamplers = None
512
+
513
+ self.gradient_checkpointing = False
514
+
515
+ def set_use_memory_efficient_attention(self, xformers, mem_eff):
516
+ pass
517
+
518
+ def set_use_sdpa(self, sdpa):
519
+ pass
520
+
521
+ def forward(self, hidden_states, temb=None):
522
+ output_states = ()
523
+
524
+ for resnet in self.resnets:
525
+ if self.training and self.gradient_checkpointing:
526
+
527
+ def create_custom_forward(module):
528
+ def custom_forward(*inputs):
529
+ return module(*inputs)
530
+
531
+ return custom_forward
532
+
533
+ hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb)
534
+ else:
535
+ hidden_states = resnet(hidden_states, temb)
536
+
537
+ output_states += (hidden_states,)
538
+
539
+ if self.downsamplers is not None:
540
+ for downsampler in self.downsamplers:
541
+ hidden_states = downsampler(hidden_states)
542
+
543
+ output_states += (hidden_states,)
544
+
545
+ return hidden_states, output_states
546
+
547
+
548
+ class Downsample2D(nn.Module):
549
+ def __init__(self, channels, out_channels):
550
+ super().__init__()
551
+
552
+ self.channels = channels
553
+ self.out_channels = out_channels
554
+
555
+ self.conv = nn.Conv2d(self.channels, self.out_channels, 3, stride=2, padding=1)
556
+
557
+ def forward(self, hidden_states):
558
+ assert hidden_states.shape[1] == self.channels
559
+ hidden_states = self.conv(hidden_states)
560
+
561
+ return hidden_states
562
+
563
+
564
+ class CrossAttention(nn.Module):
565
+ def __init__(
566
+ self,
567
+ query_dim: int,
568
+ cross_attention_dim: Optional[int] = None,
569
+ heads: int = 8,
570
+ dim_head: int = 64,
571
+ upcast_attention: bool = False,
572
+ ):
573
+ super().__init__()
574
+ inner_dim = dim_head * heads
575
+ cross_attention_dim = cross_attention_dim if cross_attention_dim is not None else query_dim
576
+ self.upcast_attention = upcast_attention
577
+
578
+ self.scale = dim_head**-0.5
579
+ self.heads = heads
580
+
581
+ self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
582
+ self.to_k = nn.Linear(cross_attention_dim, inner_dim, bias=False)
583
+ self.to_v = nn.Linear(cross_attention_dim, inner_dim, bias=False)
584
+
585
+ self.to_out = nn.ModuleList([])
586
+ self.to_out.append(nn.Linear(inner_dim, query_dim))
587
+ # no dropout here
588
+
589
+ self.use_memory_efficient_attention_xformers = False
590
+ self.use_memory_efficient_attention_mem_eff = False
591
+ self.use_sdpa = False
592
+
593
+ # Attention processor
594
+ self.processor = None
595
+
596
+ def set_use_memory_efficient_attention(self, xformers, mem_eff):
597
+ self.use_memory_efficient_attention_xformers = xformers
598
+ self.use_memory_efficient_attention_mem_eff = mem_eff
599
+
600
+ def set_use_sdpa(self, sdpa):
601
+ self.use_sdpa = sdpa
602
+
603
+ def reshape_heads_to_batch_dim(self, tensor):
604
+ batch_size, seq_len, dim = tensor.shape
605
+ head_size = self.heads
606
+ tensor = tensor.reshape(batch_size, seq_len, head_size, dim // head_size)
607
+ tensor = tensor.permute(0, 2, 1, 3).reshape(batch_size * head_size, seq_len, dim // head_size)
608
+ return tensor
609
+
610
+ def reshape_batch_dim_to_heads(self, tensor):
611
+ batch_size, seq_len, dim = tensor.shape
612
+ head_size = self.heads
613
+ tensor = tensor.reshape(batch_size // head_size, head_size, seq_len, dim)
614
+ tensor = tensor.permute(0, 2, 1, 3).reshape(batch_size // head_size, seq_len, dim * head_size)
615
+ return tensor
616
+
617
+ def set_processor(self, processor):
618
+ self.processor = processor
619
+
620
+ def get_processor(self):
621
+ return self.processor
622
+
623
+ def forward(self, hidden_states, context=None, mask=None, **kwargs):
624
+ if self.processor is not None:
625
+ (
626
+ hidden_states,
627
+ encoder_hidden_states,
628
+ attention_mask,
629
+ ) = translate_attention_names_from_diffusers(
630
+ hidden_states=hidden_states, context=context, mask=mask, **kwargs
631
+ )
632
+ return self.processor(
633
+ attn=self,
634
+ hidden_states=hidden_states,
635
+ encoder_hidden_states=context,
636
+ attention_mask=mask,
637
+ **kwargs
638
+ )
639
+ if self.use_memory_efficient_attention_xformers:
640
+ return self.forward_memory_efficient_xformers(hidden_states, context, mask)
641
+ if self.use_memory_efficient_attention_mem_eff:
642
+ return self.forward_memory_efficient_mem_eff(hidden_states, context, mask)
643
+ if self.use_sdpa:
644
+ return self.forward_sdpa(hidden_states, context, mask)
645
+
646
+ query = self.to_q(hidden_states)
647
+ context = context if context is not None else hidden_states
648
+ key = self.to_k(context)
649
+ value = self.to_v(context)
650
+
651
+ query = self.reshape_heads_to_batch_dim(query)
652
+ key = self.reshape_heads_to_batch_dim(key)
653
+ value = self.reshape_heads_to_batch_dim(value)
654
+
655
+ hidden_states = self._attention(query, key, value)
656
+
657
+ # linear proj
658
+ hidden_states = self.to_out[0](hidden_states)
659
+ # hidden_states = self.to_out[1](hidden_states) # no dropout
660
+ return hidden_states
661
+
662
+ def _attention(self, query, key, value):
663
+ if self.upcast_attention:
664
+ query = query.float()
665
+ key = key.float()
666
+
667
+ attention_scores = torch.baddbmm(
668
+ torch.empty(query.shape[0], query.shape[1], key.shape[1], dtype=query.dtype, device=query.device),
669
+ query,
670
+ key.transpose(-1, -2),
671
+ beta=0,
672
+ alpha=self.scale,
673
+ )
674
+ attention_probs = attention_scores.softmax(dim=-1)
675
+
676
+ # cast back to the original dtype
677
+ attention_probs = attention_probs.to(value.dtype)
678
+
679
+ # compute attention output
680
+ hidden_states = torch.bmm(attention_probs, value)
681
+
682
+ # reshape hidden_states
683
+ hidden_states = self.reshape_batch_dim_to_heads(hidden_states)
684
+ return hidden_states
685
+
686
+ # TODO support Hypernetworks
687
+ def forward_memory_efficient_xformers(self, x, context=None, mask=None):
688
+ import xformers.ops
689
+
690
+ h = self.heads
691
+ q_in = self.to_q(x)
692
+ context = context if context is not None else x
693
+ context = context.to(x.dtype)
694
+ k_in = self.to_k(context)
695
+ v_in = self.to_v(context)
696
+
697
+ q, k, v = map(lambda t: rearrange(t, "b n (h d) -> b n h d", h=h), (q_in, k_in, v_in))
698
+ del q_in, k_in, v_in
699
+
700
+ q = q.contiguous()
701
+ k = k.contiguous()
702
+ v = v.contiguous()
703
+ out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)  # picks the best available kernel automatically
704
+
705
+ out = rearrange(out, "b n h d -> b n (h d)", h=h)
706
+
707
+ out = self.to_out[0](out)
708
+ return out
709
+
710
+ def forward_memory_efficient_mem_eff(self, x, context=None, mask=None):
711
+ flash_func = FlashAttentionFunction
712
+
713
+ q_bucket_size = 512
714
+ k_bucket_size = 1024
715
+
716
+ h = self.heads
717
+ q = self.to_q(x)
718
+ context = context if context is not None else x
719
+ context = context.to(x.dtype)
720
+ k = self.to_k(context)
721
+ v = self.to_v(context)
722
+ del context, x
723
+
724
+ q, k, v = map(lambda t: rearrange(t, "b n (h d) -> b h n d", h=h), (q, k, v))
725
+
726
+ out = flash_func.apply(q, k, v, mask, False, q_bucket_size, k_bucket_size)
727
+
728
+ out = rearrange(out, "b h n d -> b n (h d)")
729
+
730
+ out = self.to_out[0](out)
731
+ return out
732
+
733
+ def forward_sdpa(self, x, context=None, mask=None):
734
+ h = self.heads
735
+ q_in = self.to_q(x)
736
+ context = context if context is not None else x
737
+ context = context.to(x.dtype)
738
+ k_in = self.to_k(context)
739
+ v_in = self.to_v(context)
740
+
741
+ q, k, v = map(lambda t: rearrange(t, "b n (h d) -> b h n d", h=h), (q_in, k_in, v_in))
742
+ del q_in, k_in, v_in
743
+
744
+ out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
745
+
746
+ out = rearrange(out, "b h n d -> b n (h d)", h=h)
747
+
748
+ out = self.to_out[0](out)
749
+ return out
750
+
751
+ def translate_attention_names_from_diffusers(
752
+ hidden_states: torch.FloatTensor,
753
+ context: Optional[torch.FloatTensor] = None,
754
+ mask: Optional[torch.FloatTensor] = None,
755
+ # HF naming
756
+ encoder_hidden_states: Optional[torch.FloatTensor] = None,
757
+ attention_mask: Optional[torch.FloatTensor] = None
758
+ ):
759
+ # translate from hugging face diffusers
760
+ context = context if context is not None else encoder_hidden_states
761
+
762
+ # translate from hugging face diffusers
763
+ mask = mask if mask is not None else attention_mask
764
+
765
+ return hidden_states, context, mask
766
+
767
+ # feedforward
768
+ class GEGLU(nn.Module):
769
+ r"""
770
+ A variant of the gated linear unit activation function from https://arxiv.org/abs/2002.05202.
771
+
772
+ Parameters:
773
+ dim_in (`int`): The number of channels in the input.
774
+ dim_out (`int`): The number of channels in the output.
775
+ """
776
+
777
+ def __init__(self, dim_in: int, dim_out: int):
778
+ super().__init__()
779
+ self.proj = nn.Linear(dim_in, dim_out * 2)
780
+
781
+ def gelu(self, gate):
782
+ if gate.device.type != "mps":
783
+ return F.gelu(gate)
784
+ # mps: gelu is not implemented for float16
785
+ return F.gelu(gate.to(dtype=torch.float32)).to(dtype=gate.dtype)
786
+
787
+ def forward(self, hidden_states):
788
+ hidden_states, gate = self.proj(hidden_states).chunk(2, dim=-1)
789
+ return hidden_states * self.gelu(gate)
790
+
791
+
792
+ class FeedForward(nn.Module):
793
+ def __init__(
794
+ self,
795
+ dim: int,
796
+ ):
797
+ super().__init__()
798
+ inner_dim = int(dim * 4) # mult is always 4
799
+
800
+ self.net = nn.ModuleList([])
801
+ # project in
802
+ self.net.append(GEGLU(dim, inner_dim))
803
+ # project dropout
804
+ self.net.append(nn.Identity()) # nn.Dropout(0)) # dummy for dropout with 0
805
+ # project out
806
+ self.net.append(nn.Linear(inner_dim, dim))
807
+
808
+ def forward(self, hidden_states):
809
+ for module in self.net:
810
+ hidden_states = module(hidden_states)
811
+ return hidden_states
812
+
813
+
814
+ class BasicTransformerBlock(nn.Module):
815
+ def __init__(
816
+ self, dim: int, num_attention_heads: int, attention_head_dim: int, cross_attention_dim: int, upcast_attention: bool = False
817
+ ):
818
+ super().__init__()
819
+
820
+ # 1. Self-Attn
821
+ self.attn1 = CrossAttention(
822
+ query_dim=dim,
823
+ cross_attention_dim=None,
824
+ heads=num_attention_heads,
825
+ dim_head=attention_head_dim,
826
+ upcast_attention=upcast_attention,
827
+ )
828
+ self.ff = FeedForward(dim)
829
+
830
+ # 2. Cross-Attn
831
+ self.attn2 = CrossAttention(
832
+ query_dim=dim,
833
+ cross_attention_dim=cross_attention_dim,
834
+ heads=num_attention_heads,
835
+ dim_head=attention_head_dim,
836
+ upcast_attention=upcast_attention,
837
+ )
838
+
839
+ self.norm1 = nn.LayerNorm(dim)
840
+ self.norm2 = nn.LayerNorm(dim)
841
+
842
+ # 3. Feed-forward
843
+ self.norm3 = nn.LayerNorm(dim)
844
+
845
+ def set_use_memory_efficient_attention(self, xformers: bool, mem_eff: bool):
846
+ self.attn1.set_use_memory_efficient_attention(xformers, mem_eff)
847
+ self.attn2.set_use_memory_efficient_attention(xformers, mem_eff)
848
+
849
+ def set_use_sdpa(self, sdpa: bool):
850
+ self.attn1.set_use_sdpa(sdpa)
851
+ self.attn2.set_use_sdpa(sdpa)
852
+
853
+ def forward(self, hidden_states, context=None, timestep=None):
854
+ # 1. Self-Attention
855
+ norm_hidden_states = self.norm1(hidden_states)
856
+
857
+ hidden_states = self.attn1(norm_hidden_states) + hidden_states
858
+
859
+ # 2. Cross-Attention
860
+ norm_hidden_states = self.norm2(hidden_states)
861
+ hidden_states = self.attn2(norm_hidden_states, context=context) + hidden_states
862
+
863
+ # 3. Feed-forward
864
+ hidden_states = self.ff(self.norm3(hidden_states)) + hidden_states
865
+
866
+ return hidden_states
867
+
868
+
869
+ class Transformer2DModel(nn.Module):
870
+ def __init__(
871
+ self,
872
+ num_attention_heads: int = 16,
873
+ attention_head_dim: int = 88,
874
+ in_channels: Optional[int] = None,
875
+ cross_attention_dim: Optional[int] = None,
876
+ use_linear_projection: bool = False,
877
+ upcast_attention: bool = False,
878
+ ):
879
+ super().__init__()
880
+ self.in_channels = in_channels
881
+ self.num_attention_heads = num_attention_heads
882
+ self.attention_head_dim = attention_head_dim
883
+ inner_dim = num_attention_heads * attention_head_dim
884
+ self.use_linear_projection = use_linear_projection
885
+
886
+ self.norm = torch.nn.GroupNorm(num_groups=TRANSFORMER_NORM_NUM_GROUPS, num_channels=in_channels, eps=1e-6, affine=True)
887
+
888
+ if use_linear_projection:
889
+ self.proj_in = nn.Linear(in_channels, inner_dim)
890
+ else:
891
+ self.proj_in = nn.Conv2d(in_channels, inner_dim, kernel_size=1, stride=1, padding=0)
892
+
893
+ self.transformer_blocks = nn.ModuleList(
894
+ [
895
+ BasicTransformerBlock(
896
+ inner_dim,
897
+ num_attention_heads,
898
+ attention_head_dim,
899
+ cross_attention_dim=cross_attention_dim,
900
+ upcast_attention=upcast_attention,
901
+ )
902
+ ]
903
+ )
904
+
905
+ if use_linear_projection:
906
+ self.proj_out = nn.Linear(in_channels, inner_dim)
907
+ else:
908
+ self.proj_out = nn.Conv2d(inner_dim, in_channels, kernel_size=1, stride=1, padding=0)
909
+
910
+ def set_use_memory_efficient_attention(self, xformers, mem_eff):
911
+ for transformer in self.transformer_blocks:
912
+ transformer.set_use_memory_efficient_attention(xformers, mem_eff)
913
+
914
+ def set_use_sdpa(self, sdpa):
915
+ for transformer in self.transformer_blocks:
916
+ transformer.set_use_sdpa(sdpa)
917
+
918
+ def forward(self, hidden_states, encoder_hidden_states=None, timestep=None, return_dict: bool = True):
919
+ # 1. Input
920
+ batch, _, height, width = hidden_states.shape
921
+ residual = hidden_states
922
+
923
+ hidden_states = self.norm(hidden_states)
924
+ if not self.use_linear_projection:
925
+ hidden_states = self.proj_in(hidden_states)
926
+ inner_dim = hidden_states.shape[1]
927
+ hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * width, inner_dim)
928
+ else:
929
+ inner_dim = hidden_states.shape[1]
930
+ hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * width, inner_dim)
931
+ hidden_states = self.proj_in(hidden_states)
932
+
933
+ # 2. Blocks
934
+ for block in self.transformer_blocks:
935
+ hidden_states = block(hidden_states, context=encoder_hidden_states, timestep=timestep)
936
+
937
+ # 3. Output
938
+ if not self.use_linear_projection:
939
+ hidden_states = hidden_states.reshape(batch, height, width, inner_dim).permute(0, 3, 1, 2).contiguous()
940
+ hidden_states = self.proj_out(hidden_states)
941
+ else:
942
+ hidden_states = self.proj_out(hidden_states)
943
+ hidden_states = hidden_states.reshape(batch, height, width, inner_dim).permute(0, 3, 1, 2).contiguous()
944
+
945
+ output = hidden_states + residual
946
+
947
+ if not return_dict:
948
+ return (output,)
949
+
950
+ return SampleOutput(sample=output)
951
+
952
+
953
+ class CrossAttnDownBlock2D(nn.Module):
954
+ def __init__(
955
+ self,
956
+ in_channels: int,
957
+ out_channels: int,
958
+ add_downsample=True,
959
+ cross_attention_dim=1280,
960
+ attn_num_head_channels=1,
961
+ use_linear_projection=False,
962
+ upcast_attention=False,
963
+ ):
964
+ super().__init__()
965
+ self.has_cross_attention = True
966
+ resnets = []
967
+ attentions = []
968
+
969
+ self.attn_num_head_channels = attn_num_head_channels
970
+
971
+ for i in range(LAYERS_PER_BLOCK):
972
+ in_channels = in_channels if i == 0 else out_channels
973
+
974
+ resnets.append(ResnetBlock2D(in_channels=in_channels, out_channels=out_channels))
975
+ attentions.append(
976
+ Transformer2DModel(
977
+ attn_num_head_channels,
978
+ out_channels // attn_num_head_channels,
979
+ in_channels=out_channels,
980
+ cross_attention_dim=cross_attention_dim,
981
+ use_linear_projection=use_linear_projection,
982
+ upcast_attention=upcast_attention,
983
+ )
984
+ )
985
+ self.attentions = nn.ModuleList(attentions)
986
+ self.resnets = nn.ModuleList(resnets)
987
+
988
+ if add_downsample:
989
+ self.downsamplers = nn.ModuleList([Downsample2D(out_channels, out_channels)])
990
+ else:
991
+ self.downsamplers = None
992
+
993
+ self.gradient_checkpointing = False
994
+
995
+ def set_use_memory_efficient_attention(self, xformers, mem_eff):
996
+ for attn in self.attentions:
997
+ attn.set_use_memory_efficient_attention(xformers, mem_eff)
998
+
999
+ def set_use_sdpa(self, sdpa):
1000
+ for attn in self.attentions:
1001
+ attn.set_use_sdpa(sdpa)
1002
+
1003
+ def forward(self, hidden_states, temb=None, encoder_hidden_states=None):
1004
+ output_states = ()
1005
+
1006
+ for resnet, attn in zip(self.resnets, self.attentions):
1007
+ if self.training and self.gradient_checkpointing:
1008
+
1009
+ def create_custom_forward(module, return_dict=None):
1010
+ def custom_forward(*inputs):
1011
+ if return_dict is not None:
1012
+ return module(*inputs, return_dict=return_dict)
1013
+ else:
1014
+ return module(*inputs)
1015
+
1016
+ return custom_forward
1017
+
1018
+ hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb)
1019
+ hidden_states = torch.utils.checkpoint.checkpoint(
1020
+ create_custom_forward(attn, return_dict=False), hidden_states, encoder_hidden_states
1021
+ )[0]
1022
+ else:
1023
+ hidden_states = resnet(hidden_states, temb)
1024
+ hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample
1025
+
1026
+ output_states += (hidden_states,)
1027
+
1028
+ if self.downsamplers is not None:
1029
+ for downsampler in self.downsamplers:
1030
+ hidden_states = downsampler(hidden_states)
1031
+
1032
+ output_states += (hidden_states,)
1033
+
1034
+ return hidden_states, output_states
1035
+
1036
+
1037
+ class UNetMidBlock2DCrossAttn(nn.Module):
1038
+ def __init__(
1039
+ self,
1040
+ in_channels: int,
1041
+ attn_num_head_channels=1,
1042
+ cross_attention_dim=1280,
1043
+ use_linear_projection=False,
1044
+ ):
1045
+ super().__init__()
1046
+
1047
+ self.has_cross_attention = True
1048
+ self.attn_num_head_channels = attn_num_head_channels
1049
+
1050
+ # Middle block has two resnets and one attention
1051
+ resnets = [
1052
+ ResnetBlock2D(
1053
+ in_channels=in_channels,
1054
+ out_channels=in_channels,
1055
+ ),
1056
+ ResnetBlock2D(
1057
+ in_channels=in_channels,
1058
+ out_channels=in_channels,
1059
+ ),
1060
+ ]
1061
+ attentions = [
1062
+ Transformer2DModel(
1063
+ attn_num_head_channels,
1064
+ in_channels // attn_num_head_channels,
1065
+ in_channels=in_channels,
1066
+ cross_attention_dim=cross_attention_dim,
1067
+ use_linear_projection=use_linear_projection,
1068
+ )
1069
+ ]
1070
+
1071
+ self.attentions = nn.ModuleList(attentions)
1072
+ self.resnets = nn.ModuleList(resnets)
1073
+
1074
+ self.gradient_checkpointing = False
1075
+
1076
+ def set_use_memory_efficient_attention(self, xformers, mem_eff):
1077
+ for attn in self.attentions:
1078
+ attn.set_use_memory_efficient_attention(xformers, mem_eff)
1079
+
1080
+ def set_use_sdpa(self, sdpa):
1081
+ for attn in self.attentions:
1082
+ attn.set_use_sdpa(sdpa)
1083
+
1084
+ def forward(self, hidden_states, temb=None, encoder_hidden_states=None):
1085
+ for i, resnet in enumerate(self.resnets):
1086
+ attn = None if i == 0 else self.attentions[i - 1]
1087
+
1088
+ if self.training and self.gradient_checkpointing:
1089
+
1090
+ def create_custom_forward(module, return_dict=None):
1091
+ def custom_forward(*inputs):
1092
+ if return_dict is not None:
1093
+ return module(*inputs, return_dict=return_dict)
1094
+ else:
1095
+ return module(*inputs)
1096
+
1097
+ return custom_forward
1098
+
1099
+ if attn is not None:
1100
+ hidden_states = torch.utils.checkpoint.checkpoint(
1101
+ create_custom_forward(attn, return_dict=False), hidden_states, encoder_hidden_states
1102
+ )[0]
1103
+
1104
+ hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb)
1105
+ else:
1106
+ if attn is not None:
1107
+ hidden_states = attn(hidden_states, encoder_hidden_states).sample
1108
+ hidden_states = resnet(hidden_states, temb)
1109
+
1110
+ return hidden_states
1111
+
1112
+
1113
+ class Upsample2D(nn.Module):
1114
+ def __init__(self, channels, out_channels):
1115
+ super().__init__()
1116
+ self.channels = channels
1117
+ self.out_channels = out_channels
1118
+ self.conv = nn.Conv2d(self.channels, self.out_channels, 3, padding=1)
1119
+
1120
+ def forward(self, hidden_states, output_size):
1121
+ assert hidden_states.shape[1] == self.channels
1122
+
1123
+ # Cast to float32 to as 'upsample_nearest2d_out_frame' op does not support bfloat16
1124
+ # TODO(Suraj): Remove this cast once the issue is fixed in PyTorch
1125
+ # https://github.com/pytorch/pytorch/issues/86679
1126
+ dtype = hidden_states.dtype
1127
+ if dtype == torch.bfloat16:
1128
+ hidden_states = hidden_states.to(torch.float32)
1129
+
1130
+ # upsample_nearest_nhwc fails with large batch sizes. see https://github.com/huggingface/diffusers/issues/984
1131
+ if hidden_states.shape[0] >= 64:
1132
+ hidden_states = hidden_states.contiguous()
1133
+
1134
+ # if `output_size` is passed we force the interpolation output size and do not make use of `scale_factor=2`
1135
+ if output_size is None:
1136
+ hidden_states = F.interpolate(hidden_states, scale_factor=2.0, mode="nearest")
1137
+ else:
1138
+ hidden_states = F.interpolate(hidden_states, size=output_size, mode="nearest")
1139
+
1140
+ # If the input is bfloat16, we cast back to bfloat16
1141
+ if dtype == torch.bfloat16:
1142
+ hidden_states = hidden_states.to(dtype)
1143
+
1144
+ hidden_states = self.conv(hidden_states)
1145
+
1146
+ return hidden_states
1147
+
1148
+
1149
+ class UpBlock2D(nn.Module):
1150
+ def __init__(
1151
+ self,
1152
+ in_channels: int,
1153
+ prev_output_channel: int,
1154
+ out_channels: int,
1155
+ add_upsample=True,
1156
+ ):
1157
+ super().__init__()
1158
+
1159
+ self.has_cross_attention = False
1160
+ resnets = []
1161
+
1162
+ for i in range(LAYERS_PER_BLOCK_UP):
1163
+ res_skip_channels = in_channels if (i == LAYERS_PER_BLOCK_UP - 1) else out_channels
1164
+ resnet_in_channels = prev_output_channel if i == 0 else out_channels
1165
+
1166
+ resnets.append(
1167
+ ResnetBlock2D(
1168
+ in_channels=resnet_in_channels + res_skip_channels,
1169
+ out_channels=out_channels,
1170
+ )
1171
+ )
1172
+
1173
+ self.resnets = nn.ModuleList(resnets)
1174
+
1175
+ if add_upsample:
1176
+ self.upsamplers = nn.ModuleList([Upsample2D(out_channels, out_channels)])
1177
+ else:
1178
+ self.upsamplers = None
1179
+
1180
+ self.gradient_checkpointing = False
1181
+
1182
+ def set_use_memory_efficient_attention(self, xformers, mem_eff):
1183
+ pass
1184
+
1185
+ def set_use_sdpa(self, sdpa):
1186
+ pass
1187
+
1188
+ def forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None):
1189
+ for resnet in self.resnets:
1190
+ # pop res hidden states
1191
+ res_hidden_states = res_hidden_states_tuple[-1]
1192
+ res_hidden_states_tuple = res_hidden_states_tuple[:-1]
1193
+
1194
+ hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
1195
+
1196
+ if self.training and self.gradient_checkpointing:
1197
+
1198
+ def create_custom_forward(module):
1199
+ def custom_forward(*inputs):
1200
+ return module(*inputs)
1201
+
1202
+ return custom_forward
1203
+
1204
+ hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb)
1205
+ else:
1206
+ hidden_states = resnet(hidden_states, temb)
1207
+
1208
+ if self.upsamplers is not None:
1209
+ for upsampler in self.upsamplers:
1210
+ hidden_states = upsampler(hidden_states, upsample_size)
1211
+
1212
+ return hidden_states
1213
+
1214
+
1215
+ class CrossAttnUpBlock2D(nn.Module):
1216
+ def __init__(
1217
+ self,
1218
+ in_channels: int,
1219
+ out_channels: int,
1220
+ prev_output_channel: int,
1221
+ attn_num_head_channels=1,
1222
+ cross_attention_dim=1280,
1223
+ add_upsample=True,
1224
+ use_linear_projection=False,
1225
+ upcast_attention=False,
1226
+ ):
1227
+ super().__init__()
1228
+ resnets = []
1229
+ attentions = []
1230
+
1231
+ self.has_cross_attention = True
1232
+ self.attn_num_head_channels = attn_num_head_channels
1233
+
1234
+ for i in range(LAYERS_PER_BLOCK_UP):
1235
+ res_skip_channels = in_channels if (i == LAYERS_PER_BLOCK_UP - 1) else out_channels
1236
+ resnet_in_channels = prev_output_channel if i == 0 else out_channels
1237
+
1238
+ resnets.append(
1239
+ ResnetBlock2D(
1240
+ in_channels=resnet_in_channels + res_skip_channels,
1241
+ out_channels=out_channels,
1242
+ )
1243
+ )
1244
+ attentions.append(
1245
+ Transformer2DModel(
1246
+ attn_num_head_channels,
1247
+ out_channels // attn_num_head_channels,
1248
+ in_channels=out_channels,
1249
+ cross_attention_dim=cross_attention_dim,
1250
+ use_linear_projection=use_linear_projection,
1251
+ upcast_attention=upcast_attention,
1252
+ )
1253
+ )
1254
+
1255
+ self.attentions = nn.ModuleList(attentions)
1256
+ self.resnets = nn.ModuleList(resnets)
1257
+
1258
+ if add_upsample:
1259
+ self.upsamplers = nn.ModuleList([Upsample2D(out_channels, out_channels)])
1260
+ else:
1261
+ self.upsamplers = None
1262
+
1263
+ self.gradient_checkpointing = False
1264
+
1265
+ def set_use_memory_efficient_attention(self, xformers, mem_eff):
1266
+ for attn in self.attentions:
1267
+ attn.set_use_memory_efficient_attention(xformers, mem_eff)
1268
+
1269
+ def set_use_sdpa(self, sdpa):
1270
+ for attn in self.attentions:
1271
+ attn.set_use_sdpa(sdpa)
1272
+
1273
+ def forward(
1274
+ self,
1275
+ hidden_states,
1276
+ res_hidden_states_tuple,
1277
+ temb=None,
1278
+ encoder_hidden_states=None,
1279
+ upsample_size=None,
1280
+ ):
1281
+ for resnet, attn in zip(self.resnets, self.attentions):
1282
+ # pop res hidden states
1283
+ res_hidden_states = res_hidden_states_tuple[-1]
1284
+ res_hidden_states_tuple = res_hidden_states_tuple[:-1]
1285
+
1286
+ hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
1287
+
1288
+ if self.training and self.gradient_checkpointing:
1289
+
1290
+ def create_custom_forward(module, return_dict=None):
1291
+ def custom_forward(*inputs):
1292
+ if return_dict is not None:
1293
+ return module(*inputs, return_dict=return_dict)
1294
+ else:
1295
+ return module(*inputs)
1296
+
1297
+ return custom_forward
1298
+
1299
+ hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb)
1300
+ hidden_states = torch.utils.checkpoint.checkpoint(
1301
+ create_custom_forward(attn, return_dict=False), hidden_states, encoder_hidden_states
1302
+ )[0]
1303
+ else:
1304
+ hidden_states = resnet(hidden_states, temb)
1305
+ hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample
1306
+
1307
+ if self.upsamplers is not None:
1308
+ for upsampler in self.upsamplers:
1309
+ hidden_states = upsampler(hidden_states, upsample_size)
1310
+
1311
+ return hidden_states
1312
+
1313
+
1314
+ def get_down_block(
1315
+ down_block_type,
1316
+ in_channels,
1317
+ out_channels,
1318
+ add_downsample,
1319
+ attn_num_head_channels,
1320
+ cross_attention_dim,
1321
+ use_linear_projection,
1322
+ upcast_attention,
1323
+ ):
1324
+ if down_block_type == "DownBlock2D":
1325
+ return DownBlock2D(
1326
+ in_channels=in_channels,
1327
+ out_channels=out_channels,
1328
+ add_downsample=add_downsample,
1329
+ )
1330
+ elif down_block_type == "CrossAttnDownBlock2D":
1331
+ return CrossAttnDownBlock2D(
1332
+ in_channels=in_channels,
1333
+ out_channels=out_channels,
1334
+ add_downsample=add_downsample,
1335
+ cross_attention_dim=cross_attention_dim,
1336
+ attn_num_head_channels=attn_num_head_channels,
1337
+ use_linear_projection=use_linear_projection,
1338
+ upcast_attention=upcast_attention,
1339
+ )
1340
+
1341
+
1342
+ def get_up_block(
1343
+ up_block_type,
1344
+ in_channels,
1345
+ out_channels,
1346
+ prev_output_channel,
1347
+ add_upsample,
1348
+ attn_num_head_channels,
1349
+ cross_attention_dim=None,
1350
+ use_linear_projection=False,
1351
+ upcast_attention=False,
1352
+ ):
1353
+ if up_block_type == "UpBlock2D":
1354
+ return UpBlock2D(
1355
+ in_channels=in_channels,
1356
+ prev_output_channel=prev_output_channel,
1357
+ out_channels=out_channels,
1358
+ add_upsample=add_upsample,
1359
+ )
1360
+ elif up_block_type == "CrossAttnUpBlock2D":
1361
+ return CrossAttnUpBlock2D(
1362
+ in_channels=in_channels,
1363
+ out_channels=out_channels,
1364
+ prev_output_channel=prev_output_channel,
1365
+ attn_num_head_channels=attn_num_head_channels,
1366
+ cross_attention_dim=cross_attention_dim,
1367
+ add_upsample=add_upsample,
1368
+ use_linear_projection=use_linear_projection,
1369
+ upcast_attention=upcast_attention,
1370
+ )
1371
+
1372
+
1373
+ class UNet2DConditionModel(nn.Module):
1374
+ _supports_gradient_checkpointing = True
1375
+
1376
+ def __init__(
1377
+ self,
1378
+ sample_size: Optional[int] = None,
1379
+ attention_head_dim: Union[int, Tuple[int]] = 8,
1380
+ cross_attention_dim: int = 1280,
1381
+ use_linear_projection: bool = False,
1382
+ upcast_attention: bool = False,
1383
+ **kwargs,
1384
+ ):
1385
+ super().__init__()
1386
+ assert sample_size is not None, "sample_size must be specified"
1387
+ logger.info(
1388
+ f"UNet2DConditionModel: {sample_size}, {attention_head_dim}, {cross_attention_dim}, {use_linear_projection}, {upcast_attention}"
1389
+ )
1390
+
1391
+ # defined here so they can be referenced from outside this class
1392
+ self.in_channels = IN_CHANNELS
1393
+ self.out_channels = OUT_CHANNELS
1394
+
1395
+ self.sample_size = sample_size
1396
+ self.prepare_config(sample_size=sample_size)
1397
+
1398
+ # the way modules are held cannot be changed, because that would change the state_dict format
1399
+
1400
+ # input
1401
+ self.conv_in = nn.Conv2d(IN_CHANNELS, BLOCK_OUT_CHANNELS[0], kernel_size=3, padding=(1, 1))
1402
+
1403
+ # time
1404
+ self.time_proj = Timesteps(BLOCK_OUT_CHANNELS[0], TIME_EMBED_FLIP_SIN_TO_COS, TIME_EMBED_FREQ_SHIFT)
1405
+
1406
+ self.time_embedding = TimestepEmbedding(TIMESTEP_INPUT_DIM, TIME_EMBED_DIM)
1407
+
1408
+ self.down_blocks = nn.ModuleList([])
1409
+ self.mid_block = None
1410
+ self.up_blocks = nn.ModuleList([])
1411
+
1412
+ if isinstance(attention_head_dim, int):
1413
+ attention_head_dim = (attention_head_dim,) * 4
1414
+
1415
+ # down
1416
+ output_channel = BLOCK_OUT_CHANNELS[0]
1417
+ for i, down_block_type in enumerate(DOWN_BLOCK_TYPES):
1418
+ input_channel = output_channel
1419
+ output_channel = BLOCK_OUT_CHANNELS[i]
1420
+ is_final_block = i == len(BLOCK_OUT_CHANNELS) - 1
1421
+
1422
+ down_block = get_down_block(
1423
+ down_block_type,
1424
+ in_channels=input_channel,
1425
+ out_channels=output_channel,
1426
+ add_downsample=not is_final_block,
1427
+ attn_num_head_channels=attention_head_dim[i],
1428
+ cross_attention_dim=cross_attention_dim,
1429
+ use_linear_projection=use_linear_projection,
1430
+ upcast_attention=upcast_attention,
1431
+ )
1432
+ self.down_blocks.append(down_block)
1433
+
1434
+ # mid
1435
+ self.mid_block = UNetMidBlock2DCrossAttn(
1436
+ in_channels=BLOCK_OUT_CHANNELS[-1],
1437
+ attn_num_head_channels=attention_head_dim[-1],
1438
+ cross_attention_dim=cross_attention_dim,
1439
+ use_linear_projection=use_linear_projection,
1440
+ )
1441
+
1442
+ # count how many layers upsample the images
1443
+ self.num_upsamplers = 0
1444
+
1445
+ # up
1446
+ reversed_block_out_channels = list(reversed(BLOCK_OUT_CHANNELS))
1447
+ reversed_attention_head_dim = list(reversed(attention_head_dim))
1448
+ output_channel = reversed_block_out_channels[0]
1449
+ for i, up_block_type in enumerate(UP_BLOCK_TYPES):
1450
+ is_final_block = i == len(BLOCK_OUT_CHANNELS) - 1
1451
+
1452
+ prev_output_channel = output_channel
1453
+ output_channel = reversed_block_out_channels[i]
1454
+ input_channel = reversed_block_out_channels[min(i + 1, len(BLOCK_OUT_CHANNELS) - 1)]
1455
+
1456
+ # add upsample block for all BUT final layer
1457
+ if not is_final_block:
1458
+ add_upsample = True
1459
+ self.num_upsamplers += 1
1460
+ else:
1461
+ add_upsample = False
1462
+
1463
+ up_block = get_up_block(
1464
+ up_block_type,
1465
+ in_channels=input_channel,
1466
+ out_channels=output_channel,
1467
+ prev_output_channel=prev_output_channel,
1468
+ add_upsample=add_upsample,
1469
+ attn_num_head_channels=reversed_attention_head_dim[i],
1470
+ cross_attention_dim=cross_attention_dim,
1471
+ use_linear_projection=use_linear_projection,
1472
+ upcast_attention=upcast_attention,
1473
+ )
1474
+ self.up_blocks.append(up_block)
1475
+ prev_output_channel = output_channel
1476
+
1477
+ # out
1478
+ self.conv_norm_out = nn.GroupNorm(num_channels=BLOCK_OUT_CHANNELS[0], num_groups=NORM_GROUPS, eps=NORM_EPS)
1479
+ self.conv_act = nn.SiLU()
1480
+ self.conv_out = nn.Conv2d(BLOCK_OUT_CHANNELS[0], OUT_CHANNELS, kernel_size=3, padding=1)
1481
+
1482
+ # region diffusers compatibility
1483
+ def prepare_config(self, *args, **kwargs):
1484
+ self.config = SimpleNamespace(**kwargs)
1485
+
1486
+ @property
1487
+ def dtype(self) -> torch.dtype:
1488
+ # `torch.dtype`: The dtype of the module (assuming that all the module parameters have the same dtype).
1489
+ return get_parameter_dtype(self)
1490
+
1491
+ @property
1492
+ def device(self) -> torch.device:
1493
+ # `torch.device`: The device on which the module is (assuming that all the module parameters are on the same device).
1494
+ return get_parameter_device(self)
1495
+
1496
+ def set_attention_slice(self, slice_size):
1497
+ raise NotImplementedError("Attention slicing is not supported for this model.")
1498
+
1499
+ def is_gradient_checkpointing(self) -> bool:
1500
+ return any(hasattr(m, "gradient_checkpointing") and m.gradient_checkpointing for m in self.modules())
1501
+
1502
+ def enable_gradient_checkpointing(self):
1503
+ self.set_gradient_checkpointing(value=True)
1504
+
1505
+ def disable_gradient_checkpointing(self):
1506
+ self.set_gradient_checkpointing(value=False)
1507
+
1508
+ def set_use_memory_efficient_attention(self, xformers: bool, mem_eff: bool) -> None:
1509
+ modules = self.down_blocks + [self.mid_block] + self.up_blocks
1510
+ for module in modules:
1511
+ module.set_use_memory_efficient_attention(xformers, mem_eff)
1512
+
1513
+ def set_use_sdpa(self, sdpa: bool) -> None:
1514
+ modules = self.down_blocks + [self.mid_block] + self.up_blocks
1515
+ for module in modules:
1516
+ module.set_use_sdpa(sdpa)
1517
+
1518
+ def set_gradient_checkpointing(self, value=False):
1519
+ modules = self.down_blocks + [self.mid_block] + self.up_blocks
1520
+ for module in modules:
1521
+ logger.info(f"{module.__class__.__name__} {module.gradient_checkpointing} -> {value}")
1522
+ module.gradient_checkpointing = value
1523
+
1524
+ # endregion
1525
+
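+ # Usage sketch: the helpers above are how callers toggle attention backends and checkpointing, e.g.
+ #
+ #   unet.set_use_memory_efficient_attention(True, False)  # xformers on, mem_eff flash off
+ #   unet.set_use_sdpa(True)                               # or torch SDPA instead
+ #   unet.enable_gradient_checkpointing()
+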
1526
+ def forward(
1527
+ self,
1528
+ sample: torch.FloatTensor,
1529
+ timestep: Union[torch.Tensor, float, int],
1530
+ encoder_hidden_states: torch.Tensor,
1531
+ class_labels: Optional[torch.Tensor] = None,
1532
+ return_dict: bool = True,
1533
+ down_block_additional_residuals: Optional[Tuple[torch.Tensor]] = None,
1534
+ mid_block_additional_residual: Optional[torch.Tensor] = None,
1535
+ ) -> Union[Dict, Tuple]:
1536
+ r"""
1537
+ Args:
1538
+ sample (`torch.FloatTensor`): (batch, channel, height, width) noisy inputs tensor
1539
+ timestep (`torch.FloatTensor` or `float` or `int`): (batch) timesteps
1540
+ encoder_hidden_states (`torch.FloatTensor`): (batch, sequence_length, feature_dim) encoder hidden states
1541
+ return_dict (`bool`, *optional*, defaults to `True`):
1542
+ Whether or not to return a dict instead of a plain tuple.
1543
+
1544
+ Returns:
1545
+ `SampleOutput` or `tuple`:
1546
+ `SampleOutput` if `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor.
1547
+ """
1548
+ # By default samples have to be at least a multiple of the overall upsampling factor.
1549
+ # The overall upsampling factor is equal to 2 ** (# num of upsampling layers).
1550
+ # However, the upsampling interpolation output size can be forced to fit any upsampling size
1551
+ # on the fly if necessary.
1552
+ # By default the sample size must be a multiple of 2^(number of upsamplers), i.e. 64.
1553
+ # To support other sizes, the upsample size is overridden where necessary.
1554
+ # Image quality will probably degrade, so it is better to keep sizes divisible by 64.
1555
+ default_overall_up_factor = 2**self.num_upsamplers
1556
+
1557
+ # upsample size should be forwarded when sample is not a multiple of `default_overall_up_factor`
1558
+ # when the size is not divisible by 64, pass the target size to the upsamplers
1559
+ forward_upsample_size = False
1560
+ upsample_size = None
1561
+
1562
+ if any(s % default_overall_up_factor != 0 for s in sample.shape[-2:]):
1563
+ # logger.info("Forward upsample size to force interpolation output size.")
1564
+ forward_upsample_size = True
1565
+
1566
+ # 1. time
1567
+ timesteps = timestep
1568
+ timesteps = self.handle_unusual_timesteps(sample, timesteps)  # only does work for unusual inputs
1569
+
1570
+ t_emb = self.time_proj(timesteps)
1571
+
1572
+ # timesteps does not contain any weights and will always return f32 tensors
1573
+ # but time_embedding might actually be running in fp16. so we need to cast here.
1574
+ # there might be better ways to encapsulate this.
1575
+ # timestepsは重みを含まないので常にfloat32のテンソルを返す
1576
+ # しかしtime_embeddingはfp16で動いているかもしれないので、ここでキャストする必要がある
1577
+ # time_projでキャストしておけばいいんじゃね?
1578
+ t_emb = t_emb.to(dtype=self.dtype)
1579
+ emb = self.time_embedding(t_emb)
1580
+
1581
+ # 2. pre-process
1582
+ sample = self.conv_in(sample)
1583
+
1584
+ down_block_res_samples = (sample,)
1585
+ for downsample_block in self.down_blocks:
1586
+ # downblockはforwardで必ずencoder_hidden_statesを受け取るようにしても良さそうだけど、
1587
+ # まあこちらのほうがわかりやすいかもしれない
1588
+ if downsample_block.has_cross_attention:
1589
+ sample, res_samples = downsample_block(
1590
+ hidden_states=sample,
1591
+ temb=emb,
1592
+ encoder_hidden_states=encoder_hidden_states,
1593
+ )
1594
+ else:
1595
+ sample, res_samples = downsample_block(hidden_states=sample, temb=emb)
1596
+
1597
+ down_block_res_samples += res_samples
1598
+
1599
+ # skip connectionにControlNetの出力を追加する
1600
+ if down_block_additional_residuals is not None:
1601
+ down_block_res_samples = list(down_block_res_samples)
1602
+ for i in range(len(down_block_res_samples)):
1603
+ down_block_res_samples[i] += down_block_additional_residuals[i]
1604
+ down_block_res_samples = tuple(down_block_res_samples)
1605
+
1606
+ # 4. mid
1607
+ sample = self.mid_block(sample, emb, encoder_hidden_states=encoder_hidden_states)
1608
+
1609
+ # ControlNetの出力を追加する
1610
+ if mid_block_additional_residual is not None:
1611
+ sample += mid_block_additional_residual
1612
+
1613
+ # 5. up
1614
+ for i, upsample_block in enumerate(self.up_blocks):
1615
+ is_final_block = i == len(self.up_blocks) - 1
1616
+
1617
+ res_samples = down_block_res_samples[-len(upsample_block.resnets) :]
1618
+ down_block_res_samples = down_block_res_samples[: -len(upsample_block.resnets)] # skip connection
1619
+
1620
+ # if we have not reached the final block and need to forward the upsample size, we do it here
1621
+ # 前述のように最後のブロック以外ではupsample_sizeを伝える
1622
+ if not is_final_block and forward_upsample_size:
1623
+ upsample_size = down_block_res_samples[-1].shape[2:]
1624
+
1625
+ if upsample_block.has_cross_attention:
1626
+ sample = upsample_block(
1627
+ hidden_states=sample,
1628
+ temb=emb,
1629
+ res_hidden_states_tuple=res_samples,
1630
+ encoder_hidden_states=encoder_hidden_states,
1631
+ upsample_size=upsample_size,
1632
+ )
1633
+ else:
1634
+ sample = upsample_block(
1635
+ hidden_states=sample, temb=emb, res_hidden_states_tuple=res_samples, upsample_size=upsample_size
1636
+ )
1637
+
1638
+ # 6. post-process
1639
+ sample = self.conv_norm_out(sample)
1640
+ sample = self.conv_act(sample)
1641
+ sample = self.conv_out(sample)
1642
+
1643
+ if not return_dict:
1644
+ return (sample,)
1645
+
1646
+ return SampleOutput(sample=sample)
1647
+
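+    # NOTE (editorial sketch, not part of the upstream file): a minimal call, assuming a loaded
+    # model and pre-computed text conditioning:
+    #   noise_pred = unet(noisy_latents, timesteps, text_embeddings).sample
+    #   # or: (noise_pred,) = unet(noisy_latents, timesteps, text_embeddings, return_dict=False)
+    # `noisy_latents`, `timesteps` and `text_embeddings` are illustrative names.
+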
+    def handle_unusual_timesteps(self, sample, timesteps):
+        r"""
+        If timesteps is not a Tensor, convert it to one, and broadcast it to the batch size in a way that is compatible with ONNX / Core ML.
+        """
+        if not torch.is_tensor(timesteps):
+            # TODO: this requires sync between CPU and GPU. So try to pass timesteps as tensors if you can
+            # This would be a good case for the `match` statement (Python 3.10+)
+            is_mps = sample.device.type == "mps"
+            if isinstance(timesteps, float):
+                dtype = torch.float32 if is_mps else torch.float64
+            else:
+                dtype = torch.int32 if is_mps else torch.int64
+            timesteps = torch.tensor([timesteps], dtype=dtype, device=sample.device)
+        elif len(timesteps.shape) == 0:
+            timesteps = timesteps[None].to(sample.device)
+
+        # broadcast to batch dimension in a way that's compatible with ONNX/Core ML
+        timesteps = timesteps.expand(sample.shape[0])
+
+        return timesteps
+
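+    # NOTE (editorial sketch, not part of the upstream file): handle_unusual_timesteps turns a plain
+    # Python scalar, e.g. unet.handle_unusual_timesteps(sample, 981), into a tensor of shape (batch,)
+    # on sample.device, e.g. tensor([981, 981]) for a batch of two.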
+
+class InferUNet2DConditionModel:
+    def __init__(self, original_unet: UNet2DConditionModel):
+        self.delegate = original_unet
+
+        # override the original model's forward method: because forward is not called via `__call__`,
+        # overriding `__call__` alone is not enough (nn.Module.forward receives special handling)
+        self.delegate.forward = self.forward
+
+        # override the original model's up blocks' forward methods
+        for up_block in self.delegate.up_blocks:
+            if up_block.__class__.__name__ == "UpBlock2D":
+
+                def resnet_wrapper(func, block):
+                    def forward(*args, **kwargs):
+                        return func(block, *args, **kwargs)
+
+                    return forward
+
+                up_block.forward = resnet_wrapper(self.up_block_forward, up_block)
+
+            elif up_block.__class__.__name__ == "CrossAttnUpBlock2D":
+
+                def cross_attn_up_wrapper(func, block):
+                    def forward(*args, **kwargs):
+                        return func(block, *args, **kwargs)
+
+                    return forward
+
+                up_block.forward = cross_attn_up_wrapper(self.cross_attn_up_block_forward, up_block)
+
+        # Deep Shrink
+        self.ds_depth_1 = None
+        self.ds_depth_2 = None
+        self.ds_timesteps_1 = None
+        self.ds_timesteps_2 = None
+        self.ds_ratio = None
+
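+    # NOTE (editorial sketch, not part of the upstream file): the wrapper delegates everything to the
+    # wrapped UNet and only swaps in its own forward, e.g.
+    #   unet = InferUNet2DConditionModel(original_unet)    # original_unet keeps its weights and config
+    #   unet.set_deep_shrink(ds_depth_1=3)                  # optional; see set_deep_shrink below
+    #   out = unet(latents, t, text_embeddings).sample      # illustrative argument names
+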
+    # call original model's methods
+    def __getattr__(self, name):
+        return getattr(self.delegate, name)
+
+    def __call__(self, *args, **kwargs):
+        return self.delegate(*args, **kwargs)
+
+    def set_deep_shrink(self, ds_depth_1, ds_timesteps_1=650, ds_depth_2=None, ds_timesteps_2=None, ds_ratio=0.5):
+        if ds_depth_1 is None:
+            logger.info("Deep Shrink is disabled.")
+            self.ds_depth_1 = None
+            self.ds_timesteps_1 = None
+            self.ds_depth_2 = None
+            self.ds_timesteps_2 = None
+            self.ds_ratio = None
+        else:
+            logger.info(
+                f"Deep Shrink is enabled: [depth={ds_depth_1}/{ds_depth_2}, timesteps={ds_timesteps_1}/{ds_timesteps_2}, ratio={ds_ratio}]"
+            )
+            self.ds_depth_1 = ds_depth_1
+            self.ds_timesteps_1 = ds_timesteps_1
+            self.ds_depth_2 = ds_depth_2 if ds_depth_2 is not None else -1
+            self.ds_timesteps_2 = ds_timesteps_2 if ds_timesteps_2 is not None else 1000
+            self.ds_ratio = ds_ratio
+
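+    # NOTE (editorial sketch, not part of the upstream file): with the defaults above,
+    # set_deep_shrink(ds_depth_1=3) downsamples the hidden states by ds_ratio=0.5 at down-block
+    # depth 3 while timestep >= 650, and runs the U-Net unmodified for later (smaller) timesteps.
+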
+    def up_block_forward(self, _self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None):
+        for resnet in _self.resnets:
+            # pop res hidden states
+            res_hidden_states = res_hidden_states_tuple[-1]
+            res_hidden_states_tuple = res_hidden_states_tuple[:-1]
+
+            # Deep Shrink
+            if res_hidden_states.shape[-2:] != hidden_states.shape[-2:]:
+                hidden_states = resize_like(hidden_states, res_hidden_states)
+
+            hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
+            hidden_states = resnet(hidden_states, temb)
+
+        if _self.upsamplers is not None:
+            for upsampler in _self.upsamplers:
+                hidden_states = upsampler(hidden_states, upsample_size)
+
+        return hidden_states
+
+    def cross_attn_up_block_forward(
+        self,
+        _self,
+        hidden_states,
+        res_hidden_states_tuple,
+        temb=None,
+        encoder_hidden_states=None,
+        upsample_size=None,
+    ):
+        for resnet, attn in zip(_self.resnets, _self.attentions):
+            # pop res hidden states
+            res_hidden_states = res_hidden_states_tuple[-1]
+            res_hidden_states_tuple = res_hidden_states_tuple[:-1]
+
+            # Deep Shrink
+            if res_hidden_states.shape[-2:] != hidden_states.shape[-2:]:
+                hidden_states = resize_like(hidden_states, res_hidden_states)
+
+            hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
+            hidden_states = resnet(hidden_states, temb)
+            hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample
+
+        if _self.upsamplers is not None:
+            for upsampler in _self.upsamplers:
+                hidden_states = upsampler(hidden_states, upsample_size)
+
+        return hidden_states
+
+    def forward(
+        self,
+        sample: torch.FloatTensor,
+        timestep: Union[torch.Tensor, float, int],
+        encoder_hidden_states: torch.Tensor,
+        class_labels: Optional[torch.Tensor] = None,
+        return_dict: bool = True,
+        down_block_additional_residuals: Optional[Tuple[torch.Tensor]] = None,
+        mid_block_additional_residual: Optional[torch.Tensor] = None,
+    ) -> Union[Dict, Tuple]:
+        r"""
+        The current implementation is a copy of `UNet2DConditionModel.forward()` with Deep Shrink added.
+
+        Args:
+            sample (`torch.FloatTensor`): (batch, channel, height, width) noisy inputs tensor
+            timestep (`torch.FloatTensor` or `float` or `int`): (batch) timesteps
+            encoder_hidden_states (`torch.FloatTensor`): (batch, sequence_length, feature_dim) encoder hidden states
+            return_dict (`bool`, *optional*, defaults to `True`):
+                Whether or not to return a dict instead of a plain tuple.
+
+        Returns:
+            `SampleOutput` or `tuple`:
+            `SampleOutput` if `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor.
+        """
+
+        _self = self.delegate
+
+        # By default, samples have to be at least a multiple of the overall upsampling factor.
+        # The overall upsampling factor is equal to 2 ** (number of upsampling layers).
+        # However, the upsampling interpolation output size can be forced to fit any upsampling size
+        # on the fly if necessary.
+        # By default the sample must be a multiple of 2^(number of upsamplers), i.e. a multiple of 64.
+        # To support other sizes as well, the upsample size is adjusted when necessary.
+        # Image quality will probably suffer, so it is best to keep sizes divisible by 64.
+        default_overall_up_factor = 2**_self.num_upsamplers
+
+        # upsample size should be forwarded when sample is not a multiple of `default_overall_up_factor`
+        # when the size is not divisible by 64, pass the size to the upsamplers
+        forward_upsample_size = False
+        upsample_size = None
+
+        if any(s % default_overall_up_factor != 0 for s in sample.shape[-2:]):
+            # logger.info("Forward upsample size to force interpolation output size.")
+            forward_upsample_size = True
+
+        # 1. time
+        timesteps = timestep
+        timesteps = _self.handle_unusual_timesteps(sample, timesteps)  # only does real work for unusual inputs
+
+        t_emb = _self.time_proj(timesteps)
+
+        # timesteps does not contain any weights and will always return f32 tensors
+        # but time_embedding might actually be running in fp16. so we need to cast here.
+        # there might be better ways to encapsulate this.
+        # timesteps contains no weights, so time_proj always returns float32 tensors,
+        # but time_embedding may be running in fp16, so we need to cast here.
+        # casting inside time_proj might be the cleaner place to do it.
+        t_emb = t_emb.to(dtype=_self.dtype)
+        emb = _self.time_embedding(t_emb)
+
+        # 2. pre-process
+        sample = _self.conv_in(sample)
+
+        # 3. down
+        down_block_res_samples = (sample,)
+        for depth, downsample_block in enumerate(_self.down_blocks):
+            # Deep Shrink
+            if self.ds_depth_1 is not None:
+                if (depth == self.ds_depth_1 and timesteps[0] >= self.ds_timesteps_1) or (
+                    self.ds_depth_2 is not None
+                    and depth == self.ds_depth_2
+                    and timesteps[0] < self.ds_timesteps_1
+                    and timesteps[0] >= self.ds_timesteps_2
+                ):
+                    org_dtype = sample.dtype
+                    if org_dtype == torch.bfloat16:
+                        sample = sample.to(torch.float32)
+                    sample = F.interpolate(sample, scale_factor=self.ds_ratio, mode="bicubic", align_corners=False).to(org_dtype)
+
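+            # Editorial note (illustrative, not from the upstream file): when the condition above fires,
+            # the hidden states are shrunk to ds_ratio of their spatial size here (bicubic; the temporary
+            # fp32 cast is presumably because bicubic interpolation does not support bfloat16 everywhere).
+            # The Deep Shrink branches in up_block_forward / cross_attn_up_block_forward later resize the
+            # decoder features back with resize_like so they match the skip connections again.
+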
+            # the down blocks could be made to always take encoder_hidden_states in forward,
+            # but this way may be easier to follow
+            if downsample_block.has_cross_attention:
+                sample, res_samples = downsample_block(
+                    hidden_states=sample,
+                    temb=emb,
+                    encoder_hidden_states=encoder_hidden_states,
+                )
+            else:
+                sample, res_samples = downsample_block(hidden_states=sample, temb=emb)
+
+            down_block_res_samples += res_samples
+
+        # add the ControlNet outputs to the skip connections
+        if down_block_additional_residuals is not None:
+            down_block_res_samples = list(down_block_res_samples)
+            for i in range(len(down_block_res_samples)):
+                down_block_res_samples[i] += down_block_additional_residuals[i]
+            down_block_res_samples = tuple(down_block_res_samples)
+
+        # 4. mid
+        sample = _self.mid_block(sample, emb, encoder_hidden_states=encoder_hidden_states)
+
+        # add the ControlNet output
+        if mid_block_additional_residual is not None:
+            sample += mid_block_additional_residual
+
+        # 5. up
+        for i, upsample_block in enumerate(_self.up_blocks):
+            is_final_block = i == len(_self.up_blocks) - 1
+
+            res_samples = down_block_res_samples[-len(upsample_block.resnets) :]
+            down_block_res_samples = down_block_res_samples[: -len(upsample_block.resnets)]  # skip connection
+
+            # if we have not reached the final block and need to forward the upsample size, we do it here
+            # as noted above, pass upsample_size to every block except the final one
+            if not is_final_block and forward_upsample_size:
+                upsample_size = down_block_res_samples[-1].shape[2:]
+
+            if upsample_block.has_cross_attention:
+                sample = upsample_block(
+                    hidden_states=sample,
+                    temb=emb,
+                    res_hidden_states_tuple=res_samples,
+                    encoder_hidden_states=encoder_hidden_states,
+                    upsample_size=upsample_size,
+                )
+            else:
+                sample = upsample_block(
+                    hidden_states=sample, temb=emb, res_hidden_states_tuple=res_samples, upsample_size=upsample_size
+                )
+
+        # 6. post-process
+        sample = _self.conv_norm_out(sample)
+        sample = _self.conv_act(sample)
+        sample = _self.conv_out(sample)
+
+        if not return_dict:
+            return (sample,)
+
+        return SampleOutput(sample=sample)