ACCC1380 committed
Commit 71ae55a
Parent(s): 6be1cbd

Upload lora-scripts/sd-scripts/library/sdxl_lpw_stable_diffusion.py with huggingface_hub

lora-scripts/sd-scripts/library/sdxl_lpw_stable_diffusion.py ADDED
@@ -0,0 +1,1347 @@
1
+ # copy from https://github.com/huggingface/diffusers/blob/main/examples/community/lpw_stable_diffusion.py
2
+ # and modify to support SD2.x
3
+
4
+ import inspect
5
+ import re
6
+ from typing import Callable, List, Optional, Union
7
+
8
+ import numpy as np
9
+ import PIL.Image
10
+ import torch
11
+ from packaging import version
12
+ from tqdm import tqdm
13
+ from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
14
+
15
+ from diffusers import SchedulerMixin, StableDiffusionPipeline
16
+ from diffusers.models import AutoencoderKL, UNet2DConditionModel
17
+ from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput, StableDiffusionSafetyChecker
18
+ from diffusers.utils import logging
19
+ from PIL import Image
20
+
21
+ from library import sdxl_model_util, sdxl_train_util, train_util
22
+
23
+
24
+ try:
25
+ from diffusers.utils import PIL_INTERPOLATION
26
+ except ImportError:
27
+ if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"):
28
+ PIL_INTERPOLATION = {
29
+ "linear": PIL.Image.Resampling.BILINEAR,
30
+ "bilinear": PIL.Image.Resampling.BILINEAR,
31
+ "bicubic": PIL.Image.Resampling.BICUBIC,
32
+ "lanczos": PIL.Image.Resampling.LANCZOS,
33
+ "nearest": PIL.Image.Resampling.NEAREST,
34
+ }
35
+ else:
36
+ PIL_INTERPOLATION = {
37
+ "linear": PIL.Image.LINEAR,
38
+ "bilinear": PIL.Image.BILINEAR,
39
+ "bicubic": PIL.Image.BICUBIC,
40
+ "lanczos": PIL.Image.LANCZOS,
41
+ "nearest": PIL.Image.NEAREST,
42
+ }
43
+ # ------------------------------------------------------------------------------
44
+
45
+ logger = logging.get_logger(__name__) # pylint: disable=invalid-name
46
+
47
+ re_attention = re.compile(
48
+ r"""
49
+ \\\(|
50
+ \\\)|
51
+ \\\[|
52
+ \\]|
53
+ \\\\|
54
+ \\|
55
+ \(|
56
+ \[|
57
+ :([+-]?[.\d]+)\)|
58
+ \)|
59
+ ]|
60
+ [^\\()\[\]:]+|
61
+ :
62
+ """,
63
+ re.X,
64
+ )
65
+
66
+
67
+ def parse_prompt_attention(text):
68
+ """
69
+ Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
70
+ Accepted tokens are:
71
+ (abc) - increases attention to abc by a multiplier of 1.1
72
+ (abc:3.12) - increases attention to abc by a multiplier of 3.12
73
+ [abc] - decreases attention to abc by a multiplier of 1.1
74
+ \( - literal character '('
75
+ \[ - literal character '['
76
+ \) - literal character ')'
77
+ \] - literal character ']'
78
+ \\ - literal character '\'
79
+ anything else - just text
80
+ >>> parse_prompt_attention('normal text')
81
+ [['normal text', 1.0]]
82
+ >>> parse_prompt_attention('an (important) word')
83
+ [['an ', 1.0], ['important', 1.1], [' word', 1.0]]
84
+ >>> parse_prompt_attention('(unbalanced')
85
+ [['unbalanced', 1.1]]
86
+ >>> parse_prompt_attention('\(literal\]')
87
+ [['(literal]', 1.0]]
88
+ >>> parse_prompt_attention('(unnecessary)(parens)')
89
+ [['unnecessaryparens', 1.1]]
90
+ >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).')
91
+ [['a ', 1.0],
92
+ ['house', 1.5730000000000004],
93
+ [' ', 1.1],
94
+ ['on', 1.0],
95
+ [' a ', 1.1],
96
+ ['hill', 0.55],
97
+ [', sun, ', 1.1],
98
+ ['sky', 1.4641000000000006],
99
+ ['.', 1.1]]
100
+ """
101
+
102
+ res = []
103
+ round_brackets = []
104
+ square_brackets = []
105
+
106
+ round_bracket_multiplier = 1.1
107
+ square_bracket_multiplier = 1 / 1.1
108
+
109
+ def multiply_range(start_position, multiplier):
110
+ for p in range(start_position, len(res)):
111
+ res[p][1] *= multiplier
112
+
113
+ for m in re_attention.finditer(text):
114
+ text = m.group(0)
115
+ weight = m.group(1)
116
+
117
+ if text.startswith("\\"):
118
+ res.append([text[1:], 1.0])
119
+ elif text == "(":
120
+ round_brackets.append(len(res))
121
+ elif text == "[":
122
+ square_brackets.append(len(res))
123
+ elif weight is not None and len(round_brackets) > 0:
124
+ multiply_range(round_brackets.pop(), float(weight))
125
+ elif text == ")" and len(round_brackets) > 0:
126
+ multiply_range(round_brackets.pop(), round_bracket_multiplier)
127
+ elif text == "]" and len(square_brackets) > 0:
128
+ multiply_range(square_brackets.pop(), square_bracket_multiplier)
129
+ else:
130
+ res.append([text, 1.0])
131
+
132
+ for pos in round_brackets:
133
+ multiply_range(pos, round_bracket_multiplier)
134
+
135
+ for pos in square_brackets:
136
+ multiply_range(pos, square_bracket_multiplier)
137
+
138
+ if len(res) == 0:
139
+ res = [["", 1.0]]
140
+
141
+ # merge runs of identical weights
142
+ i = 0
143
+ while i + 1 < len(res):
144
+ if res[i][1] == res[i + 1][1]:
145
+ res[i][0] += res[i + 1][0]
146
+ res.pop(i + 1)
147
+ else:
148
+ i += 1
149
+
150
+ return res
151
+
152
+
153
+ def get_prompts_with_weights(pipe: StableDiffusionPipeline, prompt: List[str], max_length: int):
154
+ r"""
155
+ Tokenize a list of prompts and return its tokens with weights of each token.
156
+
157
+ No padding, starting or ending token is included.
158
+ """
159
+ tokens = []
160
+ weights = []
161
+ truncated = False
162
+ for text in prompt:
163
+ texts_and_weights = parse_prompt_attention(text)
164
+ text_token = []
165
+ text_weight = []
166
+ for word, weight in texts_and_weights:
167
+ # tokenize and discard the starting and the ending token
168
+ token = pipe.tokenizer(word).input_ids[1:-1]
169
+ text_token += token
170
+ # copy the weight by length of token
171
+ text_weight += [weight] * len(token)
172
+ # stop if the text is too long (longer than truncation limit)
173
+ if len(text_token) > max_length:
174
+ truncated = True
175
+ break
176
+ # truncate
177
+ if len(text_token) > max_length:
178
+ truncated = True
179
+ text_token = text_token[:max_length]
180
+ text_weight = text_weight[:max_length]
181
+ tokens.append(text_token)
182
+ weights.append(text_weight)
183
+ if truncated:
184
+ logger.warning("Prompt was truncated. Try to shorten the prompt or increase max_embeddings_multiples")
185
+ return tokens, weights
186
+
187
+
188
+ def pad_tokens_and_weights(tokens, weights, max_length, bos, eos, pad, no_boseos_middle=True, chunk_length=77):
189
+ r"""
190
+ Pad the tokens (with starting and ending tokens) and weights (with 1.0) to max_length.
191
+ """
192
+ max_embeddings_multiples = (max_length - 2) // (chunk_length - 2)
193
+ weights_length = max_length if no_boseos_middle else max_embeddings_multiples * chunk_length
194
+ for i in range(len(tokens)):
195
+ tokens[i] = [bos] + tokens[i] + [eos] + [pad] * (max_length - 2 - len(tokens[i]))
196
+ if no_boseos_middle:
197
+ weights[i] = [1.0] + weights[i] + [1.0] * (max_length - 1 - len(weights[i]))
198
+ else:
199
+ w = []
200
+ if len(weights[i]) == 0:
201
+ w = [1.0] * weights_length
202
+ else:
203
+ for j in range(max_embeddings_multiples):
204
+ w.append(1.0) # weight for starting token in this chunk
205
+ w += weights[i][j * (chunk_length - 2) : min(len(weights[i]), (j + 1) * (chunk_length - 2))]
206
+ w.append(1.0) # weight for ending token in this chunk
207
+ w += [1.0] * (weights_length - len(w))
208
+ weights[i] = w[:]
209
+
210
+ return tokens, weights
211
+
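# Illustrative padding layout, assuming chunk_length=77 and max_embeddings_multiples=2,
# so max_length = (77 - 2) * 2 + 2 = 152. A prompt that tokenizes to 80 content tokens
# is padded by pad_tokens_and_weights to
#   [BOS] t1 .. t80 [EOS] [PAD] * 70                        (152 ids total)
# and, with no_boseos_middle=True, its weights become
#   [1.0] + w1 .. w80 + [1.0] * 71                          (152 weights total)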
212
+
213
+ def get_hidden_states(text_encoder, input_ids, is_sdxl_text_encoder2: bool, eos_token_id, device):
214
+ if not is_sdxl_text_encoder2:
215
+ # text_encoder1: same as SD1/2
216
+ enc_out = text_encoder(input_ids.to(text_encoder.device), output_hidden_states=True, return_dict=True)
217
+ hidden_states = enc_out["hidden_states"][11]
218
+ pool = None
219
+ else:
220
+ # text_encoder2
221
+ enc_out = text_encoder(input_ids.to(text_encoder.device), output_hidden_states=True, return_dict=True)
222
+ hidden_states = enc_out["hidden_states"][-2] # penultimate layer
223
+ # pool = enc_out["text_embeds"]
224
+ pool = train_util.pool_workaround(text_encoder, enc_out["last_hidden_state"], input_ids, eos_token_id)
225
+ hidden_states = hidden_states.to(device)
226
+ if pool is not None:
227
+ pool = pool.to(device)
228
+ return hidden_states, pool
229
+
230
+
231
+ def get_unweighted_text_embeddings(
232
+ pipe: StableDiffusionPipeline,
233
+ text_input: torch.Tensor,
234
+ chunk_length: int,
235
+ clip_skip: int,
236
+ eos: int,
237
+ pad: int,
238
+ is_sdxl_text_encoder2: bool,
239
+ no_boseos_middle: Optional[bool] = True,
240
+ ):
241
+ """
242
+ When the length of the tokens spans multiple chunks of the text encoder's capacity, the input
243
+ should be split into chunks and each chunk sent to the text encoder individually.
244
+ """
245
+ max_embeddings_multiples = (text_input.shape[1] - 2) // (chunk_length - 2)
246
+ text_pool = None
247
+ if max_embeddings_multiples > 1:
248
+ text_embeddings = []
249
+ for i in range(max_embeddings_multiples):
250
+ # extract the i-th chunk
251
+ text_input_chunk = text_input[:, i * (chunk_length - 2) : (i + 1) * (chunk_length - 2) + 2].clone()
252
+
253
+ # cover the head and the tail by the starting and the ending tokens
254
+ text_input_chunk[:, 0] = text_input[0, 0]
255
+ if pad == eos: # v1
256
+ text_input_chunk[:, -1] = text_input[0, -1]
257
+ else: # v2
258
+ for j in range(len(text_input_chunk)):
259
+ if text_input_chunk[j, -1] != eos and text_input_chunk[j, -1] != pad: # the chunk ends with a regular token
260
+ text_input_chunk[j, -1] = eos
261
+ if text_input_chunk[j, 1] == pad: # only BOS, the rest is PAD
262
+ text_input_chunk[j, 1] = eos
263
+
264
+ text_embedding, current_text_pool = get_hidden_states(
265
+ pipe.text_encoder, text_input_chunk, is_sdxl_text_encoder2, eos, pipe.device
266
+ )
267
+ if text_pool is None:
268
+ text_pool = current_text_pool
269
+
270
+ if no_boseos_middle:
271
+ if i == 0:
272
+ # discard the ending token
273
+ text_embedding = text_embedding[:, :-1]
274
+ elif i == max_embeddings_multiples - 1:
275
+ # discard the starting token
276
+ text_embedding = text_embedding[:, 1:]
277
+ else:
278
+ # discard both starting and ending tokens
279
+ text_embedding = text_embedding[:, 1:-1]
280
+
281
+ text_embeddings.append(text_embedding)
282
+ text_embeddings = torch.concat(text_embeddings, axis=1)
283
+ else:
284
+ text_embeddings, text_pool = get_hidden_states(pipe.text_encoder, text_input, is_sdxl_text_encoder2, eos, pipe.device)
285
+ return text_embeddings, text_pool
286
+
287
+
288
+ def get_weighted_text_embeddings(
289
+ pipe, # : SdxlStableDiffusionLongPromptWeightingPipeline,
290
+ prompt: Union[str, List[str]],
291
+ uncond_prompt: Optional[Union[str, List[str]]] = None,
292
+ max_embeddings_multiples: Optional[int] = 3,
293
+ no_boseos_middle: Optional[bool] = False,
294
+ skip_parsing: Optional[bool] = False,
295
+ skip_weighting: Optional[bool] = False,
296
+ clip_skip=None,
297
+ is_sdxl_text_encoder2=False,
298
+ ):
299
+ r"""
300
+ Prompts can be assigned local weights using brackets. For example,
301
+ prompt 'A (very beautiful) masterpiece' highlights the words 'very beautiful',
302
+ and the embedding tokens corresponding to the words get multiplied by a constant, 1.1.
303
+
304
+ Also, to regularize the embedding, the weighted embedding is scaled to preserve the original mean.
305
+
306
+ Args:
307
+ pipe (`StableDiffusionPipeline`):
308
+ Pipe to provide access to the tokenizer and the text encoder.
309
+ prompt (`str` or `List[str]`):
310
+ The prompt or prompts to guide the image generation.
311
+ uncond_prompt (`str` or `List[str]`):
312
+ The unconditional prompt or prompts to guide the image generation. If an unconditional prompt
313
+ is provided, the embeddings of prompt and uncond_prompt are concatenated.
314
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
315
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
316
+ no_boseos_middle (`bool`, *optional*, defaults to `False`):
317
+ If the length of the text tokens is a multiple of the capacity of the text encoder, whether to keep the starting and
318
+ ending tokens of each chunk in the middle.
319
+ skip_parsing (`bool`, *optional*, defaults to `False`):
320
+ Skip the parsing of brackets.
321
+ skip_weighting (`bool`, *optional*, defaults to `False`):
322
+ Skip the weighting. When the parsing is skipped, it is forced to True.
323
+ """
324
+ max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
325
+ if isinstance(prompt, str):
326
+ prompt = [prompt]
327
+
328
+ if not skip_parsing:
329
+ prompt_tokens, prompt_weights = get_prompts_with_weights(pipe, prompt, max_length - 2)
330
+ if uncond_prompt is not None:
331
+ if isinstance(uncond_prompt, str):
332
+ uncond_prompt = [uncond_prompt]
333
+ uncond_tokens, uncond_weights = get_prompts_with_weights(pipe, uncond_prompt, max_length - 2)
334
+ else:
335
+ prompt_tokens = [token[1:-1] for token in pipe.tokenizer(prompt, max_length=max_length, truncation=True).input_ids]
336
+ prompt_weights = [[1.0] * len(token) for token in prompt_tokens]
337
+ if uncond_prompt is not None:
338
+ if isinstance(uncond_prompt, str):
339
+ uncond_prompt = [uncond_prompt]
340
+ uncond_tokens = [
341
+ token[1:-1] for token in pipe.tokenizer(uncond_prompt, max_length=max_length, truncation=True).input_ids
342
+ ]
343
+ uncond_weights = [[1.0] * len(token) for token in uncond_tokens]
344
+
345
+ # round up the longest length of tokens to a multiple of (model_max_length - 2)
346
+ max_length = max([len(token) for token in prompt_tokens])
347
+ if uncond_prompt is not None:
348
+ max_length = max(max_length, max([len(token) for token in uncond_tokens]))
349
+
350
+ max_embeddings_multiples = min(
351
+ max_embeddings_multiples,
352
+ (max_length - 1) // (pipe.tokenizer.model_max_length - 2) + 1,
353
+ )
354
+ max_embeddings_multiples = max(1, max_embeddings_multiples)
355
+ max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2
356
+
357
+ # pad the length of tokens and weights
358
+ bos = pipe.tokenizer.bos_token_id
359
+ eos = pipe.tokenizer.eos_token_id
360
+ pad = pipe.tokenizer.pad_token_id
361
+ prompt_tokens, prompt_weights = pad_tokens_and_weights(
362
+ prompt_tokens,
363
+ prompt_weights,
364
+ max_length,
365
+ bos,
366
+ eos,
367
+ pad,
368
+ no_boseos_middle=no_boseos_middle,
369
+ chunk_length=pipe.tokenizer.model_max_length,
370
+ )
371
+ prompt_tokens = torch.tensor(prompt_tokens, dtype=torch.long, device=pipe.device)
372
+ if uncond_prompt is not None:
373
+ uncond_tokens, uncond_weights = pad_tokens_and_weights(
374
+ uncond_tokens,
375
+ uncond_weights,
376
+ max_length,
377
+ bos,
378
+ eos,
379
+ pad,
380
+ no_boseos_middle=no_boseos_middle,
381
+ chunk_length=pipe.tokenizer.model_max_length,
382
+ )
383
+ uncond_tokens = torch.tensor(uncond_tokens, dtype=torch.long, device=pipe.device)
384
+
385
+ # get the embeddings
386
+ text_embeddings, text_pool = get_unweighted_text_embeddings(
387
+ pipe,
388
+ prompt_tokens,
389
+ pipe.tokenizer.model_max_length,
390
+ clip_skip,
391
+ eos,
392
+ pad,
393
+ is_sdxl_text_encoder2,
394
+ no_boseos_middle=no_boseos_middle,
395
+ )
396
+ prompt_weights = torch.tensor(prompt_weights, dtype=text_embeddings.dtype, device=pipe.device)
397
+
398
+ if uncond_prompt is not None:
399
+ uncond_embeddings, uncond_pool = get_unweighted_text_embeddings(
400
+ pipe,
401
+ uncond_tokens,
402
+ pipe.tokenizer.model_max_length,
403
+ clip_skip,
404
+ eos,
405
+ pad,
406
+ is_sdxl_text_encoder2,
407
+ no_boseos_middle=no_boseos_middle,
408
+ )
409
+ uncond_weights = torch.tensor(uncond_weights, dtype=uncond_embeddings.dtype, device=pipe.device)
410
+
411
+ # assign weights to the prompts and rescale so that the mean of the embeddings is preserved
412
+ # TODO: should we normalize per chunk or as a whole (current implementation)?
413
+ if (not skip_parsing) and (not skip_weighting):
414
+ previous_mean = text_embeddings.float().mean(axis=[-2, -1]).to(text_embeddings.dtype)
415
+ text_embeddings *= prompt_weights.unsqueeze(-1)
416
+ current_mean = text_embeddings.float().mean(axis=[-2, -1]).to(text_embeddings.dtype)
417
+ text_embeddings *= (previous_mean / current_mean).unsqueeze(-1).unsqueeze(-1)
418
+ if uncond_prompt is not None:
419
+ previous_mean = uncond_embeddings.float().mean(axis=[-2, -1]).to(uncond_embeddings.dtype)
420
+ uncond_embeddings *= uncond_weights.unsqueeze(-1)
421
+ current_mean = uncond_embeddings.float().mean(axis=[-2, -1]).to(uncond_embeddings.dtype)
422
+ uncond_embeddings *= (previous_mean / current_mean).unsqueeze(-1).unsqueeze(-1)
423
+
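# The rescaling above can be read as: for an embedding E and per-token weights w,
#   E' = (E * w) * mean(E) / mean(E * w)
# where the means are taken over the token and feature axes per batch element, so the
# relative emphasis between tokens changes while the overall mean of the conditioning
# tensor is preserved.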
424
+ if uncond_prompt is not None:
425
+ return text_embeddings, text_pool, uncond_embeddings, uncond_pool
426
+ return text_embeddings, text_pool, None, None
427
+
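# Minimal usage sketch, assuming `pipe` is an already constructed
# SdxlStableDiffusionLongPromptWeightingPipeline (hypothetical variable name) and the
# usual 77-token CLIP tokenizer, so max_embeddings_multiples=3 allows up to 225
# weighted content tokens:
#   cond, cond_pool, uncond, uncond_pool = get_weighted_text_embeddings(
#       pipe,
#       prompt="a (very beautiful:1.3) landscape, [blurry]",
#       uncond_prompt="low quality",
#       max_embeddings_multiples=3,
#       is_sdxl_text_encoder2=False,  # for text encoder 1 the pooled outputs are None
#   )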
428
+
429
+ def preprocess_image(image):
430
+ w, h = image.size
431
+ w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
432
+ image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"])
433
+ image = np.array(image).astype(np.float32) / 255.0
434
+ image = image[None].transpose(0, 3, 1, 2)
435
+ image = torch.from_numpy(image)
436
+ return 2.0 * image - 1.0
437
+
438
+
439
+ def preprocess_mask(mask, scale_factor=8):
440
+ mask = mask.convert("L")
441
+ w, h = mask.size
442
+ w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
443
+ mask = mask.resize((w // scale_factor, h // scale_factor), resample=PIL_INTERPOLATION["nearest"])
444
+ mask = np.array(mask).astype(np.float32) / 255.0
445
+ mask = np.tile(mask, (4, 1, 1))
446
+ mask = mask[None].transpose(0, 1, 2, 3) # identity transpose (no-op), kept for parity with the original implementation
447
+ mask = 1 - mask # repaint white, keep black
448
+ mask = torch.from_numpy(mask)
449
+ return mask
450
+
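# Shape sketch, assuming a 768x512 (width x height) RGB init image and scale_factor=8:
#   preprocess_image -> float tensor of shape (1, 3, 512, 768) with values in [-1, 1]
#   preprocess_mask  -> float tensor of shape (1, 4, 64, 96), where 0 marks regions to
#                       repaint (white in the input mask) and 1 marks regions to keep
#                       (black), matching the 4-channel latent space.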
451
+
452
+ def prepare_controlnet_image(
453
+ image: PIL.Image.Image,
454
+ width: int,
455
+ height: int,
456
+ batch_size: int,
457
+ num_images_per_prompt: int,
458
+ device: torch.device,
459
+ dtype: torch.dtype,
460
+ do_classifier_free_guidance: bool = False,
461
+ guess_mode: bool = False,
462
+ ):
463
+ if not isinstance(image, torch.Tensor):
464
+ if isinstance(image, PIL.Image.Image):
465
+ image = [image]
466
+
467
+ if isinstance(image[0], PIL.Image.Image):
468
+ images = []
469
+
470
+ for image_ in image:
471
+ image_ = image_.convert("RGB")
472
+ image_ = image_.resize((width, height), resample=PIL_INTERPOLATION["lanczos"])
473
+ image_ = np.array(image_)
474
+ image_ = image_[None, :]
475
+ images.append(image_)
476
+
477
+ image = images
478
+
479
+ image = np.concatenate(image, axis=0)
480
+ image = np.array(image).astype(np.float32) / 255.0
481
+ image = image.transpose(0, 3, 1, 2)
482
+ image = torch.from_numpy(image)
483
+ elif isinstance(image[0], torch.Tensor):
484
+ image = torch.cat(image, dim=0)
485
+
486
+ image_batch_size = image.shape[0]
487
+
488
+ if image_batch_size == 1:
489
+ repeat_by = batch_size
490
+ else:
491
+ # image batch size is the same as prompt batch size
492
+ repeat_by = num_images_per_prompt
493
+
494
+ image = image.repeat_interleave(repeat_by, dim=0)
495
+
496
+ image = image.to(device=device, dtype=dtype)
497
+
498
+ if do_classifier_free_guidance and not guess_mode:
499
+ image = torch.cat([image] * 2)
500
+
501
+ return image
502
+
503
+
504
+ class SdxlStableDiffusionLongPromptWeightingPipeline:
505
+ r"""
506
+ Pipeline for text-to-image generation using Stable Diffusion without a token length limit, with support for parsing
507
+ weights in the prompt.
508
+
509
+ This pipeline mirrors the [`DiffusionPipeline`] interface but is implemented as a standalone class; see the diffusers
510
+ documentation for the generic methods the library implements for all pipelines (such as downloading or saving, running on a particular device, etc.)
511
+
512
+ Args:
513
+ vae ([`AutoencoderKL`]):
514
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
515
+ text_encoder ([`CLIPTextModel`]):
516
+ Frozen text-encoder. Stable Diffusion uses the text portion of
517
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
518
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
519
+ tokenizer (`CLIPTokenizer`):
520
+ Tokenizer of class
521
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
522
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
523
+ scheduler ([`SchedulerMixin`]):
524
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
525
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
526
+ safety_checker ([`StableDiffusionSafetyChecker`]):
527
+ Classification module that estimates whether generated images could be considered offensive or harmful.
528
+ Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
529
+ feature_extractor ([`CLIPFeatureExtractor`]):
530
+ Model that extracts features from generated images to be used as inputs for the `safety_checker`.
531
+ """
532
+
533
+ # if version.parse(version.parse(diffusers.__version__).base_version) >= version.parse("0.9.0"):
534
+
535
+ def __init__(
536
+ self,
537
+ vae: AutoencoderKL,
538
+ text_encoder: List[CLIPTextModel],
539
+ tokenizer: List[CLIPTokenizer],
540
+ unet: UNet2DConditionModel,
541
+ scheduler: SchedulerMixin,
542
+ # clip_skip: int,
543
+ safety_checker: StableDiffusionSafetyChecker,
544
+ feature_extractor: CLIPFeatureExtractor,
545
+ requires_safety_checker: bool = True,
546
+ clip_skip: int = 1,
547
+ ):
548
+ # clip skip is ignored currently
549
+ self.tokenizer = tokenizer[0]
550
+ self.text_encoder = text_encoder[0]
551
+ self.unet = unet
552
+ self.scheduler = scheduler
553
+ self.safety_checker = safety_checker
554
+ self.feature_extractor = feature_extractor
555
+ self.requires_safety_checker = requires_safety_checker
556
+ self.vae = vae
557
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
558
+ self.progress_bar = lambda x: tqdm(x, leave=False)
559
+
560
+ self.clip_skip = clip_skip
561
+ self.tokenizers = tokenizer
562
+ self.text_encoders = text_encoder
563
+
564
+ # self.__init__additional__()
565
+
566
+ # def __init__additional__(self):
567
+ # if not hasattr(self, "vae_scale_factor"):
568
+ # setattr(self, "vae_scale_factor", 2 ** (len(self.vae.config.block_out_channels) - 1))
569
+
570
+ def to(self, device=None, dtype=None):
571
+ if device is not None:
572
+ self.device = device
573
+ # self.vae.to(device=self.device)
574
+ if dtype is not None:
575
+ self.dtype = dtype
576
+
577
+ # do not move Text Encoders to device, because Text Encoder should be on CPU
578
+
579
+ @property
580
+ def _execution_device(self):
581
+ r"""
582
+ Returns the device on which the pipeline's models will be executed. After calling
583
+ `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
584
+ hooks.
585
+ """
586
+ if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"):
587
+ return self.device
588
+ for module in self.unet.modules():
589
+ if (
590
+ hasattr(module, "_hf_hook")
591
+ and hasattr(module._hf_hook, "execution_device")
592
+ and module._hf_hook.execution_device is not None
593
+ ):
594
+ return torch.device(module._hf_hook.execution_device)
595
+ return self.device
596
+
597
+ def _encode_prompt(
598
+ self,
599
+ prompt,
600
+ device,
601
+ num_images_per_prompt,
602
+ do_classifier_free_guidance,
603
+ negative_prompt,
604
+ max_embeddings_multiples,
605
+ is_sdxl_text_encoder2,
606
+ ):
607
+ r"""
608
+ Encodes the prompt into text encoder hidden states.
609
+
610
+ Args:
611
+ prompt (`str` or `list(int)`):
612
+ prompt to be encoded
613
+ device: (`torch.device`):
614
+ torch device
615
+ num_images_per_prompt (`int`):
616
+ number of images that should be generated per prompt
617
+ do_classifier_free_guidance (`bool`):
618
+ whether to use classifier free guidance or not
619
+ negative_prompt (`str` or `List[str]`):
620
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
621
+ if `guidance_scale` is less than `1`).
622
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
623
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
624
+ """
625
+ batch_size = len(prompt) if isinstance(prompt, list) else 1
626
+
627
+ if negative_prompt is None:
628
+ negative_prompt = [""] * batch_size
629
+ elif isinstance(negative_prompt, str):
630
+ negative_prompt = [negative_prompt] * batch_size
631
+ if batch_size != len(negative_prompt):
632
+ raise ValueError(
633
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
634
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
635
+ " the batch size of `prompt`."
636
+ )
637
+
638
+ text_embeddings, text_pool, uncond_embeddings, uncond_pool = get_weighted_text_embeddings(
639
+ pipe=self,
640
+ prompt=prompt,
641
+ uncond_prompt=negative_prompt if do_classifier_free_guidance else None,
642
+ max_embeddings_multiples=max_embeddings_multiples,
643
+ clip_skip=self.clip_skip,
644
+ is_sdxl_text_encoder2=is_sdxl_text_encoder2,
645
+ )
646
+ bs_embed, seq_len, _ = text_embeddings.shape
647
+ text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1) # duplicate embeddings for each image generated per prompt
648
+ text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
649
+ if text_pool is not None:
650
+ text_pool = text_pool.repeat(1, num_images_per_prompt)
651
+ text_pool = text_pool.view(bs_embed * num_images_per_prompt, -1)
652
+
653
+ if do_classifier_free_guidance:
654
+ bs_embed, seq_len, _ = uncond_embeddings.shape
655
+ uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1)
656
+ uncond_embeddings = uncond_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
657
+ if uncond_pool is not None:
658
+ uncond_pool = uncond_pool.repeat(1, num_images_per_prompt)
659
+ uncond_pool = uncond_pool.view(bs_embed * num_images_per_prompt, -1)
660
+
661
+ return text_embeddings, text_pool, uncond_embeddings, uncond_pool
662
+
663
+ return text_embeddings, text_pool, None, None
664
+
665
+ def check_inputs(self, prompt, height, width, strength, callback_steps):
666
+ if not isinstance(prompt, str) and not isinstance(prompt, list):
667
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
668
+
669
+ if strength < 0 or strength > 1:
670
+ raise ValueError(f"The value of strength should be in [0.0, 1.0] but is {strength}")
671
+
672
+ if height % 8 != 0 or width % 8 != 0:
673
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
674
+
675
+ if (callback_steps is None) or (
676
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
677
+ ):
678
+ raise ValueError(
679
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type" f" {type(callback_steps)}."
680
+ )
681
+
682
+ def get_timesteps(self, num_inference_steps, strength, device, is_text2img):
683
+ if is_text2img:
684
+ return self.scheduler.timesteps.to(device), num_inference_steps
685
+ else:
686
+ # get the original timestep using init_timestep
687
+ offset = self.scheduler.config.get("steps_offset", 0)
688
+ init_timestep = int(num_inference_steps * strength) + offset
689
+ init_timestep = min(init_timestep, num_inference_steps)
690
+
691
+ t_start = max(num_inference_steps - init_timestep + offset, 0)
692
+ timesteps = self.scheduler.timesteps[t_start:].to(device)
693
+ return timesteps, num_inference_steps - t_start
694
+
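# Worked example, assuming steps_offset == 0: for img2img with num_inference_steps=50
# and strength=0.8,
#   init_timestep = int(50 * 0.8) + 0 = 40
#   t_start       = max(50 - 40 + 0, 0) = 10
# so only the last 40 of the 50 scheduler timesteps are run; for text2img
# (is_text2img=True) the full schedule is always used.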
695
+ def run_safety_checker(self, image, device, dtype):
696
+ if self.safety_checker is not None:
697
+ safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
698
+ image, has_nsfw_concept = self.safety_checker(images=image, clip_input=safety_checker_input.pixel_values.to(dtype))
699
+ else:
700
+ has_nsfw_concept = None
701
+ return image, has_nsfw_concept
702
+
703
+ def decode_latents(self, latents):
704
+ with torch.no_grad():
705
+ latents = 1 / sdxl_model_util.VAE_SCALE_FACTOR * latents
706
+
707
+ # print("post_quant_conv dtype:", self.vae.post_quant_conv.weight.dtype) # torch.float32
708
+ # x = torch.nn.functional.conv2d(latents, self.vae.post_quant_conv.weight.detach(), stride=1, padding=0)
709
+ # print("latents dtype:", latents.dtype, "x dtype:", x.dtype) # torch.float32, torch.float16
710
+ # self.vae.to("cpu")
711
+ # self.vae.set_use_memory_efficient_attention_xformers(False)
712
+ # image = self.vae.decode(latents.to("cpu")).sample
713
+
714
+ image = self.vae.decode(latents.to(self.vae.dtype)).sample
715
+ image = (image / 2 + 0.5).clamp(0, 1)
716
+ # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
717
+ image = image.cpu().permute(0, 2, 3, 1).float().numpy()
718
+ return image
719
+
720
+ def prepare_extra_step_kwargs(self, generator, eta):
721
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
722
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
723
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
724
+ # and should be between [0, 1]
725
+
726
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
727
+ extra_step_kwargs = {}
728
+ if accepts_eta:
729
+ extra_step_kwargs["eta"] = eta
730
+
731
+ # check if the scheduler accepts generator
732
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
733
+ if accepts_generator:
734
+ extra_step_kwargs["generator"] = generator
735
+ return extra_step_kwargs
736
+
737
+ def prepare_latents(self, image, timestep, batch_size, height, width, dtype, device, generator, latents=None):
738
+ if image is None:
739
+ shape = (
740
+ batch_size,
741
+ self.unet.in_channels,
742
+ height // self.vae_scale_factor,
743
+ width // self.vae_scale_factor,
744
+ )
745
+
746
+ if latents is None:
747
+ if device.type == "mps":
748
+ # randn does not work reproducibly on mps
749
+ latents = torch.randn(shape, generator=generator, device="cpu", dtype=dtype).to(device)
750
+ else:
751
+ latents = torch.randn(shape, generator=generator, device=device, dtype=dtype)
752
+ else:
753
+ if latents.shape != shape:
754
+ raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
755
+ latents = latents.to(device)
756
+
757
+ # scale the initial noise by the standard deviation required by the scheduler
758
+ latents = latents * self.scheduler.init_noise_sigma
759
+ return latents, None, None
760
+ else:
761
+ init_latent_dist = self.vae.encode(image).latent_dist
762
+ init_latents = init_latent_dist.sample(generator=generator)
763
+ init_latents = sdxl_model_util.VAE_SCALE_FACTOR * init_latents
764
+ init_latents = torch.cat([init_latents] * batch_size, dim=0)
765
+ init_latents_orig = init_latents
766
+ shape = init_latents.shape
767
+
768
+ # add noise to latents using the timesteps
769
+ if device.type == "mps":
770
+ noise = torch.randn(shape, generator=generator, device="cpu", dtype=dtype).to(device)
771
+ else:
772
+ noise = torch.randn(shape, generator=generator, device=device, dtype=dtype)
773
+ latents = self.scheduler.add_noise(init_latents, noise, timestep)
774
+ return latents, init_latents_orig, noise
775
+
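# Branch summary (illustrative): for text2img (image is None) the latents start as pure
# Gaussian noise of shape (batch, unet.in_channels, height // vae_scale_factor,
# width // vae_scale_factor) scaled by scheduler.init_noise_sigma; for img2img/inpaint
# the VAE-encoded image is scaled by sdxl_model_util.VAE_SCALE_FACTOR and noised to the
# starting timestep instead, and the original latents are also returned for masking.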
776
+ @torch.no_grad()
777
+ def __call__(
778
+ self,
779
+ prompt: Union[str, List[str]],
780
+ negative_prompt: Optional[Union[str, List[str]]] = None,
781
+ image: Union[torch.FloatTensor, PIL.Image.Image] = None,
782
+ mask_image: Union[torch.FloatTensor, PIL.Image.Image] = None,
783
+ height: int = 512,
784
+ width: int = 512,
785
+ num_inference_steps: int = 50,
786
+ guidance_scale: float = 7.5,
787
+ strength: float = 0.8,
788
+ num_images_per_prompt: Optional[int] = 1,
789
+ eta: float = 0.0,
790
+ generator: Optional[torch.Generator] = None,
791
+ latents: Optional[torch.FloatTensor] = None,
792
+ max_embeddings_multiples: Optional[int] = 3,
793
+ output_type: Optional[str] = "pil",
794
+ return_dict: bool = True,
795
+ controlnet=None,
796
+ controlnet_image=None,
797
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
798
+ is_cancelled_callback: Optional[Callable[[], bool]] = None,
799
+ callback_steps: int = 1,
800
+ ):
801
+ r"""
802
+ Function invoked when calling the pipeline for generation.
803
+
804
+ Args:
805
+ prompt (`str` or `List[str]`):
806
+ The prompt or prompts to guide the image generation.
807
+ negative_prompt (`str` or `List[str]`, *optional*):
808
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
809
+ if `guidance_scale` is less than `1`).
810
+ image (`torch.FloatTensor` or `PIL.Image.Image`):
811
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
812
+ process.
813
+ mask_image (`torch.FloatTensor` or `PIL.Image.Image`):
814
+ `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
815
+ replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
816
+ PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
817
+ contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
818
+ height (`int`, *optional*, defaults to 512):
819
+ The height in pixels of the generated image.
820
+ width (`int`, *optional*, defaults to 512):
821
+ The width in pixels of the generated image.
822
+ num_inference_steps (`int`, *optional*, defaults to 50):
823
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
824
+ expense of slower inference.
825
+ guidance_scale (`float`, *optional*, defaults to 7.5):
826
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
827
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
828
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
829
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
830
+ usually at the expense of lower image quality.
831
+ strength (`float`, *optional*, defaults to 0.8):
832
+ Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
833
+ `image` will be used as a starting point, adding more noise to it the larger the `strength`. The
834
+ number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
835
+ noise will be maximum and the denoising process will run for the full number of iterations specified in
836
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
837
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
838
+ The number of images to generate per prompt.
839
+ eta (`float`, *optional*, defaults to 0.0):
840
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
841
+ [`schedulers.DDIMScheduler`], will be ignored for others.
842
+ generator (`torch.Generator`, *optional*):
843
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
844
+ deterministic.
845
+ latents (`torch.FloatTensor`, *optional*):
846
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
847
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
848
+ tensor will be generated by sampling using the supplied random `generator`.
849
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
850
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
851
+ output_type (`str`, *optional*, defaults to `"pil"`):
852
+ The output format of the generated image. Choose between
853
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
854
+ return_dict (`bool`, *optional*, defaults to `True`):
855
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
856
+ plain tuple.
857
+ controlnet (`diffusers.ControlNetModel`, *optional*):
858
+ A controlnet model to be used for the inference. If not provided, controlnet will be disabled.
859
+ controlnet_image (`torch.FloatTensor` or `PIL.Image.Image`, *optional*):
860
+ `Image`, or tensor representing an image batch, to be used as the starting point for the controlnet
861
+ inference.
862
+ callback (`Callable`, *optional*):
863
+ A function that will be called every `callback_steps` steps during inference. The function will be
864
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
865
+ is_cancelled_callback (`Callable`, *optional*):
866
+ A function that will be called every `callback_steps` steps during inference. If the function returns
867
+ `True`, the inference will be cancelled.
868
+ callback_steps (`int`, *optional*, defaults to 1):
869
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
870
+ called at every step.
871
+
872
+ Returns:
873
+ `None` if cancelled by `is_cancelled_callback`,
874
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
875
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
876
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
877
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
878
+ (nsfw) content, according to the `safety_checker`.
879
+ """
880
+ if controlnet is not None and controlnet_image is None:
881
+ raise ValueError("controlnet_image must be provided if controlnet is not None.")
882
+
883
+ # 0. Default height and width to unet
884
+ height = height or self.unet.config.sample_size * self.vae_scale_factor
885
+ width = width or self.unet.config.sample_size * self.vae_scale_factor
886
+
887
+ # 1. Check inputs. Raise error if not correct
888
+ self.check_inputs(prompt, height, width, strength, callback_steps)
889
+
890
+ # 2. Define call parameters
891
+ batch_size = 1 if isinstance(prompt, str) else len(prompt)
892
+ device = self._execution_device
893
+ # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
894
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
895
+ # corresponds to doing no classifier free guidance.
896
+ do_classifier_free_guidance = guidance_scale > 1.0
897
+
898
+ # 3. Encode input prompt
899
+ # To simplify the implementation, switch the tokenizer/text encoder and call the encoder twice
900
+ # (text encoder 1 on the first pass, text encoder 2 on the second for SDXL)
901
+ text_embeddings_list = []
902
+ text_pool = None
903
+ uncond_embeddings_list = []
904
+ uncond_pool = None
905
+ for i in range(len(self.tokenizers)):
906
+ self.tokenizer = self.tokenizers[i]
907
+ self.text_encoder = self.text_encoders[i]
908
+
909
+ text_embeddings, tp1, uncond_embeddings, up1 = self._encode_prompt(
910
+ prompt,
911
+ device,
912
+ num_images_per_prompt,
913
+ do_classifier_free_guidance,
914
+ negative_prompt,
915
+ max_embeddings_multiples,
916
+ is_sdxl_text_encoder2=i == 1,
917
+ )
918
+ text_embeddings_list.append(text_embeddings)
919
+ uncond_embeddings_list.append(uncond_embeddings)
920
+
921
+ if tp1 is not None:
922
+ text_pool = tp1
923
+ if up1 is not None:
924
+ uncond_pool = up1
925
+
926
+ unet_dtype = self.unet.dtype
927
+ dtype = unet_dtype
928
+ if hasattr(dtype, "itemsize") and dtype.itemsize == 1: # fp8
929
+ dtype = torch.float16
930
+ self.unet.to(dtype)
931
+
932
+ # 4. Preprocess image and mask
933
+ if isinstance(image, PIL.Image.Image):
934
+ image = preprocess_image(image)
935
+ if image is not None:
936
+ image = image.to(device=self.device, dtype=dtype)
937
+ if isinstance(mask_image, PIL.Image.Image):
938
+ mask_image = preprocess_mask(mask_image, self.vae_scale_factor)
939
+ if mask_image is not None:
940
+ mask = mask_image.to(device=self.device, dtype=dtype)
941
+ mask = torch.cat([mask] * batch_size * num_images_per_prompt)
942
+ else:
943
+ mask = None
944
+
945
+ # ControlNet is not working yet in SDXL, but keep the code here for future use
946
+ if controlnet_image is not None:
947
+ controlnet_image = prepare_controlnet_image(
948
+ controlnet_image, width, height, batch_size, 1, self.device, controlnet.dtype, do_classifier_free_guidance, False
949
+ )
950
+
951
+ # 5. set timesteps
952
+ self.scheduler.set_timesteps(num_inference_steps, device=device)
953
+ timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device, image is None)
954
+ latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
955
+
956
+ # 6. Prepare latent variables
957
+ latents, init_latents_orig, noise = self.prepare_latents(
958
+ image,
959
+ latent_timestep,
960
+ batch_size * num_images_per_prompt,
961
+ height,
962
+ width,
963
+ dtype,
964
+ device,
965
+ generator,
966
+ latents,
967
+ )
968
+
969
+ # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
970
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
971
+
972
+ # create size embs and concat embeddings for SDXL
973
+ orig_size = torch.tensor([height, width]).repeat(batch_size * num_images_per_prompt, 1).to(dtype)
974
+ crop_size = torch.zeros_like(orig_size)
975
+ target_size = orig_size
976
+ embs = sdxl_train_util.get_size_embeddings(orig_size, crop_size, target_size, device).to(dtype)
977
+
978
+ # make conditionings
979
+ if do_classifier_free_guidance:
980
+ text_embeddings = torch.cat(text_embeddings_list, dim=2)
981
+ uncond_embeddings = torch.cat(uncond_embeddings_list, dim=2)
982
+ text_embedding = torch.cat([uncond_embeddings, text_embeddings]).to(dtype)
983
+
984
+ cond_vector = torch.cat([text_pool, embs], dim=1)
985
+ uncond_vector = torch.cat([uncond_pool, embs], dim=1)
986
+ vector_embedding = torch.cat([uncond_vector, cond_vector]).to(dtype)
987
+ else:
988
+ text_embedding = torch.cat(text_embeddings_list, dim=2).to(dtype)
989
+ vector_embedding = torch.cat([text_pool, embs], dim=1).to(dtype)
990
+
991
+ # 8. Denoising loop
992
+ for i, t in enumerate(self.progress_bar(timesteps)):
993
+ # expand the latents if we are doing classifier free guidance
994
+ latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
995
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
996
+
997
+ unet_additional_args = {}
998
+ if controlnet is not None:
999
+ down_block_res_samples, mid_block_res_sample = controlnet(
1000
+ latent_model_input,
1001
+ t,
1002
+ encoder_hidden_states=text_embeddings,
1003
+ controlnet_cond=controlnet_image,
1004
+ conditioning_scale=1.0,
1005
+ guess_mode=False,
1006
+ return_dict=False,
1007
+ )
1008
+ unet_additional_args["down_block_additional_residuals"] = down_block_res_samples
1009
+ unet_additional_args["mid_block_additional_residual"] = mid_block_res_sample
1010
+
1011
+ # predict the noise residual
1012
+ noise_pred = self.unet(latent_model_input, t, text_embedding, vector_embedding)
1013
+ noise_pred = noise_pred.to(dtype) # U-Net changes dtype in LoRA training
1014
+
1015
+ # perform guidance
1016
+ if do_classifier_free_guidance:
1017
+ noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
1018
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
1019
+
1020
+ # compute the previous noisy sample x_t -> x_t-1
1021
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
1022
+
1023
+ if mask is not None:
1024
+ # masking
1025
+ init_latents_proper = self.scheduler.add_noise(init_latents_orig, noise, torch.tensor([t]))
1026
+ latents = (init_latents_proper * mask) + (latents * (1 - mask))
1027
+
1028
+ # call the callback, if provided
1029
+ if i % callback_steps == 0:
1030
+ if callback is not None:
1031
+ callback(i, t, latents)
1032
+ if is_cancelled_callback is not None and is_cancelled_callback():
1033
+ return None
1034
+
1035
+ self.unet.to(unet_dtype)
1036
+ return latents
1037
+
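# End-to-end usage sketch with hypothetical variable names; the two text encoders,
# tokenizers, U-Net, VAE and scheduler are assumed to be loaded elsewhere in the
# sd-scripts library:
#   pipe = SdxlStableDiffusionLongPromptWeightingPipeline(
#       vae=vae,
#       text_encoder=[text_encoder1, text_encoder2],
#       tokenizer=[tokenizer1, tokenizer2],
#       unet=unet,
#       scheduler=scheduler,
#       safety_checker=None,
#       feature_extractor=None,
#       requires_safety_checker=False,
#   )
#   pipe.to(device=torch.device("cuda"), dtype=torch.float16)
#   latents = pipe(
#       prompt="a (highly detailed:1.2) photo of a cat",
#       negative_prompt="low quality",
#       height=1024, width=1024,
#       num_inference_steps=30, guidance_scale=7.0,
#   )
#   images = pipe.latents_to_image(latents)  # __call__ returns latents, not images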
1038
+ def latents_to_image(self, latents):
1039
+ # 9. Post-processing
1040
+ image = self.decode_latents(latents.to(self.vae.dtype))
1041
+ image = self.numpy_to_pil(image)
1042
+ return image
1043
+
1044
+ # copy from pil_utils.py
1045
+ def numpy_to_pil(self, images: np.ndarray) -> List[Image.Image]:
1046
+ """
1047
+ Convert a numpy image or a batch of images to a PIL image.
1048
+ """
1049
+ if images.ndim == 3:
1050
+ images = images[None, ...]
1051
+ images = (images * 255).round().astype("uint8")
1052
+ if images.shape[-1] == 1:
1053
+ # special case for grayscale (single channel) images
1054
+ pil_images = [Image.fromarray(image.squeeze(), mode="L") for image in images]
1055
+ else:
1056
+ pil_images = [Image.fromarray(image) for image in images]
1057
+
1058
+ return pil_images
1059
+
1060
+ def text2img(
1061
+ self,
1062
+ prompt: Union[str, List[str]],
1063
+ negative_prompt: Optional[Union[str, List[str]]] = None,
1064
+ height: int = 512,
1065
+ width: int = 512,
1066
+ num_inference_steps: int = 50,
1067
+ guidance_scale: float = 7.5,
1068
+ num_images_per_prompt: Optional[int] = 1,
1069
+ eta: float = 0.0,
1070
+ generator: Optional[torch.Generator] = None,
1071
+ latents: Optional[torch.FloatTensor] = None,
1072
+ max_embeddings_multiples: Optional[int] = 3,
1073
+ output_type: Optional[str] = "pil",
1074
+ return_dict: bool = True,
1075
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
1076
+ is_cancelled_callback: Optional[Callable[[], bool]] = None,
1077
+ callback_steps: int = 1,
1078
+ ):
1079
+ r"""
1080
+ Function for text-to-image generation.
1081
+ Args:
1082
+ prompt (`str` or `List[str]`):
1083
+ The prompt or prompts to guide the image generation.
1084
+ negative_prompt (`str` or `List[str]`, *optional*):
1085
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
1086
+ if `guidance_scale` is less than `1`).
1087
+ height (`int`, *optional*, defaults to 512):
1088
+ The height in pixels of the generated image.
1089
+ width (`int`, *optional*, defaults to 512):
1090
+ The width in pixels of the generated image.
1091
+ num_inference_steps (`int`, *optional*, defaults to 50):
1092
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
1093
+ expense of slower inference.
1094
+ guidance_scale (`float`, *optional*, defaults to 7.5):
1095
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
1096
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
1097
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1098
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
1099
+ usually at the expense of lower image quality.
1100
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
1101
+ The number of images to generate per prompt.
1102
+ eta (`float`, *optional*, defaults to 0.0):
1103
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
1104
+ [`schedulers.DDIMScheduler`], will be ignored for others.
1105
+ generator (`torch.Generator`, *optional*):
1106
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
1107
+ deterministic.
1108
+ latents (`torch.FloatTensor`, *optional*):
1109
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
1110
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
1111
+ tensor will be generated by sampling using the supplied random `generator`.
1112
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
1113
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
1114
+ output_type (`str`, *optional*, defaults to `"pil"`):
1115
+ The output format of the generated image. Choose between
1116
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
1117
+ return_dict (`bool`, *optional*, defaults to `True`):
1118
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
1119
+ plain tuple.
1120
+ callback (`Callable`, *optional*):
1121
+ A function that will be called every `callback_steps` steps during inference. The function will be
1122
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
1123
+ is_cancelled_callback (`Callable`, *optional*):
1124
+ A function that will be called every `callback_steps` steps during inference. If the function returns
1125
+ `True`, the inference will be cancelled.
1126
+ callback_steps (`int`, *optional*, defaults to 1):
1127
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
1128
+ called at every step.
1129
+ Returns:
1130
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
1131
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
1132
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
1133
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
1134
+ (nsfw) content, according to the `safety_checker`.
1135
+ """
1136
+ return self.__call__(
1137
+ prompt=prompt,
1138
+ negative_prompt=negative_prompt,
1139
+ height=height,
1140
+ width=width,
1141
+ num_inference_steps=num_inference_steps,
1142
+ guidance_scale=guidance_scale,
1143
+ num_images_per_prompt=num_images_per_prompt,
1144
+ eta=eta,
1145
+ generator=generator,
1146
+ latents=latents,
1147
+ max_embeddings_multiples=max_embeddings_multiples,
1148
+ output_type=output_type,
1149
+ return_dict=return_dict,
1150
+ callback=callback,
1151
+ is_cancelled_callback=is_cancelled_callback,
1152
+ callback_steps=callback_steps,
1153
+ )
1154
+
1155
+ def img2img(
1156
+ self,
1157
+ image: Union[torch.FloatTensor, PIL.Image.Image],
1158
+ prompt: Union[str, List[str]],
1159
+ negative_prompt: Optional[Union[str, List[str]]] = None,
1160
+ strength: float = 0.8,
1161
+ num_inference_steps: Optional[int] = 50,
1162
+ guidance_scale: Optional[float] = 7.5,
1163
+ num_images_per_prompt: Optional[int] = 1,
1164
+ eta: Optional[float] = 0.0,
1165
+ generator: Optional[torch.Generator] = None,
1166
+ max_embeddings_multiples: Optional[int] = 3,
1167
+ output_type: Optional[str] = "pil",
1168
+ return_dict: bool = True,
1169
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
1170
+ is_cancelled_callback: Optional[Callable[[], bool]] = None,
1171
+ callback_steps: int = 1,
1172
+ ):
1173
+ r"""
1174
+ Function for image-to-image generation.
1175
+ Args:
1176
+ image (`torch.FloatTensor` or `PIL.Image.Image`):
1177
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
1178
+ process.
1179
+ prompt (`str` or `List[str]`):
1180
+ The prompt or prompts to guide the image generation.
1181
+ negative_prompt (`str` or `List[str]`, *optional*):
1182
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
1183
+ if `guidance_scale` is less than `1`).
1184
+ strength (`float`, *optional*, defaults to 0.8):
1185
+ Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
1186
+ `image` will be used as a starting point, adding more noise to it the larger the `strength`. The
1187
+ number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
1188
+ noise will be maximum and the denoising process will run for the full number of iterations specified in
1189
+ `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
1190
+ num_inference_steps (`int`, *optional*, defaults to 50):
1191
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
1192
+ expense of slower inference. This parameter will be modulated by `strength`.
1193
+ guidance_scale (`float`, *optional*, defaults to 7.5):
1194
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
1195
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
1196
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1197
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
1198
+ usually at the expense of lower image quality.
1199
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
1200
+ The number of images to generate per prompt.
1201
+ eta (`float`, *optional*, defaults to 0.0):
1202
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
1203
+ [`schedulers.DDIMScheduler`], will be ignored for others.
1204
+ generator (`torch.Generator`, *optional*):
1205
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
1206
+ deterministic.
1207
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
1208
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
1209
+ output_type (`str`, *optional*, defaults to `"pil"`):
1210
+ The output format of the generated image. Choose between
1211
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
1212
+ return_dict (`bool`, *optional*, defaults to `True`):
1213
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
1214
+ plain tuple.
1215
+ callback (`Callable`, *optional*):
1216
+ A function that will be called every `callback_steps` steps during inference. The function will be
1217
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
1218
+ is_cancelled_callback (`Callable`, *optional*):
1219
+ A function that will be called every `callback_steps` steps during inference. If the function returns
1220
+ `True`, the inference will be cancelled.
1221
+ callback_steps (`int`, *optional*, defaults to 1):
1222
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
1223
+ called at every step.
1224
+ Returns:
1225
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
1226
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple.
1227
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
1228
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
1229
+ (nsfw) content, according to the `safety_checker`.
1230
+ """
1231
+         return self.__call__(
+             prompt=prompt,
+             negative_prompt=negative_prompt,
+             image=image,
+             num_inference_steps=num_inference_steps,
+             guidance_scale=guidance_scale,
+             strength=strength,
+             num_images_per_prompt=num_images_per_prompt,
+             eta=eta,
+             generator=generator,
+             max_embeddings_multiples=max_embeddings_multiples,
+             output_type=output_type,
+             return_dict=return_dict,
+             callback=callback,
+             is_cancelled_callback=is_cancelled_callback,
+             callback_steps=callback_steps,
+         )
+
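For orientation, here is a minimal usage sketch of the `img2img` wrapper above. It is illustrative only: `pipe` stands for an already-constructed instance of this pipeline (building it from an SDXL checkpoint is outside this excerpt), the file names are placeholders, and the result is assumed to follow the `StableDiffusionPipelineOutput` shape described in the docstring.

import PIL.Image

# `pipe` is a hypothetical, already-constructed instance of this pipeline class.
init_image = PIL.Image.open("input.png").convert("RGB").resize((1024, 1024))

result = pipe.img2img(
    image=init_image,
    prompt="a watercolor painting of a harbor at dawn",
    negative_prompt="low quality, blurry",
    strength=0.6,  # keep more of the source image than the 0.8 default
    num_inference_steps=30,  # effective steps are modulated by `strength`
    guidance_scale=7.5,
)
result.images[0].save("img2img_output.png")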
+     def inpaint(
+         self,
+         image: Union[torch.FloatTensor, PIL.Image.Image],
+         mask_image: Union[torch.FloatTensor, PIL.Image.Image],
+         prompt: Union[str, List[str]],
+         negative_prompt: Optional[Union[str, List[str]]] = None,
+         strength: float = 0.8,
+         num_inference_steps: Optional[int] = 50,
+         guidance_scale: Optional[float] = 7.5,
+         num_images_per_prompt: Optional[int] = 1,
+         eta: Optional[float] = 0.0,
+         generator: Optional[torch.Generator] = None,
+         max_embeddings_multiples: Optional[int] = 3,
+         output_type: Optional[str] = "pil",
+         return_dict: bool = True,
+         callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
+         is_cancelled_callback: Optional[Callable[[], bool]] = None,
+         callback_steps: int = 1,
+     ):
+ r"""
1269
+ Function for inpaint.
1270
+ Args:
1271
+ image (`torch.FloatTensor` or `PIL.Image.Image`):
1272
+ `Image`, or tensor representing an image batch, that will be used as the starting point for the
1273
+ process. This is the image whose masked region will be inpainted.
1274
+ mask_image (`torch.FloatTensor` or `PIL.Image.Image`):
1275
+ `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
1276
+ replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a
1277
+ PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should
1278
+ contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`.
1279
+ prompt (`str` or `List[str]`):
1280
+ The prompt or prompts to guide the image generation.
1281
+ negative_prompt (`str` or `List[str]`, *optional*):
1282
+ The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
1283
+ if `guidance_scale` is less than `1`).
1284
+ strength (`float`, *optional*, defaults to 0.8):
1285
+ Conceptually, indicates how much to inpaint the masked area. Must be between 0 and 1. When `strength`
1286
+ is 1, the denoising process will be run on the masked area for the full number of iterations specified
1287
+ in `num_inference_steps`. `image` will be used as a reference for the masked area, adding more
1288
+ noise to that region the larger the `strength`. If `strength` is 0, no inpainting will occur.
1289
+ num_inference_steps (`int`, *optional*, defaults to 50):
1290
+ The reference number of denoising steps. More denoising steps usually lead to a higher quality image at
1291
+ the expense of slower inference. This parameter will be modulated by `strength`, as explained above.
1292
+ guidance_scale (`float`, *optional*, defaults to 7.5):
1293
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
1294
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
1295
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
1296
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
1297
+ usually at the expense of lower image quality.
1298
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
1299
+ The number of images to generate per prompt.
1300
+ eta (`float`, *optional*, defaults to 0.0):
1301
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
1302
+ [`schedulers.DDIMScheduler`], will be ignored for others.
1303
+ generator (`torch.Generator`, *optional*):
1304
+ A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
1305
+ deterministic.
1306
+ max_embeddings_multiples (`int`, *optional*, defaults to `3`):
1307
+ The max multiple length of prompt embeddings compared to the max output length of text encoder.
1308
+ output_type (`str`, *optional*, defaults to `"pil"`):
1309
+ The output format of the generate image. Choose between
1310
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
1311
+ return_dict (`bool`, *optional*, defaults to `True`):
1312
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
1313
+ plain tuple.
1314
+ callback (`Callable`, *optional*):
1315
+ A function that will be called every `callback_steps` steps during inference. The function will be
1316
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
1317
+ is_cancelled_callback (`Callable`, *optional*):
1318
+ A function that will be called every `callback_steps` steps during inference. If the function returns
1319
+ `True`, the inference will be cancelled.
1320
+ callback_steps (`int`, *optional*, defaults to 1):
1321
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
1322
+ called at every step.
1323
+ Returns:
1324
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
1325
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple.
1326
+ When returning a tuple, the first element is a list with the generated images, and the second element is a
1327
+ list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
1328
+ (nsfw) content, according to the `safety_checker`.
1329
+ """
1330
+         return self.__call__(
+             prompt=prompt,
+             negative_prompt=negative_prompt,
+             image=image,
+             mask_image=mask_image,
+             num_inference_steps=num_inference_steps,
+             guidance_scale=guidance_scale,
+             strength=strength,
+             num_images_per_prompt=num_images_per_prompt,
+             eta=eta,
+             generator=generator,
+             max_embeddings_multiples=max_embeddings_multiples,
+             output_type=output_type,
+             return_dict=return_dict,
+             callback=callback,
+             is_cancelled_callback=is_cancelled_callback,
+             callback_steps=callback_steps,
+         )
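Similarly, a minimal sketch of calling the `inpaint` wrapper, under the same assumptions as the img2img sketch (`pipe` is a hypothetical, already-constructed instance of this pipeline; file names are placeholders). The mask follows the convention documented above: white pixels are repainted, black pixels are preserved.

import PIL.Image

# Hypothetical pre-built pipeline instance `pipe`, as in the img2img sketch.
init_image = PIL.Image.open("photo.png").convert("RGB").resize((1024, 1024))
mask_image = PIL.Image.open("mask.png").convert("L").resize((1024, 1024))  # white = repaint

result = pipe.inpaint(
    image=init_image,
    mask_image=mask_image,
    prompt="an empty park bench under autumn trees",
    strength=0.75,  # how strongly the masked region is re-noised before denoising
    num_inference_steps=40,
)
result.images[0].save("inpaint_output.png")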