feat: add early return for prompt expansion when no new tokens should be added
Closes https://github.com/lllyasviel/Fooocus/issues/2278; also removes the trailing comma that was appended to the prompt before tokenization.
This commit is contained in:
parent a78f66ffb5
commit f8ca04a406
@@ -112,6 +112,9 @@ class FooocusExpansion:
         max_token_length = 75 * int(math.ceil(float(current_token_length) / 75.0))
         max_new_tokens = max_token_length - current_token_length
 
+        if max_new_tokens == 0:
+            return prompt[:-1]
+
         # https://huggingface.co/blog/introducing-csearch
         # https://huggingface.co/docs/transformers/generation_strategies
         features = self.model.generate(**tokenized_kwargs,
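For context, a minimal standalone sketch of the early-return logic, not the actual FooocusExpansion.__call__: the hypothetical helper below takes a plain token count instead of running the tokenizer, and assumes (per the commit message) that a ',' has been appended to the prompt before tokenization.

import math


def expand_or_return_early(prompt: str, token_count: int) -> str:
    """Hypothetical illustration of the early return added in this commit.

    `token_count` stands in for the length of the tokenized prompt,
    which here already includes the ',' appended before tokenization.
    """
    # Round the allowed length up to the next multiple of the 75-token
    # chunk size used by the expansion step.
    max_token_length = 75 * int(math.ceil(float(token_count) / 75.0))
    max_new_tokens = max_token_length - token_count

    # The prompt already fills a 75-token chunk exactly: nothing to expand.
    # Return it with the trailing ',' stripped instead of calling the model.
    if max_new_tokens == 0:
        return prompt[:-1]

    # ... otherwise the expansion model would generate up to
    # max_new_tokens additional tokens here ...
    return prompt

With a prompt that already tokenizes to exactly 75 (or 150, 225, ...) tokens, max_new_tokens is 0 and the function returns early with the appended comma removed, skipping the generate call entirely.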