feat: add metadata to images (#1940)

* feat: add metadata logging for images, inspired by https://github.com/MoonRide303/Fooocus-MRE
* feat: add config and checkbox for save_metadata_to_images
* feat: add argument disable_metadata
* feat: add support for A1111 metadata schema, see cf2772fab0/modules/processing.py (L672)
* feat: add model hash support for a1111
* feat: use resolved prompts with included expansion and styles for a1111 metadata
* fix: code cleanup and resolved prompt fixes
* feat: add config metadata_created_by
* fix: use string instead of quote wrap for A1111 created_by
* fix: correctly hide/show metadata schema on app start
* fix: do not generate hashes when arg --disable-metadata is used
* refactor: rename metadata_schema to metadata_scheme
* fix: use pnginfo "parameters" instead of "Comments", see https://github.com/RupertAvery/DiffusionToolkit/issues/202 and cf2772fab0/modules/processing.py (L939)
* feat: add resolved prompts to metadata
* fix: use correct default value in metadata check for created_by
* wip: add metadata mapping, reading and writing; applying data after reading currently not functional for A1111
* feat: rename metadata tab and import button label
* feat: map basic information for scheme A1111
* wip: optimize handling for metadata in Gradio calls
* feat: add enums for Performance, Steps and StepsUOV; also move MetadataScheme enum to prevent circular dependency
* fix: correctly map resolution, use empty styles for A1111
* chore: code cleanup
* feat: add A1111 prompt style detection; only detects one style as Fooocus doesn't wrap {prompt} with the whole style, but has a separate prompt string for each style
* wip: add prompt style extraction for A1111 scheme
* feat: sort styles after metadata import
* refactor: use central flag for LoRA count
* refactor: use central flag for ControlNet image count
* fix: use correct LoRA mapping, add fallback for backwards compatibility
* feat: add created_by again
* feat: add prefix "Fooocus" to version
* wip: code cleanup, update todos
* fix: use correct order to read LoRA in meta parser
* wip: code cleanup, update todos
* feat: make sha256 with length 10 default
* feat: add lora handling to A1111 scheme
* feat: override existing LoRA values when importing, as keeping them would cause images to differ
* fix: correctly extract prompt style when only prompt expansion is selected
* feat: allow model / LoRA loading from subfolders
* feat: code cleanup, do not queue metadata preview on image upload
* refactor: add flag for refiner_swap_method
* feat: add metadata handling for all non-img2img parameters
* refactor: code cleanup
* chore: use str as return type in calculate_sha256
* feat: add hash cache to metadata
* chore: code cleanup
* feat: add method get_scheme to Metadata
* fix: align handling for scheme Fooocus by removing lcm lora from json parsing
* refactor: add step before parsing to set data in parser - add constructor for MetadataSchema class - remove showable and copyable from log output - add functional hash cache (model hashing takes about 5 seconds, only required once per model, using lazy hash loading)
* feat: sort metadata attributes before writing to image
* feat: add translations and hint for image prompt parameters
* chore: check and remove ToDos
* refactor: merge metadata.py into meta_parser.py
* fix: add missing refiner in A1111 parse_json
* wip: add TODO for multiline prompt style resolution
* fix: remove sorting for A1111, change performance key position; fixes https://github.com/lllyasviel/Fooocus/pull/1940#issuecomment-1924444633
* fix: add workaround for multiline prompts
* feat: add sampler mapping
* feat: prevent config reset by renaming metadata_scheme to match config options
* chore: remove remaining todos after analysis; refiner is added when set; restoring multiline prompts has been resolved by using separate parameters "raw_prompt" and "raw_negative_prompt"
* chore: specify too broad exception types
* feat: add mapping for _gpu samplers to cpu samplers; gpu samplers are less deterministic than cpu but in general similar, see https://www.reddit.com/r/comfyui/comments/15hayzo/comment/juqcpep/
* feat: add better handling for image import with empty metadata
* fix: parse adaptive_cfg as float instead of string
* chore: loosen strict type for parse_json, fix indent
* chore: make steps enums more strict
* feat: only override steps if metadata value is not in steps enum, or is in steps enum and performance is not the same
* fix: handle empty strings in metadata, e.g. raw negative prompt when none is set
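In practice the change embeds one of two payloads into the PNG "parameters" text chunk, plus a "fooocus_scheme" marker chunk. A rough illustration of both shapes, with hypothetical values (the authoritative key set is defined in modules/meta_parser.py in the diff below):

```python
# Scheme 'fooocus' (json): one sorted JSON object. Keys and values here are
# illustrative, not an exhaustive or verbatim dump.
fooocus_payload = (
    '{"base_model": "someModelXL", "performance": "Speed", '
    '"prompt": "a photo of a cat", "seed": "12345", "steps": 30, '
    '"version": "Fooocus v2.x.y"}'
)

# Scheme 'a1111' (plain text): prompt, optional negative prompt, then one
# comma-separated key/value line, Civitai-compatible.
a1111_payload = (
    'a photo of a cat\n'
    'Negative prompt: blurry\n'
    'Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 4.0, Seed: 12345, '
    'Size: 1152x896, Model: someModelXL, Version: Fooocus v2.x.y'
)
```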
This commit is contained in:
parent
d3113f5c3f
commit
ba9eadbcda
args_manager.py

@@ -20,7 +20,10 @@ args_parser.parser.add_argument("--disable-image-log", action='store_true',
                                help="Prevent writing images and logs to hard drive.")

args_parser.parser.add_argument("--disable-analytics", action='store_true',
                                help="Disables analytics for Gradio", default=False)
                                help="Disables analytics for Gradio.")

args_parser.parser.add_argument("--disable-metadata", action='store_true',
                                help="Disables saving metadata to images.")

args_parser.parser.add_argument("--disable-preset-download", action='store_true',
                                help="Disables downloading models for presets", default=False)
language/en.json

@@ -374,5 +374,12 @@
    "* Powered by Fooocus Inpaint Engine (beta)": "* Powered by Fooocus Inpaint Engine (beta)",
    "Fooocus Enhance": "Fooocus Enhance",
    "Fooocus Cinematic": "Fooocus Cinematic",
    "Fooocus Sharp": "Fooocus Sharp"
    "Fooocus Sharp": "Fooocus Sharp",
    "Drag any image generated by Fooocus here": "Drag any image generated by Fooocus here",
    "Metadata": "Metadata",
    "Apply Metadata": "Apply Metadata",
    "Metadata Scheme": "Metadata Scheme",
    "Image Prompt parameters are not included. Use a1111 for compatibility with Civitai.": "Image Prompt parameters are not included. Use a1111 for compatibility with Civitai.",
    "fooocus (json)": "fooocus (json)",
    "a1111 (plain text)": "a1111 (plain text)"
}
modules/async_worker.py

@@ -19,6 +19,7 @@ async_tasks = []
def worker():
    global async_tasks

    import os
    import traceback
    import math
    import numpy as np

@@ -39,6 +40,7 @@ def worker():
    import extras.ip_adapter as ip_adapter
    import extras.face_crop
    import fooocus_version
    import args_manager

    from modules.sdxl_styles import apply_style, apply_wildcards, fooocus_expansion, apply_arrays
    from modules.private_logger import log

@@ -46,6 +48,8 @@ def worker():
    from modules.util import remove_empty_str, HWC3, resize_image, \
        get_image_shape_ceil, set_image_shape_ceil, get_shape_ceil, resample_image, erode_or_dilate, ordinal_suffix
    from modules.upscaler import perform_upscale
    from modules.flags import Performance
    from modules.meta_parser import get_metadata_parser, MetadataScheme

    pid = os.getpid()
    print(f'Started worker with PID {pid}')

@@ -135,7 +139,7 @@ def worker():
    prompt = args.pop()
    negative_prompt = args.pop()
    style_selections = args.pop()
    performance_selection = args.pop()
    performance_selection = Performance(args.pop())
    aspect_ratios_selection = args.pop()
    image_number = args.pop()
    image_seed = args.pop()

@@ -153,6 +157,7 @@ def worker():
    inpaint_input_image = args.pop()
    inpaint_additional_prompt = args.pop()
    inpaint_mask_image_upload = args.pop()

    disable_preview = args.pop()
    disable_intermediate_results = args.pop()
    disable_seed_increment = args.pop()

@@ -190,8 +195,11 @@ def worker():
    invert_mask_checkbox = args.pop()
    inpaint_erode_or_dilate = args.pop()

    save_metadata_to_images = args.pop() if not args_manager.args.disable_metadata else False
    metadata_scheme = MetadataScheme(args.pop()) if not args_manager.args.disable_metadata else MetadataScheme.FOOOCUS

    cn_tasks = {x: [] for x in flags.ip_list}
    for _ in range(4):
    for _ in range(flags.controlnet_image_count):
        cn_img = args.pop()
        cn_stop = args.pop()
        cn_weight = args.pop()

@@ -216,17 +224,9 @@ def worker():
        print(f'Refiner disabled because base model and refiner are same.')
        refiner_model_name = 'None'

    assert performance_selection in ['Speed', 'Quality', 'Extreme Speed']
    steps = performance_selection.steps()

    steps = 30

    if performance_selection == 'Speed':
        steps = 30

    if performance_selection == 'Quality':
        steps = 60

    if performance_selection == 'Extreme Speed':
    if performance_selection == Performance.EXTREME_SPEED:
        print('Enter LCM mode.')
        progressbar(async_task, 1, 'Downloading LCM components ...')
        loras += [(modules.config.downloading_sdxl_lcm_lora(), 1.0)]

@@ -244,7 +244,6 @@ def worker():
        adm_scaler_positive = 1.0
        adm_scaler_negative = 1.0
        adm_scaler_end = 0.0
        steps = 8

    print(f'[Parameters] Adaptive CFG = {adaptive_cfg}')
    print(f'[Parameters] Sharpness = {sharpness}')

@@ -305,16 +304,7 @@ def worker():
    if 'fast' in uov_method:
        skip_prompt_processing = True
    else:
        steps = 18

        if performance_selection == 'Speed':
            steps = 18

        if performance_selection == 'Quality':
            steps = 36

        if performance_selection == 'Extreme Speed':
            steps = 8
        steps = performance_selection.steps_uov()

    progressbar(async_task, 1, 'Downloading upscale models ...')
    modules.config.downloading_upscale_model()

@@ -830,31 +820,50 @@ def worker():
    img_paths = []
    for x in imgs:
        d = [
            ('Prompt', task['log_positive_prompt']),
            ('Negative Prompt', task['log_negative_prompt']),
            ('Fooocus V2 Expansion', task['expansion']),
            ('Styles', str(raw_style_selections)),
            ('Performance', performance_selection),
            ('Resolution', str((width, height))),
            ('Sharpness', sharpness),
            ('Guidance Scale', guidance_scale),
            ('ADM Guidance', str((
                modules.patch.patch_settings[pid].positive_adm_scale,
                modules.patch.patch_settings[pid].negative_adm_scale,
                modules.patch.patch_settings[pid].adm_scaler_end))),
            ('Base Model', base_model_name),
            ('Refiner Model', refiner_model_name),
            ('Refiner Switch', refiner_switch),
            ('Sampler', sampler_name),
            ('Scheduler', scheduler_name),
            ('Seed', task['task_seed']),
        ]
        d = [('Prompt', 'prompt', task['log_positive_prompt']),
             ('Negative Prompt', 'negative_prompt', task['log_negative_prompt']),
             ('Fooocus V2 Expansion', 'prompt_expansion', task['expansion']),
             ('Styles', 'styles', str(raw_style_selections)),
             ('Performance', 'performance', performance_selection.value),
             ('Resolution', 'resolution', str((width, height))),
             ('Guidance Scale', 'guidance_scale', guidance_scale),
             ('Sharpness', 'sharpness', sharpness),
             ('ADM Guidance', 'adm_guidance', str((
                 modules.patch.patch_settings[pid].positive_adm_scale,
                 modules.patch.patch_settings[pid].negative_adm_scale,
                 modules.patch.patch_settings[pid].adm_scaler_end))),
             ('Base Model', 'base_model', base_model_name),
             ('Refiner Model', 'refiner_model', refiner_model_name),
             ('Refiner Switch', 'refiner_switch', refiner_switch)]

        if refiner_model_name != 'None':
            if overwrite_switch > 0:
                d.append(('Overwrite Switch', 'overwrite_switch', overwrite_switch))
            if refiner_swap_method != flags.refiner_swap_method:
                d.append(('Refiner Swap Method', 'refiner_swap_method', refiner_swap_method))
        if modules.patch.patch_settings[pid].adaptive_cfg != modules.config.default_cfg_tsnr:
            d.append(('CFG Mimicking from TSNR', 'adaptive_cfg', modules.patch.patch_settings[pid].adaptive_cfg))

        d.append(('Sampler', 'sampler', sampler_name))
        d.append(('Scheduler', 'scheduler', scheduler_name))
        d.append(('Seed', 'seed', task['task_seed']))

        if freeu_enabled:
            d.append(('FreeU', 'freeu', str((freeu_b1, freeu_b2, freeu_s1, freeu_s2))))

        metadata_parser = None
        if save_metadata_to_images:
            metadata_parser = modules.meta_parser.get_metadata_parser(metadata_scheme)
            metadata_parser.set_data(task['log_positive_prompt'], task['positive'],
                                     task['log_negative_prompt'], task['negative'],
                                     steps, base_model_name, refiner_model_name, loras)

        for li, (n, w) in enumerate(loras):
            if n != 'None':
                d.append((f'LoRA {li + 1}', f'{n} : {w}'))
        d.append(('Version', 'v' + fooocus_version.version))
        img_paths.append(log(x, d))
                d.append((f'LoRA {li + 1}', f'lora_combined_{li + 1}', f'{n} : {w}'))

        d.append(('Version', 'version', 'Fooocus v' + fooocus_version.version))
        img_paths.append(log(x, d, metadata_parser))

    yield_result(async_task, img_paths, do_not_show_finished_images=len(tasks) == 1 or disable_intermediate_results)
except ldm_patched.modules.model_management.InterruptProcessingException as e:
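The log entries change here from (label, value) pairs to (label, key, value) triples: the human-readable label drives the HTML log, while the machine-readable key drives the metadata parsers. A minimal standalone sketch of how the two consumers split the triples (values are hypothetical):

```python
# Each entry: (label for the HTML log, key for metadata schemes, value).
metadata = [
    ('Prompt', 'prompt', 'a photo of a cat'),
    ('Performance', 'performance', 'Speed'),
    ('Resolution', 'resolution', str((1152, 896))),
]

# The HTML log renders label/value columns:
for label, _, value in metadata:
    print(f'{label}: {value}')

# The metadata parsers only care about key/value:
data = {k: v for _, k, v in metadata}
assert data['performance'] == 'Speed'
```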
modules/config.py

@@ -8,7 +8,7 @@ import modules.sdxl_styles

from modules.model_loader import load_file_from_url
from modules.util import get_files_from_folder, makedirs_with_log

from modules.flags import Performance, MetadataScheme

config_path = os.path.abspath("./config.txt")
config_example_path = os.path.abspath("config_modification_tutorial.txt")

@@ -293,8 +293,8 @@ default_prompt = get_config_item_or_set_default(
)
default_performance = get_config_item_or_set_default(
    key='default_performance',
    default_value='Speed',
    validator=lambda x: x in modules.flags.performance_selections
    default_value=Performance.SPEED.value,
    validator=lambda x: x in Performance.list()
)
default_advanced_checkbox = get_config_item_or_set_default(
    key='default_advanced_checkbox',

@@ -369,6 +369,21 @@ example_inpaint_prompts = get_config_item_or_set_default(
    ],
    validator=lambda x: isinstance(x, list) and all(isinstance(v, str) for v in x)
)
default_save_metadata_to_images = get_config_item_or_set_default(
    key='default_save_metadata_to_images',
    default_value=False,
    validator=lambda x: isinstance(x, bool)
)
default_metadata_scheme = get_config_item_or_set_default(
    key='default_metadata_scheme',
    default_value=MetadataScheme.FOOOCUS.value,
    validator=lambda x: x in [y[1] for y in modules.flags.metadata_scheme if y[1] == x]
)
metadata_created_by = get_config_item_or_set_default(
    key='metadata_created_by',
    default_value='',
    validator=lambda x: isinstance(x, str)
)

example_inpaint_prompts = [[x] for x in example_inpaint_prompts]

@@ -391,6 +406,7 @@ possible_preset_keys = [
    "default_prompt_negative",
    "default_styles",
    "default_aspect_ratio",
    "default_save_metadata_to_images",
    "checkpoint_downloads",
    "embeddings_downloads",
    "lora_downloads",
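For orientation, a hypothetical minimal version of the get_config_item_or_set_default helper these blocks rely on; the real implementation in modules/config.py also persists resolved defaults back to config.txt, which this sketch omits. It only illustrates the key/default_value/validator contract:

```python
def get_config_item_or_set_default(key, default_value, validator, config=None):
    # Sketch only: look the key up in the loaded config dict and fall back
    # to the default when the stored value is missing or fails validation.
    config = config if config is not None else {}
    value = config.get(key, default_value)
    if not validator(value):
        value = default_value
    return value

# Usage mirroring the diff:
default_save_metadata_to_images = get_config_item_or_set_default(
    key='default_save_metadata_to_images',
    default_value=False,
    validator=lambda x: isinstance(x, bool),
)
```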
modules/flags.py

@@ -1,3 +1,5 @@
from enum import IntEnum, Enum

disabled = 'Disabled'
enabled = 'Enabled'
subtle_variation = 'Vary (Subtle)'

@@ -10,16 +12,49 @@ uov_list = [
    disabled, subtle_variation, strong_variation, upscale_15, upscale_2, upscale_fast
]

KSAMPLER_NAMES = ["euler", "euler_ancestral", "heun", "heunpp2", "dpm_2", "dpm_2_ancestral",
                  "lms", "dpm_fast", "dpm_adaptive", "dpmpp_2s_ancestral", "dpmpp_sde", "dpmpp_sde_gpu",
                  "dpmpp_2m", "dpmpp_2m_sde", "dpmpp_2m_sde_gpu", "dpmpp_3m_sde", "dpmpp_3m_sde_gpu", "ddpm", "lcm"]
CIVITAI_NO_KARRAS = ["euler", "euler_ancestral", "heun", "dpm_fast", "dpm_adaptive", "ddim", "uni_pc"]

# fooocus: a1111 (Civitai)
KSAMPLER = {
    "euler": "Euler",
    "euler_ancestral": "Euler a",
    "heun": "Heun",
    "heunpp2": "",
    "dpm_2": "DPM2",
    "dpm_2_ancestral": "DPM2 a",
    "lms": "LMS",
    "dpm_fast": "DPM fast",
    "dpm_adaptive": "DPM adaptive",
    "dpmpp_2s_ancestral": "DPM++ 2S a",
    "dpmpp_sde": "DPM++ SDE",
    "dpmpp_sde_gpu": "DPM++ SDE",
    "dpmpp_2m": "DPM++ 2M",
    "dpmpp_2m_sde": "DPM++ 2M SDE",
    "dpmpp_2m_sde_gpu": "DPM++ 2M SDE",
    "dpmpp_3m_sde": "",
    "dpmpp_3m_sde_gpu": "",
    "ddpm": "",
    "lcm": "LCM"
}

SAMPLER_EXTRA = {
    "ddim": "DDIM",
    "uni_pc": "UniPC",
    "uni_pc_bh2": ""
}

SAMPLERS = KSAMPLER | SAMPLER_EXTRA

KSAMPLER_NAMES = list(KSAMPLER.keys())

SCHEDULER_NAMES = ["normal", "karras", "exponential", "sgm_uniform", "simple", "ddim_uniform", "lcm", "turbo"]
SAMPLER_NAMES = KSAMPLER_NAMES + ["ddim", "uni_pc", "uni_pc_bh2"]
SAMPLER_NAMES = KSAMPLER_NAMES + list(SAMPLER_EXTRA.keys())

sampler_list = SAMPLER_NAMES
scheduler_list = SCHEDULER_NAMES

refiner_swap_method = 'joint'

cn_ip = "ImagePrompt"
cn_ip_face = "FaceSwap"
cn_canny = "PyraCanny"

@@ -33,8 +68,6 @@ default_parameters = {
} # stop, weight

inpaint_engine_versions = ['None', 'v1', 'v2.5', 'v2.6']
performance_selections = ['Speed', 'Quality', 'Extreme Speed']

inpaint_option_default = 'Inpaint or Outpaint (default)'
inpaint_option_detail = 'Improve Detail (face, hand, eyes, etc.)'
inpaint_option_modify = 'Modify Content (add objects, change background, etc.)'

@@ -42,3 +75,49 @@ inpaint_options = [inpaint_option_default, inpaint_option_detail, inpaint_option

desc_type_photo = 'Photograph'
desc_type_anime = 'Art/Anime'


class MetadataScheme(Enum):
    FOOOCUS = 'fooocus'
    A1111 = 'a1111'


metadata_scheme = [
    (f'{MetadataScheme.FOOOCUS.value} (json)', MetadataScheme.FOOOCUS.value),
    (f'{MetadataScheme.A1111.value} (plain text)', MetadataScheme.A1111.value),
]

lora_count = 5

controlnet_image_count = 4


class Steps(IntEnum):
    QUALITY = 60
    SPEED = 30
    EXTREME_SPEED = 8


class StepsUOV(IntEnum):
    QUALITY = 36
    SPEED = 18
    EXTREME_SPEED = 8


class Performance(Enum):
    QUALITY = 'Quality'
    SPEED = 'Speed'
    EXTREME_SPEED = 'Extreme Speed'

    @classmethod
    def list(cls) -> list:
        return list(map(lambda c: c.value, cls))

    def steps(self) -> int | None:
        return Steps[self.name].value if Steps[self.name] else None

    def steps_uov(self) -> int | None:
        return StepsUOV[self.name].value if Steps[self.name] else None


performance_selections = Performance.list()
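A quick self-contained illustration of how these enums replace the hard-coded step tables in the worker. It mirrors the classes above in reduced form; note the committed steps_uov guard checks Steps rather than StepsUOV, which only works because the two enums share member names:

```python
from enum import Enum, IntEnum

class Steps(IntEnum):
    QUALITY = 60
    SPEED = 30
    EXTREME_SPEED = 8

class Performance(Enum):
    QUALITY = 'Quality'
    SPEED = 'Speed'
    EXTREME_SPEED = 'Extreme Speed'

    def steps(self) -> int:
        # Member names line up between Performance and Steps,
        # so the UI display string maps straight to a step count.
        return Steps[self.name].value

selection = Performance('Extreme Speed')   # value coming from the UI dropdown
assert selection is Performance.EXTREME_SPEED
assert selection.steps() == 8
```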
modules/meta_parser.py

@@ -1,45 +1,113 @@
import json
import os
import re
from abc import ABC, abstractmethod
from pathlib import Path

import gradio as gr
from PIL import Image

import modules.config
import modules.sdxl_styles
from modules.flags import MetadataScheme, Performance, Steps
from modules.flags import SAMPLERS, CIVITAI_NO_KARRAS
from modules.util import quote, unquote, extract_styles_from_prompt, is_json, get_file_from_folder_list, calculate_sha256

re_param_code = r'\s*(\w[\w \-/]+):\s*("(?:\\.|[^\\"])+"|[^,]*)(?:,|$)'
re_param = re.compile(re_param_code)
re_imagesize = re.compile(r"^(\d+)x(\d+)$")

hash_cache = {}


def load_parameter_button_click(raw_prompt_txt, is_generating):
    loaded_parameter_dict = json.loads(raw_prompt_txt)
def load_parameter_button_click(raw_metadata: dict | str, is_generating: bool):
    loaded_parameter_dict = raw_metadata
    if isinstance(raw_metadata, str):
        loaded_parameter_dict = json.loads(raw_metadata)
    assert isinstance(loaded_parameter_dict, dict)

    results = [True, 1]
    results = [len(loaded_parameter_dict) > 0, 1]

    get_str('prompt', 'Prompt', loaded_parameter_dict, results)
    get_str('negative_prompt', 'Negative Prompt', loaded_parameter_dict, results)
    get_list('styles', 'Styles', loaded_parameter_dict, results)
    get_str('performance', 'Performance', loaded_parameter_dict, results)
    get_steps('steps', 'Steps', loaded_parameter_dict, results)
    get_float('overwrite_switch', 'Overwrite Switch', loaded_parameter_dict, results)
    get_resolution('resolution', 'Resolution', loaded_parameter_dict, results)
    get_float('guidance_scale', 'Guidance Scale', loaded_parameter_dict, results)
    get_float('sharpness', 'Sharpness', loaded_parameter_dict, results)
    get_adm_guidance('adm_guidance', 'ADM Guidance', loaded_parameter_dict, results)
    get_str('refiner_swap_method', 'Refiner Swap Method', loaded_parameter_dict, results)
    get_float('adaptive_cfg', 'CFG Mimicking from TSNR', loaded_parameter_dict, results)
    get_str('base_model', 'Base Model', loaded_parameter_dict, results)
    get_str('refiner_model', 'Refiner Model', loaded_parameter_dict, results)
    get_float('refiner_switch', 'Refiner Switch', loaded_parameter_dict, results)
    get_str('sampler', 'Sampler', loaded_parameter_dict, results)
    get_str('scheduler', 'Scheduler', loaded_parameter_dict, results)
    get_seed('seed', 'Seed', loaded_parameter_dict, results)

    if is_generating:
        results.append(gr.update())
    else:
        results.append(gr.update(visible=True))

    results.append(gr.update(visible=False))

    get_freeu('freeu', 'FreeU', loaded_parameter_dict, results)

    for i in range(modules.config.default_max_lora_number):
        get_lora(f'lora_combined_{i + 1}', f'LoRA {i + 1}', loaded_parameter_dict, results)

    return results


def get_str(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = loaded_parameter_dict.get('Prompt', None)
        h = source_dict.get(key, source_dict.get(fallback, default))
        assert isinstance(h, str)
        results.append(h)
    except:
        results.append(gr.update())

    try:
        h = loaded_parameter_dict.get('Negative Prompt', None)
        assert isinstance(h, str)
        results.append(h)
    except:
        results.append(gr.update())


def get_list(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = loaded_parameter_dict.get('Styles', None)
        h = source_dict.get(key, source_dict.get(fallback, default))
        h = eval(h)
        assert isinstance(h, list)
        results.append(h)
    except:
        results.append(gr.update())


def get_float(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = loaded_parameter_dict.get('Performance', None)
        assert isinstance(h, str)
        h = source_dict.get(key, source_dict.get(fallback, default))
        assert h is not None
        h = float(h)
        results.append(h)
    except:
        results.append(gr.update())


def get_steps(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = loaded_parameter_dict.get('Resolution', None)
        h = source_dict.get(key, source_dict.get(fallback, default))
        assert h is not None
        h = int(h)
        # if not in steps or in steps and performance is not the same
        if h not in iter(Steps) or Steps(h).name.casefold() != source_dict.get('performance', '').replace(' ', '_').casefold():
            results.append(h)
            return
        results.append(-1)
    except:
        results.append(-1)


def get_resolution(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = source_dict.get(key, source_dict.get(fallback, default))
        width, height = eval(h)
        formatted = modules.config.add_ratio(f'{width}*{height}')
        if formatted in modules.config.available_aspect_ratios:

@@ -55,24 +123,22 @@ def load_parameter_button_click(raw_prompt_txt, is_generating):
            results.append(gr.update())
            results.append(gr.update())


def get_seed(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = loaded_parameter_dict.get('Sharpness', None)
        h = source_dict.get(key, source_dict.get(fallback, default))
        assert h is not None
        h = float(h)
        h = int(h)
        results.append(False)
        results.append(h)
    except:
        results.append(gr.update())

    try:
        h = loaded_parameter_dict.get('Guidance Scale', None)
        assert h is not None
        h = float(h)
        results.append(h)
    except:
        results.append(gr.update())


def get_adm_guidance(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = loaded_parameter_dict.get('ADM Guidance', None)
        h = source_dict.get(key, source_dict.get(fallback, default))
        p, n, e = eval(h)
        results.append(float(p))
        results.append(float(n))

@@ -82,69 +148,368 @@ def load_parameter_button_click(raw_prompt_txt, is_generating):
        results.append(gr.update())
        results.append(gr.update())

    try:
        h = loaded_parameter_dict.get('Base Model', None)
        assert isinstance(h, str)
        results.append(h)
    except:
        results.append(gr.update())


def get_freeu(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = loaded_parameter_dict.get('Refiner Model', None)
        assert isinstance(h, str)
        results.append(h)
        h = source_dict.get(key, source_dict.get(fallback, default))
        b1, b2, s1, s2 = eval(h)
        results.append(True)
        results.append(float(b1))
        results.append(float(b2))
        results.append(float(s1))
        results.append(float(s2))
    except:
        results.append(gr.update())

    try:
        h = loaded_parameter_dict.get('Refiner Switch', None)
        assert h is not None
        h = float(h)
        results.append(h)
    except:
        results.append(gr.update())

    try:
        h = loaded_parameter_dict.get('Sampler', None)
        assert isinstance(h, str)
        results.append(h)
    except:
        results.append(gr.update())

    try:
        h = loaded_parameter_dict.get('Scheduler', None)
        assert isinstance(h, str)
        results.append(h)
    except:
        results.append(gr.update())

    try:
        h = loaded_parameter_dict.get('Seed', None)
        assert h is not None
        h = int(h)
        results.append(False)
        results.append(h)
        results.append(gr.update())
        results.append(gr.update())
        results.append(gr.update())
        results.append(gr.update())


def get_lora(key: str, fallback: str | None, source_dict: dict, results: list):
    try:
        n, w = source_dict.get(key, source_dict.get(fallback)).split(' : ')
        w = float(w)
        results.append(True)
        results.append(n)
        results.append(w)
    except:
        results.append(gr.update())
        results.append(gr.update())
        results.append(True)
        results.append('None')
        results.append(1)

    if is_generating:
        results.append(gr.update())
    else:
        results.append(gr.update(visible=True))

    results.append(gr.update(visible=False))

    for i in range(1, modules.config.default_max_lora_number + 1):
        try:
            n, w = loaded_parameter_dict.get(f'LoRA {i}', ' : ').split(' : ')
            w = float(w)
            results.append(True)
            results.append(n)
            results.append(w)
        except:
            results.append(True)
            results.append('None')
            results.append(1.0)
def get_sha256(filepath):
    global hash_cache

    return results
    if filepath not in hash_cache:
        hash_cache[filepath] = calculate_sha256(filepath)

    return hash_cache[filepath]
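The hash cache above makes the expensive sha256 a one-time cost per model file (the commit message cites roughly 5 seconds per model hash). The same lazy-caching idea can be expressed with the standard library; a minimal sketch offered as an alternative formulation, not the committed code:

```python
import functools
import hashlib

@functools.lru_cache(maxsize=None)
def get_sha256_cached(filepath: str, length: int = 10) -> str:
    # Hash lazily on first request; lru_cache memoizes repeat calls.
    digest = hashlib.sha256()
    with open(filepath, 'rb') as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b''):
            digest.update(chunk)
    return digest.hexdigest()[:length]
```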

class MetadataParser(ABC):
    def __init__(self):
        self.raw_prompt: str = ''
        self.full_prompt: str = ''
        self.raw_negative_prompt: str = ''
        self.full_negative_prompt: str = ''
        self.steps: int = 30
        self.base_model_name: str = ''
        self.base_model_hash: str = ''
        self.refiner_model_name: str = ''
        self.refiner_model_hash: str = ''
        self.loras: list = []

    @abstractmethod
    def get_scheme(self) -> MetadataScheme:
        raise NotImplementedError

    @abstractmethod
    def parse_json(self, metadata: dict | str) -> dict:
        raise NotImplementedError

    @abstractmethod
    def parse_string(self, metadata: dict) -> str:
        raise NotImplementedError

    def set_data(self, raw_prompt, full_prompt, raw_negative_prompt, full_negative_prompt, steps, base_model_name, refiner_model_name, loras):
        self.raw_prompt = raw_prompt
        self.full_prompt = full_prompt
        self.raw_negative_prompt = raw_negative_prompt
        self.full_negative_prompt = full_negative_prompt
        self.steps = steps
        self.base_model_name = Path(base_model_name).stem

        base_model_path = get_file_from_folder_list(base_model_name, modules.config.paths_checkpoints)
        self.base_model_hash = get_sha256(base_model_path)

        if refiner_model_name not in ['', 'None']:
            self.refiner_model_name = Path(refiner_model_name).stem
            refiner_model_path = get_file_from_folder_list(refiner_model_name, modules.config.paths_checkpoints)
            self.refiner_model_hash = get_sha256(refiner_model_path)

        self.loras = []
        for (lora_name, lora_weight) in loras:
            if lora_name != 'None':
                lora_path = get_file_from_folder_list(lora_name, modules.config.paths_loras)
                lora_hash = get_sha256(lora_path)
                self.loras.append((Path(lora_name).stem, lora_weight, lora_hash))


class A1111MetadataParser(MetadataParser):
    def get_scheme(self) -> MetadataScheme:
        return MetadataScheme.A1111

    fooocus_to_a1111 = {
        'raw_prompt': 'Raw prompt',
        'raw_negative_prompt': 'Raw negative prompt',
        'negative_prompt': 'Negative prompt',
        'styles': 'Styles',
        'performance': 'Performance',
        'steps': 'Steps',
        'sampler': 'Sampler',
        'scheduler': 'Scheduler',
        'guidance_scale': 'CFG scale',
        'seed': 'Seed',
        'resolution': 'Size',
        'sharpness': 'Sharpness',
        'adm_guidance': 'ADM Guidance',
        'refiner_swap_method': 'Refiner Swap Method',
        'adaptive_cfg': 'Adaptive CFG',
        'overwrite_switch': 'Overwrite Switch',
        'freeu': 'FreeU',
        'base_model': 'Model',
        'base_model_hash': 'Model hash',
        'refiner_model': 'Refiner',
        'refiner_model_hash': 'Refiner hash',
        'lora_hashes': 'Lora hashes',
        'lora_weights': 'Lora weights',
        'created_by': 'User',
        'version': 'Version'
    }

    def parse_json(self, metadata: str) -> dict:
        metadata_prompt = ''
        metadata_negative_prompt = ''

        done_with_prompt = False

        *lines, lastline = metadata.strip().split("\n")
        if len(re_param.findall(lastline)) < 3:
            lines.append(lastline)
            lastline = ''

        for line in lines:
            line = line.strip()
            if line.startswith(f"{self.fooocus_to_a1111['negative_prompt']}:"):
                done_with_prompt = True
                line = line[len(f"{self.fooocus_to_a1111['negative_prompt']}:"):].strip()
            if done_with_prompt:
                metadata_negative_prompt += ('' if metadata_negative_prompt == '' else "\n") + line
            else:
                metadata_prompt += ('' if metadata_prompt == '' else "\n") + line

        found_styles, prompt, negative_prompt = extract_styles_from_prompt(metadata_prompt, metadata_negative_prompt)

        data = {
            'prompt': prompt,
            'negative_prompt': negative_prompt
        }

        for k, v in re_param.findall(lastline):
            try:
                if v != '' and v[0] == '"' and v[-1] == '"':
                    v = unquote(v)

                m = re_imagesize.match(v)
                if m is not None:
                    data['resolution'] = str((m.group(1), m.group(2)))
                else:
                    data[list(self.fooocus_to_a1111.keys())[list(self.fooocus_to_a1111.values()).index(k)]] = v
            except Exception:
                print(f"Error parsing \"{k}: {v}\"")

        # workaround for multiline prompts
        if 'raw_prompt' in data:
            data['prompt'] = data['raw_prompt']
            raw_prompt = data['raw_prompt'].replace("\n", ', ')
            if metadata_prompt != raw_prompt and modules.sdxl_styles.fooocus_expansion not in found_styles:
                found_styles.append(modules.sdxl_styles.fooocus_expansion)

        if 'raw_negative_prompt' in data:
            data['negative_prompt'] = data['raw_negative_prompt']

        data['styles'] = str(found_styles)

        # try to load performance based on steps, fallback for direct A1111 imports
        if 'steps' in data and 'performance' not in data:
            try:
                data['performance'] = Performance[Steps(int(data['steps'])).name].value
            except ValueError | KeyError:
                pass

        if 'sampler' in data:
            data['sampler'] = data['sampler'].replace(' Karras', '')
            # get key
            for k, v in SAMPLERS.items():
                if v == data['sampler']:
                    data['sampler'] = k
                    break

        for key in ['base_model', 'refiner_model']:
            if key in data:
                for filename in modules.config.model_filenames:
                    path = Path(filename)
                    if data[key] == path.stem:
                        data[key] = filename
                        break

        if 'lora_hashes' in data:
            lora_filenames = modules.config.lora_filenames.copy()
            lora_filenames.remove(modules.config.downloading_sdxl_lcm_lora())
            for li, lora in enumerate(data['lora_hashes'].split(', ')):
                lora_name, lora_hash, lora_weight = lora.split(': ')
                for filename in lora_filenames:
                    path = Path(filename)
                    if lora_name == path.stem:
                        data[f'lora_combined_{li + 1}'] = f'{filename} : {lora_weight}'
                        break

        return data
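To see what re_param extracts from the last line of an A1111 parameter block, a small standalone demonstration with made-up values:

```python
import re

re_param = re.compile(r'\s*(\w[\w \-/]+):\s*("(?:\\.|[^\\"])+"|[^,]*)(?:,|$)')

# Hypothetical last line of an A1111 "parameters" text chunk:
lastline = 'Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 4.0, Seed: 12345, Size: 1152x896'

print(re_param.findall(lastline))
# [('Steps', '30'), ('Sampler', 'DPM++ 2M Karras'), ('CFG scale', '4.0'),
#  ('Seed', '12345'), ('Size', '1152x896')]
```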

    def parse_string(self, metadata: dict) -> str:
        data = {k: v for _, k, v in metadata}

        width, height = eval(data['resolution'])

        sampler = data['sampler']
        scheduler = data['scheduler']
        if sampler in SAMPLERS and SAMPLERS[sampler] != '':
            sampler = SAMPLERS[sampler]
            if sampler not in CIVITAI_NO_KARRAS and scheduler == 'karras':
                sampler += f' Karras'

        generation_params = {
            self.fooocus_to_a1111['steps']: self.steps,
            self.fooocus_to_a1111['sampler']: sampler,
            self.fooocus_to_a1111['seed']: data['seed'],
            self.fooocus_to_a1111['resolution']: f'{width}x{height}',
            self.fooocus_to_a1111['guidance_scale']: data['guidance_scale'],
            self.fooocus_to_a1111['sharpness']: data['sharpness'],
            self.fooocus_to_a1111['adm_guidance']: data['adm_guidance'],
            self.fooocus_to_a1111['base_model']: Path(data['base_model']).stem,
            self.fooocus_to_a1111['base_model_hash']: self.base_model_hash,

            self.fooocus_to_a1111['performance']: data['performance'],
            self.fooocus_to_a1111['scheduler']: scheduler,
            # workaround for multiline prompts
            self.fooocus_to_a1111['raw_prompt']: self.raw_prompt,
            self.fooocus_to_a1111['raw_negative_prompt']: self.raw_negative_prompt,
        }

        if self.refiner_model_name not in ['', 'None']:
            generation_params |= {
                self.fooocus_to_a1111['refiner_model']: self.refiner_model_name,
                self.fooocus_to_a1111['refiner_model_hash']: self.refiner_model_hash
            }

        for key in ['adaptive_cfg', 'overwrite_switch', 'refiner_swap_method', 'freeu']:
            if key in data:
                generation_params[self.fooocus_to_a1111[key]] = data[key]

        lora_hashes = []
        for index, (lora_name, lora_weight, lora_hash) in enumerate(self.loras):
            # workaround for Fooocus not knowing LoRA name in LoRA metadata
            lora_hashes.append(f'{lora_name}: {lora_hash}: {lora_weight}')
        lora_hashes_string = ', '.join(lora_hashes)

        generation_params |= {
            self.fooocus_to_a1111['lora_hashes']: lora_hashes_string,
            self.fooocus_to_a1111['version']: data['version']
        }

        if modules.config.metadata_created_by != '':
            generation_params[self.fooocus_to_a1111['created_by']] = modules.config.metadata_created_by

        generation_params_text = ", ".join(
            [k if k == v else f'{k}: {quote(v)}' for k, v in generation_params.items() if
             v is not None])
        positive_prompt_resolved = ', '.join(self.full_prompt)
        negative_prompt_resolved = ', '.join(self.full_negative_prompt)
        negative_prompt_text = f"\nNegative prompt: {negative_prompt_resolved}" if negative_prompt_resolved else ""
        return f"{positive_prompt_resolved}{negative_prompt_text}\n{generation_params_text}".strip()


class FooocusMetadataParser(MetadataParser):
    def get_scheme(self) -> MetadataScheme:
        return MetadataScheme.FOOOCUS

    def parse_json(self, metadata: dict) -> dict:
        model_filenames = modules.config.model_filenames.copy()
        lora_filenames = modules.config.lora_filenames.copy()
        lora_filenames.remove(modules.config.downloading_sdxl_lcm_lora())

        for key, value in metadata.items():
            if value in ['', 'None']:
                continue
            if key in ['base_model', 'refiner_model']:
                metadata[key] = self.replace_value_with_filename(key, value, model_filenames)
            elif key.startswith('lora_combined_'):
                metadata[key] = self.replace_value_with_filename(key, value, lora_filenames)
            else:
                continue

        return metadata

    def parse_string(self, metadata: list) -> str:
        for li, (label, key, value) in enumerate(metadata):
            # remove model folder paths from metadata
            if key.startswith('lora_combined_'):
                name, weight = value.split(' : ')
                name = Path(name).stem
                value = f'{name} : {weight}'
                metadata[li] = (label, key, value)

        res = {k: v for _, k, v in metadata}

        res['full_prompt'] = self.full_prompt
        res['full_negative_prompt'] = self.full_negative_prompt
        res['steps'] = self.steps
        res['base_model'] = self.base_model_name
        res['base_model_hash'] = self.base_model_hash

        if self.refiner_model_name not in ['', 'None']:
            res['refiner_model'] = self.refiner_model_name
            res['refiner_model_hash'] = self.refiner_model_hash

        res['loras'] = self.loras

        if modules.config.metadata_created_by != '':
            res['created_by'] = modules.config.metadata_created_by

        return json.dumps(dict(sorted(res.items())))

    @staticmethod
    def replace_value_with_filename(key, value, filenames):
        for filename in filenames:
            path = Path(filename)
            if key.startswith('lora_combined_'):
                name, weight = value.split(' : ')
                if name == path.stem:
                    return f'{filename} : {weight}'
            elif value == path.stem:
                return filename


def get_metadata_parser(metadata_scheme: MetadataScheme) -> MetadataParser:
    match metadata_scheme:
        case MetadataScheme.FOOOCUS:
            return FooocusMetadataParser()
        case MetadataScheme.A1111:
            return A1111MetadataParser()
        case _:
            raise NotImplementedError


def read_info_from_image(filepath) -> tuple[str | None, dict, MetadataScheme | None]:
    with Image.open(filepath) as image:
        items = (image.info or {}).copy()

    parameters = items.pop('parameters', None)
    if parameters is not None and is_json(parameters):
        parameters = json.loads(parameters)

    try:
        metadata_scheme = MetadataScheme(items.pop('fooocus_scheme', None))
    except ValueError:
        metadata_scheme = None

    # broad fallback
    if isinstance(parameters, dict):
        metadata_scheme = MetadataScheme.FOOOCUS

    if isinstance(parameters, str):
        metadata_scheme = MetadataScheme.A1111

    return parameters, items, metadata_scheme
modules/private_logger.py

@@ -5,7 +5,9 @@ import json
import urllib.parse

from PIL import Image
from PIL.PngImagePlugin import PngInfo
from modules.util import generate_temp_filename
from modules.meta_parser import MetadataParser
from tempfile import gettempdir

log_cache = {}

@@ -18,11 +20,21 @@ def get_current_html_path():
    return html_name


def log(img, dic) -> str:
def log(img, metadata, metadata_parser: MetadataParser | None = None) -> str:
    path_outputs = args_manager.args.temp_path if args_manager.args.disable_image_log else modules.config.path_outputs
    date_string, local_temp_filename, only_name = generate_temp_filename(folder=path_outputs, extension='png')
    os.makedirs(os.path.dirname(local_temp_filename), exist_ok=True)
    Image.fromarray(img).save(local_temp_filename)

    parsed_parameters = metadata_parser.parse_string(metadata) if metadata_parser is not None else ''
    image = Image.fromarray(img)

    if parsed_parameters != '':
        pnginfo = PngInfo()
        pnginfo.add_text('parameters', parsed_parameters)
        pnginfo.add_text('fooocus_scheme', metadata_parser.get_scheme().value)
    else:
        pnginfo = None
    image.save(local_temp_filename, pnginfo=pnginfo)

    if args_manager.args.disable_image_log:
        return local_temp_filename

@@ -34,7 +46,7 @@ def log(img, dic) -> str:
    "body { background-color: #121212; color: #E0E0E0; } "
    "a { color: #BB86FC; } "
    ".metadata { border-collapse: collapse; width: 100%; } "
    ".metadata .key { width: 15%; } "
    ".metadata .label { width: 15%; } "
    ".metadata .value { width: 85%; font-weight: bold; } "
    ".metadata th, .metadata td { border: 1px solid #4d4d4d; padding: 4px; } "
    ".image-container img { height: auto; max-width: 512px; display: block; padding-right:10px; } "

@@ -87,13 +99,13 @@ def log(img, dic) -> str:
    item = f"<div id=\"{div_name}\" class=\"image-container\"><hr><table><tr>\n"
    item += f"<td><a href=\"{only_name}\" target=\"_blank\"><img src='{only_name}' onerror=\"this.closest('.image-container').style.display='none';\" loading='lazy'/></a><div>{only_name}</div></td>"
    item += "<td><table class='metadata'>"
    for key, value in dic:
        value_txt = str(value).replace('\n', ' <br/> ')
        item += f"<tr><td class='key'>{key}</td><td class='value'>{value_txt}</td></tr>\n"
    for label, key, value in metadata:
        value_txt = str(value).replace('\n', ' </br> ')
        item += f"<tr><td class='label'>{label}</td><td class='value'>{value_txt}</td></tr>\n"
    item += "</table>"

    js_txt = urllib.parse.quote(json.dumps({k: v for k, v in dic}, indent=0), safe='')
    item += f"<br/><button onclick=\"to_clipboard('{js_txt}')\">Copy to Clipboard</button>"
    js_txt = urllib.parse.quote(json.dumps({k: v for _, k, v in metadata}, indent=0), safe='')
    item += f"</br><button onclick=\"to_clipboard('{js_txt}')\">Copy to Clipboard</button>"

    item += "</td>"
    item += "</tr></table></div>\n\n"
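The PNG embedding itself is plain Pillow text chunks. A self-contained round-trip sketch (hypothetical values) of what log writes and what read_info_from_image later reads back:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Write: attach the serialized parameters and the scheme marker.
image = Image.new('RGB', (64, 64))          # stand-in for the generated image
pnginfo = PngInfo()
pnginfo.add_text('parameters', '{"prompt": "a photo of a cat", "seed": "12345"}')
pnginfo.add_text('fooocus_scheme', 'fooocus')
image.save('example.png', pnginfo=pnginfo)

# Read: the text chunks come back through image.info.
with Image.open('example.png') as reloaded:
    print(reloaded.info['parameters'])      # the JSON string above
    print(reloaded.info['fooocus_scheme'])  # 'fooocus'
```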
modules/util.py

@@ -1,15 +1,20 @@
import typing

import numpy as np
import datetime
import random
import math
import os
import cv2
import json

from PIL import Image
from hashlib import sha256

import modules.sdxl_styles

LANCZOS = (Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.LANCZOS)

HASH_SHA256_LENGTH = 10

def erode_or_dilate(x, k):
    k = int(k)

@@ -170,13 +175,173 @@ def get_files_from_folder(folder_path, exensions=None, name_filter=None):
    relative_path = ""
    for filename in sorted(files, key=lambda s: s.casefold()):
        _, file_extension = os.path.splitext(filename)
        if (exensions == None or file_extension.lower() in exensions) and (name_filter == None or name_filter in _):
        if (exensions is None or file_extension.lower() in exensions) and (name_filter is None or name_filter in _):
            path = os.path.join(relative_path, filename)
            filenames.append(path)

    return filenames


def calculate_sha256(filename, length=HASH_SHA256_LENGTH) -> str:
    hash_sha256 = sha256()
    blksize = 1024 * 1024

    with open(filename, "rb") as f:
        for chunk in iter(lambda: f.read(blksize), b""):
            hash_sha256.update(chunk)

    res = hash_sha256.hexdigest()
    return res[:length] if length else res


def quote(text):
    if ',' not in str(text) and '\n' not in str(text) and ':' not in str(text):
        return text

    return json.dumps(text, ensure_ascii=False)


def unquote(text):
    if len(text) == 0 or text[0] != '"' or text[-1] != '"':
        return text

    try:
        return json.loads(text)
    except Exception:
        return text


def unwrap_style_text_from_prompt(style_text, prompt):
    """
    Checks the prompt to see if the style text is wrapped around it. If so,
    returns True plus the prompt text without the style text. Otherwise, returns
    False with the original prompt.

    Note that the "cleaned" version of the style text is only used for matching
    purposes here. It isn't returned; the original style text is not modified.
    """
    stripped_prompt = prompt
    stripped_style_text = style_text
    if "{prompt}" in stripped_style_text:
        # Work out whether the prompt is wrapped in the style text. If so, we
        # return True and the "inner" prompt text that isn't part of the style.
        try:
            left, right = stripped_style_text.split("{prompt}", 2)
        except ValueError as e:
            # If the style text has multiple "{prompt}"s, we can't split it into
            # two parts. This is an error, but we can't do anything about it.
            print(f"Unable to compare style text to prompt:\n{style_text}")
            print(f"Error: {e}")
            return False, prompt, ''

        left_pos = stripped_prompt.find(left)
        right_pos = stripped_prompt.find(right)
        if 0 <= left_pos < right_pos:
            real_prompt = stripped_prompt[left_pos + len(left):right_pos]
            prompt = stripped_prompt.replace(left + real_prompt + right, '', 1)
            if prompt.startswith(", "):
                prompt = prompt[2:]
            if prompt.endswith(", "):
                prompt = prompt[:-2]
            return True, prompt, real_prompt
    else:
        # Work out whether the given prompt ends with the style text. If so, we
        # return True and the prompt text up to where the style text starts.
        if stripped_prompt.endswith(stripped_style_text):
            prompt = stripped_prompt[: len(stripped_prompt) - len(stripped_style_text)]
            if prompt.endswith(", "):
                prompt = prompt[:-2]
            return True, prompt, prompt

    return False, prompt, ''


def extract_original_prompts(style, prompt, negative_prompt):
    """
    Takes a style and compares it to the prompt and negative prompt. If the style
    matches, returns True plus the prompt and negative prompt with the style text
    removed. Otherwise, returns False with the original prompt and negative prompt.
    """
    if not style.prompt and not style.negative_prompt:
        return False, prompt, negative_prompt

    match_positive, extracted_positive, real_prompt = unwrap_style_text_from_prompt(
        style.prompt, prompt
    )
    if not match_positive:
        return False, prompt, negative_prompt, ''

    match_negative, extracted_negative, _ = unwrap_style_text_from_prompt(
        style.negative_prompt, negative_prompt
    )
    if not match_negative:
        return False, prompt, negative_prompt, ''

    return True, extracted_positive, extracted_negative, real_prompt


def extract_styles_from_prompt(prompt, negative_prompt):
    extracted = []
    applicable_styles = []

    for style_name, (style_prompt, style_negative_prompt) in modules.sdxl_styles.styles.items():
        applicable_styles.append(PromptStyle(name=style_name, prompt=style_prompt, negative_prompt=style_negative_prompt))

    real_prompt = ''

    while True:
        found_style = None

        for style in applicable_styles:
            is_match, new_prompt, new_neg_prompt, new_real_prompt = extract_original_prompts(
                style, prompt, negative_prompt
            )
            if is_match:
                found_style = style
                prompt = new_prompt
                negative_prompt = new_neg_prompt
                if real_prompt == '' and new_real_prompt != '' and new_real_prompt != prompt:
                    real_prompt = new_real_prompt
                break

        if not found_style:
            break

        applicable_styles.remove(found_style)
        extracted.append(found_style.name)

    # add prompt expansion if not all styles could be resolved
    if prompt != '':
        if real_prompt != '':
            extracted.append(modules.sdxl_styles.fooocus_expansion)
        else:
            # find real_prompt when only prompt expansion is selected
            first_word = prompt.split(', ')[0]
            first_word_positions = [i for i in range(len(prompt)) if prompt.startswith(first_word, i)]
            if len(first_word_positions) > 1:
                real_prompt = prompt[:first_word_positions[-1]]
                extracted.append(modules.sdxl_styles.fooocus_expansion)
                if real_prompt.endswith(', '):
                    real_prompt = real_prompt[:-2]

    return list(reversed(extracted)), real_prompt, negative_prompt


class PromptStyle(typing.NamedTuple):
    name: str
    prompt: str
    negative_prompt: str


def is_json(data: str) -> bool:
    try:
        loaded_json = json.loads(data)
        assert isinstance(loaded_json, dict)
    except (ValueError, AssertionError):
        return False
    return True


def get_file_from_folder_list(name, folders):
    for folder in folders:
        filename = os.path.abspath(os.path.realpath(os.path.join(folder, name)))
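quote and unquote above follow the A1111 convention for values inside the comma-separated parameter line: anything containing a comma, colon, or newline is JSON-quoted; everything else passes through untouched. A small round-trip demonstration:

```python
import json

def quote(text):
    if ',' not in str(text) and '\n' not in str(text) and ':' not in str(text):
        return text
    return json.dumps(text, ensure_ascii=False)

def unquote(text):
    if len(text) == 0 or text[0] != '"' or text[-1] != '"':
        return text
    try:
        return json.loads(text)
    except Exception:
        return text

print(quote('DPM++ 2M'))             # DPM++ 2M  (no separators, left as-is)
print(quote('(1.5, 0.8, 0.3)'))      # "(1.5, 0.8, 0.3)"  (quoted: contains commas)
print(unquote('"(1.5, 0.8, 0.3)"'))  # (1.5, 0.8, 0.3)
```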
webui.py

@@ -20,6 +20,7 @@ from modules.sdxl_styles import legal_style_names
from modules.private_logger import get_current_html_path
from modules.ui_gradio_extensions import reload_javascript
from modules.auth import auth_enabled, check_auth
from modules.util import is_json

def get_task(*args):
    args = list(args)

@@ -158,7 +159,7 @@ with shared.gradio_root:
    ip_weights = []
    ip_ctrls = []
    ip_ad_cols = []
    for _ in range(4):
    for _ in range(flags.controlnet_image_count):
        with gr.Column():
            ip_image = grh.Image(label='Image', source='upload', type='numpy', show_label=False, height=300)
            ip_images.append(ip_image)

@@ -216,6 +217,30 @@ with shared.gradio_root:
                value=flags.desc_type_photo)
    desc_btn = gr.Button(value='Describe this Image into Prompt')
    gr.HTML('<a href="https://github.com/lllyasviel/Fooocus/discussions/1363" target="_blank">\U0001F4D4 Document</a>')
with gr.TabItem(label='Metadata') as load_tab:
    with gr.Column():
        metadata_input_image = grh.Image(label='Drag any image generated by Fooocus here', source='upload', type='filepath')
        metadata_json = gr.JSON(label='Metadata')
        metadata_import_button = gr.Button(value='Apply Metadata')

    def trigger_metadata_preview(filepath):
        parameters, items, metadata_scheme = modules.meta_parser.read_info_from_image(filepath)

        results = {}
        if parameters is not None:
            results['parameters'] = parameters

        if items:
            results['items'] = items

        if isinstance(metadata_scheme, flags.MetadataScheme):
            results['metadata_scheme'] = metadata_scheme.value

        return results

    metadata_input_image.upload(trigger_metadata_preview, inputs=metadata_input_image,
                                outputs=metadata_json, queue=False, show_progress=True)

switch_js = "(x) => {if(x){viewer_to_bottom(100);viewer_to_bottom(500);}else{viewer_to_top();} return x;}"
down_js = "() => {viewer_to_bottom();}"

@@ -359,7 +384,7 @@ with shared.gradio_root:
                step=0.001, value=0.3,
                info='When to end the guidance from positive/negative ADM. ')

refiner_swap_method = gr.Dropdown(label='Refiner swap method', value='joint',
refiner_swap_method = gr.Dropdown(label='Refiner swap method', value=flags.refiner_swap_method,
                                  choices=['joint', 'separate', 'vae'])

adaptive_cfg = gr.Slider(label='CFG Mimicking from TSNR', minimum=1.0, maximum=30.0, step=0.01,

@@ -407,6 +432,16 @@ with shared.gradio_root:
                info='Disable automatic seed increment when image number is > 1.',
                value=False)

if not args_manager.args.disable_metadata:
    save_metadata_to_images = gr.Checkbox(label='Save Metadata to Images', value=modules.config.default_save_metadata_to_images,
                                          info='Adds parameters to generated images allowing manual regeneration.')
    metadata_scheme = gr.Radio(label='Metadata Scheme', choices=flags.metadata_scheme, value=modules.config.default_metadata_scheme,
                               info='Image Prompt parameters are not included. Use a1111 for compatibility with Civitai.',
                               visible=modules.config.default_save_metadata_to_images)

    save_metadata_to_images.change(lambda x: gr.update(visible=x), inputs=[save_metadata_to_images], outputs=[metadata_scheme],
                                   queue=False, show_progress=False)

with gr.Tab(label='Control'):
    debugging_cn_preprocessor = gr.Checkbox(label='Debug Preprocessors', value=False,
                                            info='See the results from preprocessors.')

@@ -484,7 +519,6 @@ with shared.gradio_root:
    results += [gr.update(choices=['None'] + modules.config.model_filenames)]
    for i in range(modules.config.default_max_lora_number):
        results += [gr.update(interactive=True), gr.update(choices=['None'] + modules.config.lora_filenames), gr.update()]
    return results

model_refresh.click(model_refresh_clicked, [], [base_model, refiner_model] + lora_ctrls,
                    queue=False, show_progress=False)

@@ -555,20 +589,18 @@ with shared.gradio_root:
ctrls += [refiner_swap_method, controlnet_softness]
ctrls += freeu_ctrls
ctrls += inpaint_ctrls

if not args_manager.args.disable_metadata:
    ctrls += [save_metadata_to_images, metadata_scheme]

ctrls += ip_ctrls

state_is_generating = gr.State(False)

def parse_meta(raw_prompt_txt, is_generating):
    loaded_json = None
    try:
        if '{' in raw_prompt_txt:
            if '}' in raw_prompt_txt:
                if ':' in raw_prompt_txt:
                    loaded_json = json.loads(raw_prompt_txt)
                    assert isinstance(loaded_json, dict)
    except:
        loaded_json = None
    if is_json(raw_prompt_txt):
        loaded_json = json.loads(raw_prompt_txt)

    if loaded_json is None:
        if is_generating:

@@ -580,31 +612,29 @@ with shared.gradio_root:

prompt.input(parse_meta, inputs=[prompt, state_is_generating], outputs=[prompt, generate_button, load_parameter_button], queue=False, show_progress=False)

load_parameter_button.click(modules.meta_parser.load_parameter_button_click, inputs=[prompt, state_is_generating], outputs=[
    advanced_checkbox,
    image_number,
    prompt,
    negative_prompt,
    style_selections,
    performance_selection,
    aspect_ratios_selection,
    overwrite_width,
    overwrite_height,
    sharpness,
    guidance_scale,
    adm_scaler_positive,
    adm_scaler_negative,
    adm_scaler_end,
    base_model,
    refiner_model,
    refiner_switch,
    sampler_name,
    scheduler_name,
    seed_random,
    image_seed,
    generate_button,
    load_parameter_button
] + lora_ctrls, queue=False, show_progress=False)
load_data_outputs = [advanced_checkbox, image_number, prompt, negative_prompt, style_selections,
                     performance_selection, overwrite_step, overwrite_switch, aspect_ratios_selection,
                     overwrite_width, overwrite_height, guidance_scale, sharpness, adm_scaler_positive,
                     adm_scaler_negative, adm_scaler_end, refiner_swap_method, adaptive_cfg, base_model,
                     refiner_model, refiner_switch, sampler_name, scheduler_name, seed_random, image_seed,
                     generate_button, load_parameter_button] + freeu_ctrls + lora_ctrls

load_parameter_button.click(modules.meta_parser.load_parameter_button_click, inputs=[prompt, state_is_generating], outputs=load_data_outputs, queue=False, show_progress=False)

def trigger_metadata_import(filepath, state_is_generating):
    parameters, items, metadata_scheme = modules.meta_parser.read_info_from_image(filepath)
    if parameters is None:
        print('Could not find metadata in the image!')
        parsed_parameters = {}
    else:
        metadata_parser = modules.meta_parser.get_metadata_parser(metadata_scheme)
        parsed_parameters = metadata_parser.parse_json(parameters)

    return modules.meta_parser.load_parameter_button_click(parsed_parameters, state_is_generating)


metadata_import_button.click(trigger_metadata_import, inputs=[metadata_input_image, state_is_generating], outputs=load_data_outputs, queue=False, show_progress=True) \
    .then(style_sorter.sort_styles, inputs=style_selections, outputs=style_selections, queue=False, show_progress=False)

generate_button.click(lambda: (gr.update(visible=True, interactive=True), gr.update(visible=True, interactive=True), gr.update(visible=False, interactive=False), [], True),
                      outputs=[stop_button, skip_button, generate_button, gallery, state_is_generating]) \