* feat: add metadata logging for images, inspired by https://github.com/MoonRide303/Fooocus-MRE
* feat: add config and checkbox for save_metadata_to_images
* feat: add argument disable_metadata
* feat: add support for A1111 metadata schema, see cf2772fab0/modules/processing.py (L672)
* feat: add model hash support for a1111
* feat: use resolved prompts with included expansion and styles for a1111 metadata
* fix: code cleanup and resolved prompt fixes
* feat: add config metadata_created_by
* fix: use string instead of quote wrap for A1111 created_by
* fix: correctly hide/show metadata schema on app start
* fix: do not generate hashes when arg --disable-metadata is used
* refactor: rename metadata_schema to metadata_scheme
* fix: use pnginfo "parameters" instead of "Comments", see https://github.com/RupertAvery/DiffusionToolkit/issues/202 and cf2772fab0/modules/processing.py (L939)
* feat: add resolved prompts to metadata
* fix: use correct default value in metadata check for created_by
* wip: add metadata mapping, reading and writing; applying data after reading is currently not functional for A1111
* feat: rename metadata tab and import button label
* feat: map basic information for scheme A1111
* wip: optimize handling for metadata in Gradio calls
* feat: add enums for Performance, Steps and StepsUOV; also move MetadataSchema enum to prevent a circular dependency
* fix: correctly map resolution, use empty styles for A1111
* chore: code cleanup
* feat: add A1111 prompt style detection; only detects one style, as Fooocus doesn't wrap {prompt} with the whole style but has a separate prompt string for each style
* wip: add prompt style extraction for A1111 scheme
* feat: sort styles after metadata import
* refactor: use central flag for LoRA count
* refactor: use central flag for ControlNet image count
* fix: use correct LoRA mapping, add fallback for backwards compatibility
* feat: add created_by again
* feat: add prefix "Fooocus" to version
* wip: code cleanup, update todos
* fix: use correct order to read LoRA in meta parser
* wip: code cleanup, update todos
* feat: make sha256 with length 10 the default
* feat: add lora handling to A1111 scheme
* feat: override existing LoRA values when importing, as keeping them would cause images to differ
* fix: correctly extract prompt style when only prompt expansion is selected
* feat: allow model / LoRA loading from subfolders
* feat: code cleanup, do not queue metadata preview on image upload
* refactor: add flag for refiner_swap_method
* feat: add metadata handling for all non-img2img parameters
* refactor: code cleanup
* chore: use str as return type in calculate_sha256
* feat: add hash cache to metadata
* chore: code cleanup
* feat: add method get_scheme to Metadata
* fix: align handling for scheme Fooocus by removing lcm lora from json parsing
* refactor: add step before parsing to set data in parser; add constructor for MetadataSchema class; remove showable and copyable from log output; add functional hash cache (model hashing takes about 5 seconds, only required once per model, using lazy hash loading)
* feat: sort metadata attributes before writing to image
* feat: add translations and hint for image prompt parameters
* chore: check and remove ToDo's
* refactor: merge metadata.py into meta_parser.py
* fix: add missing refiner in A1111 parse_json
* wip: add TODO for multiline prompt style resolution
* fix: remove sorting for A1111, change performance key position; fixes https://github.com/lllyasviel/Fooocus/pull/1940#issuecomment-1924444633
* fix: add workaround for multiline prompts
* feat: add sampler mapping
* feat: prevent config reset by renaming metadata_scheme to match config options
* chore: remove remaining todos after analysis; refiner is added when set; restoring multiline prompts has been resolved by using separate parameters "raw_prompt" and "raw_negative_prompt"
* chore: specify too broad exception types
* feat: add mapping from _gpu samplers to cpu samplers; gpu samplers are less deterministic than cpu but in general similar, see https://www.reddit.com/r/comfyui/comments/15hayzo/comment/juqcpep/
* feat: add better handling for image import with empty metadata
* fix: parse adaptive_cfg as float instead of string
* chore: loosen strict type for parse_json, fix indent
* chore: make steps enums more strict
* feat: only override steps if the metadata value is not in the steps enum, or is in the steps enum but the performance does not match
* fix: handle empty strings in metadata, e.g. raw negative prompt when none is set
56 lines
2.4 KiB
Python
import ldm_patched.modules.args_parser as args_parser
import os

from tempfile import gettempdir

args_parser.parser.add_argument("--share", action='store_true', help="Set whether to share on Gradio.")
args_parser.parser.add_argument("--preset", type=str, default=None, help="Apply specified UI preset.")
args_parser.parser.add_argument("--language", type=str, default='default',
                                help="Translate UI using json files in [language] folder. "
                                     "For example, [--language example] will use [language/example.json] for translation.")

# For example, https://github.com/lllyasviel/Fooocus/issues/849
args_parser.parser.add_argument("--disable-offload-from-vram", action="store_true",
                                help="Force loading models to vram when the unload can be avoided. "
                                     "Some Mac users may need this.")

args_parser.parser.add_argument("--theme", type=str, help="Launch the UI with a light or dark theme.", default=None)
args_parser.parser.add_argument("--disable-image-log", action='store_true',
                                help="Prevent writing images and logs to the hard drive.")

args_parser.parser.add_argument("--disable-analytics", action='store_true',
                                help="Disable analytics for Gradio.")

args_parser.parser.add_argument("--disable-metadata", action='store_true',
                                help="Disable saving metadata to images.")

args_parser.parser.add_argument("--disable-preset-download", action='store_true',
                                help="Disable downloading models for presets.", default=False)

args_parser.parser.add_argument("--always-download-new-model", action='store_true',
                                help="Always download newer models.", default=False)

args_parser.parser.set_defaults(
    disable_cuda_malloc=True,
    in_browser=True,
    port=None
)

args_parser.args = args_parser.parser.parse_args()

# (Disabled by default because of issues like https://github.com/lllyasviel/Fooocus/issues/724)
args_parser.args.always_offload_from_vram = not args_parser.args.disable_offload_from_vram

if args_parser.args.disable_analytics:
    os.environ["GRADIO_ANALYTICS_ENABLED"] = "False"

if args_parser.args.disable_in_browser:
    args_parser.args.in_browser = False

if args_parser.args.temp_path is None:
    args_parser.args.temp_path = os.path.join(gettempdir(), 'Fooocus')

args = args_parser.args
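The post-parse inversion near the end of the file, where the negative flag `--disable-offload-from-vram` is mapped onto a positive `always_offload_from_vram` attribute, is a common argparse pattern: the CLI exposes the opt-out, and the code reads the opt-in. A standalone sketch of just that pattern (not Fooocus's full parser):

```python
# Sketch: derive a positive attribute from a "--disable-X" flag after
# parse_args(), mirroring the always_offload_from_vram handling above.
import argparse


def parse(argv: list[str]) -> argparse.Namespace:
    parser = argparse.ArgumentParser()
    # argparse converts dashes to underscores: disable_offload_from_vram.
    parser.add_argument("--disable-offload-from-vram", action="store_true")
    args = parser.parse_args(argv)
    # Invert once here so the rest of the code only checks the positive name.
    args.always_offload_from_vram = not args.disable_offload_from_vram
    return args
```

With no flags the positive attribute defaults to True; passing the flag turns it off, so downstream code never needs to reason about the double negative.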