Compare commits
No commits in common. "main" and "v2.1.851" have entirely different histories.
@@ -1 +0,0 @@
.idea
18  .github/ISSUE_TEMPLATE/bug_report.md  vendored  Normal file
@@ -0,0 +1,18 @@
---
name: Bug report
about: Describe a problem
title: ''
labels: ''
assignees: ''

---

**Read Troubleshoot**

[x] I admit that I have read the [Troubleshoot](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md) before making this issue.

**Describe the problem**
A clear and concise description of what the bug is.

**Full Console Log**
Paste **full** console log here. You will make our job easier if you give a **full** log.
107  .github/ISSUE_TEMPLATE/bug_report.yml  vendored
@@ -1,107 +0,0 @@
name: Bug Report
description: You think something is broken in Fooocus
title: "[Bug]: "
labels: ["bug", "triage"]

body:
  - type: markdown
    attributes:
      value: |
        > The title of the bug report should be short and descriptive.
        > Use relevant keywords for searchability.
        > Do not leave it blank, but also do not put an entire error log in it.
  - type: checkboxes
    attributes:
      label: Checklist
      description: |
        Please perform basic debugging to see if your configuration is the cause of the issue.
        Basic debug procedure
        2. Update Fooocus - sometimes things just need to be updated
        3. Backup and remove your config.txt - check if the issue is caused by bad configuration
        5. Try a fresh installation of Fooocus in a different directory - see if a clean installation solves the issue
        Before making an issue report, please check that the issue hasn't been reported recently.
      options:
        - label: The issue has not been resolved by following the [troubleshooting guide](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md)
        - label: The issue exists on a clean installation of Fooocus
        - label: The issue exists in the current version of Fooocus
        - label: The issue has not been reported before recently
        - label: The issue has been reported before but has not been fixed yet
  - type: markdown
    attributes:
      value: |
        > Please fill this form with as much information as possible. Don't forget to add information about "What browsers" and provide screenshots if possible
  - type: textarea
    id: what-did
    attributes:
      label: What happened?
      description: Tell us what happened in a very clear and simple way
      placeholder: |
        image generation is not working as intended.
    validations:
      required: true
  - type: textarea
    id: steps
    attributes:
      label: Steps to reproduce the problem
      description: Please provide us with precise step by step instructions on how to reproduce the bug
      placeholder: |
        1. Go to ...
        2. Press ...
        3. ...
    validations:
      required: true
  - type: textarea
    id: what-should
    attributes:
      label: What should have happened?
      description: Tell us what you think the normal behavior should be
      placeholder: |
        Fooocus should ...
    validations:
      required: true
  - type: dropdown
    id: browsers
    attributes:
      label: What browsers do you use to access Fooocus?
      multiple: true
      options:
        - Mozilla Firefox
        - Google Chrome
        - Brave
        - Apple Safari
        - Microsoft Edge
        - Android
        - iOS
        - Other
  - type: dropdown
    id: hosting
    attributes:
      label: Where are you running Fooocus?
      multiple: false
      options:
        - Locally
        - Locally with virtualization (e.g. Docker)
        - Cloud (Google Colab)
        - Cloud (other)
  - type: input
    id: operating-system
    attributes:
      label: What operating system are you using?
      placeholder: |
        Windows 10
  - type: textarea
    id: logs
    attributes:
      label: Console logs
      description: Please provide **full** cmd/terminal logs from the moment you started UI to the end of it, after the bug occurred. If it's very long, provide a link to pastebin or similar service.
      render: Shell
    validations:
      required: true
  - type: textarea
    id: misc
    attributes:
      label: Additional information
      description: |
        Please provide us with any relevant additional info or context.
        Examples:
        I have updated my GPU driver recently.
5  .github/ISSUE_TEMPLATE/config.yml  vendored
@@ -1,5 +0,0 @@
blank_issues_enabled: false
contact_links:
  - name: Ask a question
    url: https://github.com/lllyasviel/Fooocus/discussions/new?category=q-a
    about: Ask the community for help
14  .github/ISSUE_TEMPLATE/feature_request.md  vendored  Normal file
@@ -0,0 +1,14 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the idea you'd like**
A clear and concise description of what you want to happen.
40  .github/ISSUE_TEMPLATE/feature_request.yml  vendored
@@ -1,40 +0,0 @@
name: Feature request
description: Suggest an idea for this project
title: "[Feature Request]: "
labels: ["enhancement", "triage"]

body:
  - type: checkboxes
    attributes:
      label: Is there an existing issue for this?
      description: Please search to see if an issue already exists for the feature you want, and that it's not implemented in a recent build/commit.
      options:
        - label: I have searched the existing issues and checked the recent builds/commits
          required: true
  - type: markdown
    attributes:
      value: |
        *Please fill this form with as much information as possible, provide screenshots and/or illustrations of the feature if possible*
  - type: textarea
    id: feature
    attributes:
      label: What would your feature do?
      description: Tell us about your feature in a very clear and simple way, and what problem it would solve
    validations:
      required: true
  - type: textarea
    id: workflow
    attributes:
      label: Proposed workflow
      description: Please provide us with step by step information on how you'd like the feature to be accessed and used
      value: |
        1. Go to ....
        2. Press ....
        3. ...
    validations:
      required: true
  - type: textarea
    id: misc
    attributes:
      label: Additional information
      description: Add any other context or screenshots about the feature request here.
2  .gitignore  vendored
@@ -20,7 +20,6 @@ user_path_config.txt
user_path_config-deprecated.txt
/modules/*.png
/repositories
/fooocus_env
/venv
/tmp
/ui-config.json
@@ -51,4 +50,3 @@ user_path_config-deprecated.txt
/package-lock.json
/.coverage*
/auth.json
.DS_Store
29  Dockerfile
@@ -1,29 +0,0 @@
FROM nvidia/cuda:12.3.1-base-ubuntu22.04
ENV DEBIAN_FRONTEND noninteractive
ENV CMDARGS --listen

RUN apt-get update -y && \
    apt-get install -y curl libgl1 libglib2.0-0 python3-pip python-is-python3 git && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

COPY requirements_docker.txt requirements_versions.txt /tmp/
RUN pip install --no-cache-dir -r /tmp/requirements_docker.txt -r /tmp/requirements_versions.txt && \
    rm -f /tmp/requirements_docker.txt /tmp/requirements_versions.txt
RUN pip install --no-cache-dir xformers==0.0.23 --no-dependencies
RUN curl -fsL -o /usr/local/lib/python3.10/dist-packages/gradio/frpc_linux_amd64_v0.2 https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64 && \
    chmod +x /usr/local/lib/python3.10/dist-packages/gradio/frpc_linux_amd64_v0.2

RUN adduser --disabled-password --gecos '' user && \
    mkdir -p /content/app /content/data

COPY entrypoint.sh /content/
RUN chown -R user:user /content

WORKDIR /content
USER user

RUN git clone https://github.com/lllyasviel/Fooocus /content/app
RUN mv /content/app/models /content/app/models.org

CMD [ "sh", "-c", "/content/entrypoint.sh ${CMDARGS}" ]
args_manager.py
@@ -1,13 +1,8 @@
import ldm_patched.modules.args_parser as args_parser
import os

from tempfile import gettempdir

args_parser.parser.add_argument("--share", action='store_true', help="Set whether to share on Gradio.")

args_parser.parser.add_argument("--preset", type=str, default=None, help="Apply specified UI preset.")
args_parser.parser.add_argument("--disable-preset-selection", action='store_true',
                                help="Disables preset selection in Gradio.")

args_parser.parser.add_argument("--language", type=str, default='default',
                                help="Translate UI using json files in [language] folder. "
@@ -23,16 +18,7 @@ args_parser.parser.add_argument("--disable-image-log", action='store_true',
                                help="Prevent writing images and logs to hard drive.")

args_parser.parser.add_argument("--disable-analytics", action='store_true',
                                help="Disables analytics for Gradio.")

args_parser.parser.add_argument("--disable-metadata", action='store_true',
                                help="Disables saving metadata to images.")

args_parser.parser.add_argument("--disable-preset-download", action='store_true',
                                help="Disables downloading models for presets", default=False)

args_parser.parser.add_argument("--always-download-new-model", action='store_true',
                                help="Always download newer models ", default=False)
                                help="Disables analytics for Gradio", default=False)

args_parser.parser.set_defaults(
    disable_cuda_malloc=True,
@@ -49,7 +35,4 @@ if args_parser.args.disable_analytics:
    import os
    os.environ["GRADIO_ANALYTICS_ENABLED"] = "False"

if args_parser.args.disable_in_browser:
    args_parser.args.in_browser = False

args = args_parser.args
198  css/style.css
@@ -1,136 +1,5 @@
/* based on https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/v1.6.0/style.css */

.loader-container {
    display: flex; /* Use flex to align items horizontally */
    align-items: center; /* Center items vertically within the container */
    white-space: nowrap; /* Prevent line breaks within the container */
}

.loader {
    border: 8px solid #f3f3f3; /* Light grey */
    border-top: 8px solid #3498db; /* Blue */
    border-radius: 50%;
    width: 30px;
    height: 30px;
    animation: spin 2s linear infinite;
}

@keyframes spin {
    0% { transform: rotate(0deg); }
    100% { transform: rotate(360deg); }
}

/* Style the progress bar */
progress {
    appearance: none; /* Remove default styling */
    height: 20px; /* Set the height of the progress bar */
    border-radius: 5px; /* Round the corners of the progress bar */
    background-color: #f3f3f3; /* Light grey background */
    width: 100%;
}

/* Style the progress bar container */
.progress-container {
    margin-left: 20px;
    margin-right: 20px;
    flex-grow: 1; /* Allow the progress container to take up remaining space */
}

/* Set the color of the progress bar fill */
progress::-webkit-progress-value {
    background-color: #3498db; /* Blue color for the fill */
}

progress::-moz-progress-bar {
    background-color: #3498db; /* Blue color for the fill in Firefox */
}

/* Style the text on the progress bar */
progress::after {
    content: attr(value '%'); /* Display the progress value followed by '%' */
    position: absolute;
    top: 50%;
    left: 50%;
    transform: translate(-50%, -50%);
    color: white; /* Set text color */
    font-size: 14px; /* Set font size */
}

/* Style other texts */
.loader-container > span {
    margin-left: 5px; /* Add spacing between the progress bar and the text */
}

.progress-bar > .generating {
    display: none !important;
}

.progress-bar{
    height: 30px !important;
}

.type_row{
    height: 80px !important;
}

.type_row_half{
    height: 32px !important;
}

.scroll-hide{
    resize: none !important;
}

.refresh_button{
    border: none !important;
    background: none !important;
    font-size: none !important;
    box-shadow: none !important;
}

.advanced_check_row{
    width: 250px !important;
}

.min_check{
    min-width: min(1px, 100%) !important;
}

.resizable_area {
    resize: vertical;
    overflow: auto !important;
}

.aspect_ratios label {
    width: 140px !important;
}

.aspect_ratios label span {
    white-space: nowrap !important;
}

.aspect_ratios label input {
    margin-left: -5px !important;
}

.lora_enable label {
    height: 100%;
}

.lora_enable label input {
    margin: auto;
}

.lora_enable label span {
    display: none;
}

@-moz-document url-prefix() {
    .lora_weight input[type=number] {
        width: 80px;
    }
}

#context-menu{
    z-index:9999;
    position:absolute;
@@ -327,70 +196,3 @@ progress::after {
    pointer-events: none;
    display: none;
}

#stylePreviewOverlay {
    opacity: 0;
    pointer-events: none;
    width: 128px;
    height: 128px;
    position: fixed;
    top: 0px;
    left: 0px;
    border: solid 1px lightgrey;
    transform: translate(-140px, 20px);
    background-size: cover;
    background-position: center;
    background-color: rgba(0, 0, 0, 0.3);
    border-radius: 5px;
    z-index: 100;
    transition: transform 0.1s ease, opacity 0.3s ease;
}

#stylePreviewOverlay.lower-half {
    transform: translate(-140px, -140px);
}

/* scrollable box for style selections */
.contain .tabs {
    height: 100%;
}

.contain .tabs .tabitem.style_selections_tab {
    height: 100%;
}

.contain .tabs .tabitem.style_selections_tab > div:first-child {
    height: 100%;
}

.contain .tabs .tabitem.style_selections_tab .style_selections {
    min-height: 200px;
    height: 100%;
}

.contain .tabs .tabitem.style_selections_tab .style_selections .wrap[data-testid="checkbox-group"] {
    position: absolute; /* remove this to disable scrolling within the checkbox-group */
    overflow: auto;
    padding-right: 2px;
    max-height: 100%;
}

.contain .tabs .tabitem.style_selections_tab .style_selections .wrap[data-testid="checkbox-group"] label {
    /* max-width: calc(35% - 15px) !important; */ /* add this to enable 3 columns layout */
    flex: calc(50% - 5px) !important;
}

.contain .tabs .tabitem.style_selections_tab .style_selections .wrap[data-testid="checkbox-group"] label span {
    /* white-space:nowrap; */ /* add this to disable text wrapping (better choice for 3 columns layout) */
    overflow: hidden;
    text-overflow: ellipsis;
}

/* styles preview tooltip */
.preview-tooltip {
    background-color: #fff8;
    font-family: monospace;
    text-align: center;
    border-radius-top: 5px;
    display: none; /* remove this to enable tooltip in preview image */
}
docker-compose.yml
@@ -1,38 +0,0 @@
version: '3.9'

volumes:
  fooocus-data:

services:
  app:
    build: .
    image: fooocus
    ports:
      - "7865:7865"
    environment:
      - CMDARGS=--listen      # Arguments for launch.py.
      - DATADIR=/content/data # Directory which stores models, outputs dir
      - config_path=/content/data/config.txt
      - config_example_path=/content/data/config_modification_tutorial.txt
      - path_checkpoints=/content/data/models/checkpoints/
      - path_loras=/content/data/models/loras/
      - path_embeddings=/content/data/models/embeddings/
      - path_vae_approx=/content/data/models/vae_approx/
      - path_upscale_models=/content/data/models/upscale_models/
      - path_inpaint=/content/data/models/inpaint/
      - path_controlnet=/content/data/models/controlnet/
      - path_clip_vision=/content/data/models/clip_vision/
      - path_fooocus_expansion=/content/data/models/prompt_expansion/fooocus_expansion/
      - path_outputs=/content/app/outputs/ # Warning: If it is not located under '/content/app', you can't see history log!
    volumes:
      - fooocus-data:/content/data
      #- ./models:/import/models   # Once you import files, you don't need to mount again.
      #- ./outputs:/import/outputs # Once you import files, you don't need to mount again.
    tty: true
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities: [compute, utility]
66  docker.md
@@ -1,66 +0,0 @@
# Fooocus on Docker

The docker image is based on NVIDIA CUDA 12.3 and PyTorch 2.0; see [Dockerfile](Dockerfile) and [requirements_docker.txt](requirements_docker.txt) for details.

## Quick start

**This is just an easy way for testing. Please find more information in the [notes](#notes).**

1. Clone this repository
2. Build the image with `docker compose build`
3. Run the docker container with `docker compose up`. Building the image takes some time.

When you see the message `Use the app with http://0.0.0.0:7865/` in the console, you can access the URL in your browser.

Your models and outputs are stored in the `fooocus-data` volume, which, depending on OS, is stored in `/var/lib/docker/volumes`.

## Details

### Update the container manually

When you are using `docker compose up` continuously, the container is not updated to the latest version of Fooocus automatically.
Run `git pull` before executing `docker compose build --no-cache` to build an image with the latest Fooocus version.
You can then start it with `docker compose up`.

### Import models, outputs

If you want to import files from the models or outputs folder, you can uncomment the following settings in [docker-compose.yml](docker-compose.yml):
```
#- ./models:/import/models # Once you import files, you don't need to mount again.
#- ./outputs:/import/outputs # Once you import files, you don't need to mount again.
```
After running `docker compose up`, your files will be copied into `/content/data/models` and `/content/data/outputs`.
Since `/content/data` is a persistent volume folder, your files persist even when you re-run `docker compose up --build` without the above volume settings.

### Paths inside the container

|Path|Details|
|-|-|
|/content/app|The application stored folder|
|/content/app/models.org|Original 'models' folder.<br> Files are copied to '/content/app/models', which is symlinked to '/content/data/models', every time the container boots. (Existing files will not be overwritten.)|
|/content/data|Persistent volume mount point|
|/content/data/models|The folder is symlinked to '/content/app/models'|
|/content/data/outputs|The folder is symlinked to '/content/app/outputs'|

### Environments

You can change `config.txt` parameters by using environment variables.
**Environment variables take priority over the values defined in `config.txt`, and they are saved to `config_modification_tutorial.txt`.**

The Docker-specific environment variables below are used by 'entrypoint.sh'.
|Environment|Details|
|-|-|
|DATADIR|'/content/data' location.|
|CMDARGS|Arguments for [entry_with_update.py](entry_with_update.py) which is called by [entrypoint.sh](entrypoint.sh)|
|config_path|'config.txt' location|
|config_example_path|'config_modification_tutorial.txt' location|

You can also use the same json key names and values explained in 'config_modification_tutorial.txt' as environment variables.
See examples in [docker-compose.yml](docker-compose.yml).

## Notes

- Please keep 'path_outputs' under '/content/app'. Otherwise, you may get an error when you open the history log.
- Docker on Mac/Windows still has issues in the form of slow volume access when you use "bind mount" volumes. Please refer to [this article](https://docs.docker.com/storage/volumes/#use-a-volume-with-docker-compose) for not using "bind mount".
- The MPS backend (Metal Performance Shaders, Apple Silicon M1/M2/etc.) is not yet supported in Docker, see https://github.com/pytorch/pytorch/issues/81224
- You can also use `docker compose up -d` to start the container detached and connect to the logs with `docker compose logs -f`. This way you can also close the terminal and keep the container running.
entrypoint.sh
@@ -1,33 +0,0 @@
#!/bin/bash

ORIGINALDIR=/content/app
# Use predefined DATADIR if it is defined
[[ x"${DATADIR}" == "x" ]] && DATADIR=/content/data

# Make persistent dir from original dir
function mklink () {
    mkdir -p $DATADIR/$1
    ln -s $DATADIR/$1 $ORIGINALDIR
}

# Copy old files from import dir
function import () {
    (test -d /import/$1 && cd /import/$1 && cp -Rpn . $DATADIR/$1/)
}

cd $ORIGINALDIR

# models
mklink models
# Copy original files
(cd $ORIGINALDIR/models.org && cp -Rpn . $ORIGINALDIR/models/)
# Import old files
import models

# outputs
mklink outputs
# Import old files
import outputs

# Start application
python launch.py $*
modules/expansion.py
@@ -112,9 +112,6 @@ class FooocusExpansion:
        max_token_length = 75 * int(math.ceil(float(current_token_length) / 75.0))
        max_new_tokens = max_token_length - current_token_length

        if max_new_tokens == 0:
            return prompt[:-1]

        # https://huggingface.co/blog/introducing-csearch
        # https://huggingface.co/docs/transformers/generation_strategies
        features = self.model.generate(**tokenized_kwargs,
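The rounding above pads the expansion budget up to the next multiple of 75 tokens, the usual CLIP chunk size in Stable Diffusion tooling. A minimal sketch of that arithmetic, with a hypothetical token count:

```python
import math

current_token_length = 100  # hypothetical prompt length in tokens
max_token_length = 75 * int(math.ceil(float(current_token_length) / 75.0))  # 150
max_new_tokens = max_token_length - current_token_length                    # 50
# A prompt sitting exactly on a 75-token boundary leaves max_new_tokens == 0,
# and the main branch then skips the expansion entirely.
```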
extras/ip_adapter.py
@@ -2,13 +2,12 @@ import torch
import ldm_patched.modules.clip_vision
import safetensors.torch as sf
import ldm_patched.modules.model_management as model_management
import contextlib
import ldm_patched.ldm.modules.attention as attention

from extras.resampler import Resampler
from ldm_patched.modules.model_patcher import ModelPatcher
from modules.core import numpy_to_pytorch
from modules.ops import use_patched_ops
from ldm_patched.modules.ops import manual_cast


SD_V12_CHANNELS = [320] * 4 + [640] * 4 + [1280] * 4 + [1280] * 6 + [640] * 6 + [320] * 6 + [1280] * 2
@@ -117,16 +116,14 @@ def load_ip_adapter(clip_vision_path, ip_negative_path, ip_adapter_path):
    clip_extra_context_tokens = ip_state_dict["image_proj"]["proj.weight"].shape[0] // cross_attention_dim
    clip_embeddings_dim = None

    with use_patched_ops(manual_cast):
        ip_adapter = IPAdapterModel(
            ip_state_dict,
            plus=plus,
            cross_attention_dim=cross_attention_dim,
            clip_embeddings_dim=clip_embeddings_dim,
            clip_extra_context_tokens=clip_extra_context_tokens,
            sdxl_plus=sdxl_plus
        )

    ip_adapter = IPAdapterModel(
        ip_state_dict,
        plus=plus,
        cross_attention_dim=cross_attention_dim,
        clip_embeddings_dim=clip_embeddings_dim,
        clip_extra_context_tokens=clip_extra_context_tokens,
        sdxl_plus=sdxl_plus
    )
    ip_adapter.sdxl = sdxl
    ip_adapter.load_device = load_device
    ip_adapter.offload_device = offload_device
@@ -1,26 +1,27 @@
import cv2
import numpy as np
import modules.advanced_parameters as advanced_parameters


def centered_canny(x: np.ndarray, canny_low_threshold, canny_high_threshold):
def centered_canny(x: np.ndarray):
    assert isinstance(x, np.ndarray)
    assert x.ndim == 2 and x.dtype == np.uint8

    y = cv2.Canny(x, int(canny_low_threshold), int(canny_high_threshold))
    y = cv2.Canny(x, int(advanced_parameters.canny_low_threshold), int(advanced_parameters.canny_high_threshold))
    y = y.astype(np.float32) / 255.0
    return y


def centered_canny_color(x: np.ndarray, canny_low_threshold, canny_high_threshold):
def centered_canny_color(x: np.ndarray):
    assert isinstance(x, np.ndarray)
    assert x.ndim == 3 and x.shape[2] == 3

    result = [centered_canny(x[..., i], canny_low_threshold, canny_high_threshold) for i in range(3)]
    result = [centered_canny(x[..., i]) for i in range(3)]
    result = np.stack(result, axis=2)
    return result


def pyramid_canny_color(x: np.ndarray, canny_low_threshold, canny_high_threshold):
def pyramid_canny_color(x: np.ndarray):
    assert isinstance(x, np.ndarray)
    assert x.ndim == 3 and x.shape[2] == 3

@@ -30,7 +31,7 @@ def pyramid_canny_color(x: np.ndarray, canny_low_threshold, canny_high_threshold):
    for k in [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
        Hs, Ws = int(H * k), int(W * k)
        small = cv2.resize(x, (Ws, Hs), interpolation=cv2.INTER_AREA)
        edge = centered_canny_color(small, canny_low_threshold, canny_high_threshold)
        edge = centered_canny_color(small)
        if acc_edge is None:
            acc_edge = edge
        else:
@@ -53,11 +54,11 @@ def norm255(x, low=4, high=96):
    return x * 255.0


def canny_pyramid(x, canny_low_threshold, canny_high_threshold):
def canny_pyramid(x):
    # For some reasons, SAI's Control-lora Canny seems to be trained on canny maps with non-standard resolutions.
    # Then we use pyramid to use all resolutions to avoid missing any structure in specific resolutions.

    color_canny = pyramid_canny_color(x, canny_low_threshold, canny_high_threshold)
    color_canny = pyramid_canny_color(x)
    result = np.sum(color_canny, axis=2)

    return norm255(result, low=1, high=99).clip(0, 255).astype(np.uint8)
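On main (the left side of these hunks), the Canny thresholds are explicit parameters instead of globals read from `modules.advanced_parameters`. A minimal usage sketch of that signature; the module path and threshold values here are assumptions:

```python
import numpy as np
from extras.preprocessors import canny_pyramid  # assumed module path

rgb = np.zeros((512, 512, 3), dtype=np.uint8)  # placeholder RGB image
edges = canny_pyramid(rgb, canny_low_threshold=64, canny_high_threshold=128)
print(edges.shape, edges.dtype)  # (512, 512) uint8 single-channel edge map
```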
extras/resampler.py
@@ -108,7 +108,8 @@ class Resampler(nn.Module):
        )

    def forward(self, x):
        latents = self.latents.repeat(x.size(0), 1, 1).to(x)

        latents = self.latents.repeat(x.size(0), 1, 1)

        x = self.proj_in(x)

@@ -117,4 +118,4 @@ class Resampler(nn.Module):
        latents = ff(latents) + latents

        latents = self.proj_out(latents)
        return self.norm_out(latents)
        return self.norm_out(latents)
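The only functional change in this hunk is `.to(x)`, which moves the repeated latents onto the incoming tensor's device and dtype in one call. A short sketch of that PyTorch idiom:

```python
import torch

a = torch.zeros(2, 3)                      # float32 on CPU
b = torch.ones(2, 3, dtype=torch.float16)  # stand-in for features on another device/dtype
print(a.to(b).dtype)                       # torch.float16: Tensor.to(other)
                                           # matches other's dtype and device
```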
fooocus_colab.ipynb
@@ -12,7 +12,7 @@
    "%cd /content\n",
    "!git clone https://github.com/lllyasviel/Fooocus.git\n",
    "%cd /content/Fooocus\n",
    "!python entry_with_update.py --share --always-high-vram\n"
    "!python entry_with_update.py --share\n"
   ]
  }
 ],
fooocus_version.py
@@ -1 +1 @@
version = '2.3.1'
version = '2.1.851'
@@ -154,8 +154,12 @@ let cancelGenerateForever = function() {
    let generateOnRepeatForButtons = function() {
        generateOnRepeat('#generate_button', '#stop_button');
    };
    appendContextMenuOption('#generate_button', 'Generate forever', generateOnRepeatForButtons);

    appendContextMenuOption('#generate_button', 'Generate forever', generateOnRepeatForButtons);
    // appendContextMenuOption('#stop_button', 'Generate forever', generateOnRepeatForButtons);

    // appendContextMenuOption('#stop_button', 'Cancel generate forever', cancelGenerateForever);
    // appendContextMenuOption('#generate_button', 'Cancel generate forever', cancelGenerateForever);
})();
//End example Context Menu Items
@@ -45,9 +45,6 @@ function processTextNode(node) {
    var tl = getTranslation(text);
    if (tl !== undefined) {
        node.textContent = tl;
        if (text && node.parentElement) {
            node.parentElement.setAttribute("data-original-text", text);
        }
    }
}
@@ -119,7 +119,6 @@ document.addEventListener("DOMContentLoaded", function() {
        }
    });
    mutationObserver.observe(gradioApp(), {childList: true, subtree: true});
    initStylePreviewOverlay();
});

/**
@@ -146,46 +145,6 @@ document.addEventListener('keydown', function(e) {
    }
});

function initStylePreviewOverlay() {
    let overlayVisible = false;
    const samplesPath = document.querySelector("meta[name='samples-path']").getAttribute("content")
    const overlay = document.createElement('div');
    const tooltip = document.createElement('div');
    tooltip.className = 'preview-tooltip';
    overlay.appendChild(tooltip);
    overlay.id = 'stylePreviewOverlay';
    document.body.appendChild(overlay);
    document.addEventListener('mouseover', function (e) {
        const label = e.target.closest('.style_selections label');
        if (!label) return;
        label.removeEventListener("mouseout", onMouseLeave);
        label.addEventListener("mouseout", onMouseLeave);
        overlayVisible = true;
        overlay.style.opacity = "1";
        const originalText = label.querySelector("span").getAttribute("data-original-text");
        const name = originalText || label.querySelector("span").textContent;
        overlay.style.backgroundImage = `url("${samplesPath.replace(
            "fooocus_v2",
            name.toLowerCase().replaceAll(" ", "_")
        ).replaceAll("\\", "\\\\")}")`;

        tooltip.textContent = name;

        function onMouseLeave() {
            overlayVisible = false;
            overlay.style.opacity = "0";
            overlay.style.backgroundImage = "";
            label.removeEventListener("mouseout", onMouseLeave);
        }
    });
    document.addEventListener('mousemove', function (e) {
        if (!overlayVisible) return;
        overlay.style.left = `${e.clientX}px`;
        overlay.style.top = `${e.clientY}px`;
        overlay.className = e.clientY > window.innerHeight / 2 ? "lower-half" : "upper-half";
    });
}

/**
 * checks that a UI element is not in another hidden element or tab content
 */
@@ -38,12 +38,9 @@
    "* \"Inpaint or Outpaint\" is powered by the sampler \"DPMPP Fooocus Seamless 2M SDE Karras Inpaint Sampler\" (beta)": "* \"Inpaint or Outpaint\" is powered by the sampler \"DPMPP Fooocus Seamless 2M SDE Karras Inpaint Sampler\" (beta)",
    "Setting": "Setting",
    "Style": "Style",
    "Preset": "Preset",
    "Performance": "Performance",
    "Speed": "Speed",
    "Quality": "Quality",
    "Extreme Speed": "Extreme Speed",
    "Lightning": "Lightning",
    "Aspect Ratios": "Aspect Ratios",
    "width \u00d7 height": "width \u00d7 height",
    "Image Number": "Image Number",
@@ -51,9 +48,6 @@
    "Describing what you do not want to see.": "Describing what you do not want to see.",
    "Random": "Random",
    "Seed": "Seed",
    "Disable seed increment": "Disable seed increment",
    "Disable automatic seed increment when image number is > 1.": "Disable automatic seed increment when image number is > 1.",
    "Read wildcards in order": "Read wildcards in order",
    "\ud83d\udcda History Log": "\uD83D\uDCDA History Log",
    "Image Style": "Image Style",
    "Fooocus V2": "Fooocus V2",
@@ -348,10 +342,6 @@
    "Forced Overwrite of Denoising Strength of \"Vary\"": "Forced Overwrite of Denoising Strength of \"Vary\"",
    "Set as negative number to disable. For developer debugging.": "Set as negative number to disable. For developer debugging.",
    "Forced Overwrite of Denoising Strength of \"Upscale\"": "Forced Overwrite of Denoising Strength of \"Upscale\"",
    "Disable Preview": "Disable Preview",
    "Disable preview during generation.": "Disable preview during generation.",
    "Disable Intermediate Results": "Disable Intermediate Results",
    "Disable intermediate results during generation, only show final gallery.": "Disable intermediate results during generation, only show final gallery.",
    "Inpaint Engine": "Inpaint Engine",
    "v1": "v1",
    "Version of Fooocus inpaint model": "Version of Fooocus inpaint model",
@@ -371,19 +361,12 @@
    "B2": "B2",
    "S1": "S1",
    "S2": "S2",
    "Extreme Speed": "Extreme Speed",
    "\uD83D\uDD0E Type here to search styles ...": "\uD83D\uDD0E Type here to search styles ...",
    "Type prompt here.": "Type prompt here.",
    "Outpaint Expansion Direction:": "Outpaint Expansion Direction:",
    "* Powered by Fooocus Inpaint Engine (beta)": "* Powered by Fooocus Inpaint Engine (beta)",
    "Fooocus Enhance": "Fooocus Enhance",
    "Fooocus Cinematic": "Fooocus Cinematic",
    "Fooocus Sharp": "Fooocus Sharp",
    "Drag any image generated by Fooocus here": "Drag any image generated by Fooocus here",
    "Metadata": "Metadata",
    "Apply Metadata": "Apply Metadata",
    "Metadata Scheme": "Metadata Scheme",
    "Image Prompt parameters are not included. Use png and a1111 for compatibility with Civitai.": "Image Prompt parameters are not included. Use png and a1111 for compatibility with Civitai.",
    "fooocus (json)": "fooocus (json)",
    "a1111 (plain text)": "a1111 (plain text)",
    "Unsupported image type in input": "Unsupported image type in input"
    "Fooocus Sharp": "Fooocus Sharp"
}
85  launch.py
@@ -1,6 +1,6 @@
import os
import ssl
import sys
import ssl

print('[System ARGV] ' + str(sys.argv))

@@ -10,17 +10,20 @@ os.chdir(root)

os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0"
if "GRADIO_SERVER_PORT" not in os.environ:
    os.environ["GRADIO_SERVER_PORT"] = "7865"
os.environ["GRADIO_SERVER_PORT"] = "7865"

ssl._create_default_https_context = ssl._create_unverified_context


import platform
import fooocus_version

from build_launcher import build_launcher
from modules.launch_util import is_installed, run, python, run_pip, requirements_met, delete_folder_content
from modules.launch_util import is_installed, run, python, run_pip, requirements_met
from modules.model_loader import load_file_from_url
from modules.config import path_checkpoints, path_loras, path_vae_approx, path_fooocus_expansion, \
    checkpoint_downloads, path_embeddings, embeddings_downloads, lora_downloads


REINSTALL_ALL = False
TRY_INSTALL_XFORMERS = False
@@ -40,7 +43,7 @@ def prepare_environment():

    if TRY_INSTALL_XFORMERS:
        if REINSTALL_ALL or not is_installed("xformers"):
            xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.23')
            xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.20')
            if platform.system() == "Windows":
                if platform.python_version().startswith("3.10"):
                    run_pip(f"install -U -I --no-deps {xformers_package}", "xformers", live=True)
@@ -67,6 +70,25 @@ vae_approx_filenames = [
]


def download_models():
    for file_name, url in checkpoint_downloads.items():
        load_file_from_url(url=url, model_dir=path_checkpoints, file_name=file_name)
    for file_name, url in embeddings_downloads.items():
        load_file_from_url(url=url, model_dir=path_embeddings, file_name=file_name)
    for file_name, url in lora_downloads.items():
        load_file_from_url(url=url, model_dir=path_loras, file_name=file_name)
    for file_name, url in vae_approx_filenames:
        load_file_from_url(url=url, model_dir=path_vae_approx, file_name=file_name)

    load_file_from_url(
        url='https://huggingface.co/lllyasviel/misc/resolve/main/fooocus_expansion.bin',
        model_dir=path_fooocus_expansion,
        file_name='pytorch_model.bin'
    )

    return


def ini_args():
    from args_manager import args
    return args
@@ -76,61 +98,12 @@ prepare_environment()
build_launcher()
args = ini_args()


if args.gpu_device_id is not None:
    os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu_device_id)
    print("Set device to:", args.gpu_device_id)

from modules import config

os.environ['GRADIO_TEMP_DIR'] = config.temp_path

if config.temp_path_cleanup_on_launch:
    print(f'[Cleanup] Attempting to delete content of temp dir {config.temp_path}')
    result = delete_folder_content(config.temp_path, '[Cleanup] ')
    if result:
        print("[Cleanup] Cleanup successful")
    else:
        print(f"[Cleanup] Failed to delete content of temp dir.")


def download_models(default_model, previous_default_models, checkpoint_downloads, embeddings_downloads, lora_downloads):
    for file_name, url in vae_approx_filenames:
        load_file_from_url(url=url, model_dir=config.path_vae_approx, file_name=file_name)

    load_file_from_url(
        url='https://huggingface.co/lllyasviel/misc/resolve/main/fooocus_expansion.bin',
        model_dir=config.path_fooocus_expansion,
        file_name='pytorch_model.bin'
    )

    if args.disable_preset_download:
        print('Skipped model download.')
        return default_model, checkpoint_downloads

    if not args.always_download_new_model:
        if not os.path.exists(os.path.join(config.paths_checkpoints[0], default_model)):
            for alternative_model_name in previous_default_models:
                if os.path.exists(os.path.join(config.paths_checkpoints[0], alternative_model_name)):
                    print(f'You do not have [{default_model}] but you have [{alternative_model_name}].')
                    print(f'Fooocus will use [{alternative_model_name}] to avoid downloading new models, '
                          f'but you are not using the latest models.')
                    print('Use --always-download-new-model to avoid fallback and always get new models.')
                    checkpoint_downloads = {}
                    default_model = alternative_model_name
                    break

    for file_name, url in checkpoint_downloads.items():
        load_file_from_url(url=url, model_dir=config.paths_checkpoints[0], file_name=file_name)
    for file_name, url in embeddings_downloads.items():
        load_file_from_url(url=url, model_dir=config.path_embeddings, file_name=file_name)
    for file_name, url in lora_downloads.items():
        load_file_from_url(url=url, model_dir=config.paths_loras[0], file_name=file_name)

    return default_model, checkpoint_downloads


config.default_base_model_name, config.checkpoint_downloads = download_models(
    config.default_base_model_name, config.previous_default_models, config.checkpoint_downloads,
    config.embeddings_downloads, config.lora_downloads)
download_models()

from webui import *
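Note the `GRADIO_SERVER_PORT` difference in the second hunk: main only sets the port when it is absent, so a user-supplied port survives, while v2.1.851 overwrites it unconditionally. The main-branch guard is equivalent to:

```python
import os

# set a default port without clobbering a user-provided value
os.environ.setdefault("GRADIO_SERVER_PORT", "7865")
```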
@@ -11,7 +11,7 @@ import math
import time
import random

from PIL import Image, ImageOps, ImageSequence
from PIL import Image, ImageOps
from PIL.PngImagePlugin import PngInfo
import numpy as np
import safetensors.torch
@@ -361,62 +361,6 @@ class VAEEncodeForInpaint:

        return ({"samples":t, "noise_mask": (mask_erosion[:,:,:x,:y].round())}, )


class InpaintModelConditioning:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {"positive": ("CONDITIONING", ),
                             "negative": ("CONDITIONING", ),
                             "vae": ("VAE", ),
                             "pixels": ("IMAGE", ),
                             "mask": ("MASK", ),
                             }}

    RETURN_TYPES = ("CONDITIONING","CONDITIONING","LATENT")
    RETURN_NAMES = ("positive", "negative", "latent")
    FUNCTION = "encode"

    CATEGORY = "conditioning/inpaint"

    def encode(self, positive, negative, pixels, vae, mask):
        x = (pixels.shape[1] // 8) * 8
        y = (pixels.shape[2] // 8) * 8
        mask = torch.nn.functional.interpolate(mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1])), size=(pixels.shape[1], pixels.shape[2]), mode="bilinear")

        orig_pixels = pixels
        pixels = orig_pixels.clone()
        if pixels.shape[1] != x or pixels.shape[2] != y:
            x_offset = (pixels.shape[1] % 8) // 2
            y_offset = (pixels.shape[2] % 8) // 2
            pixels = pixels[:,x_offset:x + x_offset, y_offset:y + y_offset,:]
            mask = mask[:,:,x_offset:x + x_offset, y_offset:y + y_offset]

        m = (1.0 - mask.round()).squeeze(1)
        for i in range(3):
            pixels[:,:,:,i] -= 0.5
            pixels[:,:,:,i] *= m
            pixels[:,:,:,i] += 0.5
        concat_latent = vae.encode(pixels)
        orig_latent = vae.encode(orig_pixels)

        out_latent = {}

        out_latent["samples"] = orig_latent
        out_latent["noise_mask"] = mask

        out = []
        for conditioning in [positive, negative]:
            c = []
            for t in conditioning:
                d = t[1].copy()
                d["concat_latent_image"] = concat_latent
                d["concat_mask"] = mask
                n = [t[0], d]
                c.append(n)
            out.append(c)
        return (out[0], out[1], out_latent)


class SaveLatent:
    def __init__(self):
        self.output_dir = ldm_patched.utils.path_utils.get_output_directory()
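In `InpaintModelConditioning.encode` above (removed in v2.1.851), the per-channel loop recentres pixels around 0.5 and zeroes them where the mask is set, so the inpaint region becomes mid-grey before VAE encoding. A worked sketch of that arithmetic:

```python
import torch

p = torch.tensor([0.2, 0.9])  # two example pixel values in [0, 1]
m = torch.tensor([0.0, 1.0])  # m == 0 marks the region to inpaint
print((p - 0.5) * m + 0.5)    # tensor([0.5000, 0.9000]): masked pixel -> grey,
                              # unmasked pixel unchanged
```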
@@ -1468,32 +1412,17 @@ class LoadImage:
    FUNCTION = "load_image"
    def load_image(self, image):
        image_path = ldm_patched.utils.path_utils.get_annotated_filepath(image)
        img = Image.open(image_path)
        output_images = []
        output_masks = []
        for i in ImageSequence.Iterator(img):
            i = ImageOps.exif_transpose(i)
            if i.mode == 'I':
                i = i.point(lambda i: i * (1 / 255))
            image = i.convert("RGB")
            image = np.array(image).astype(np.float32) / 255.0
            image = torch.from_numpy(image)[None,]
            if 'A' in i.getbands():
                mask = np.array(i.getchannel('A')).astype(np.float32) / 255.0
                mask = 1. - torch.from_numpy(mask)
            else:
                mask = torch.zeros((64,64), dtype=torch.float32, device="cpu")
            output_images.append(image)
            output_masks.append(mask.unsqueeze(0))

        if len(output_images) > 1:
            output_image = torch.cat(output_images, dim=0)
            output_mask = torch.cat(output_masks, dim=0)
        i = Image.open(image_path)
        i = ImageOps.exif_transpose(i)
        image = i.convert("RGB")
        image = np.array(image).astype(np.float32) / 255.0
        image = torch.from_numpy(image)[None,]
        if 'A' in i.getbands():
            mask = np.array(i.getchannel('A')).astype(np.float32) / 255.0
            mask = 1. - torch.from_numpy(mask)
        else:
            output_image = output_images[0]
            output_mask = output_masks[0]

        return (output_image, output_mask)
            mask = torch.zeros((64,64), dtype=torch.float32, device="cpu")
        return (image, mask.unsqueeze(0))

    @classmethod
    def IS_CHANGED(s, image):
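The main-branch `load_image` walks every frame with PIL's `ImageSequence.Iterator`, so multi-frame files load as a batch; v2.1.851 reads a single frame. A minimal sketch of the iterator, with a hypothetical file path:

```python
from PIL import Image, ImageSequence

img = Image.open("example.gif")  # hypothetical multi-frame file
for frame in ImageSequence.Iterator(img):
    print(frame.size, frame.mode)  # one line per frame; single-frame
                                   # images yield exactly one frame
```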
@@ -1530,8 +1459,6 @@ class LoadImageMask:
        i = Image.open(image_path)
        i = ImageOps.exif_transpose(i)
        if i.getbands() != ("R", "G", "B", "A"):
            if i.mode == 'I':
                i = i.point(lambda i: i * (1 / 255))
            i = i.convert("RGBA")
        mask = None
        c = channel[0].upper()
@@ -1553,10 +1480,13 @@ class LoadImageMask:
        return m.digest().hex()

    @classmethod
    def VALIDATE_INPUTS(s, image):
    def VALIDATE_INPUTS(s, image, channel):
        if not ldm_patched.utils.path_utils.exists_annotated_filepath(image):
            return "Invalid image file: {}".format(image)

        if channel not in s._color_channels:
            return "Invalid color channel: {}".format(channel)

        return True

class ImageScale:
@@ -1686,11 +1616,10 @@ class ImagePadForOutpaint:
    def expand_image(self, image, left, top, right, bottom, feathering):
        d1, d2, d3, d4 = image.size()

        new_image = torch.ones(
        new_image = torch.zeros(
            (d1, d2 + top + bottom, d3 + left + right, d4),
            dtype=torch.float32,
        ) * 0.5

        )
        new_image[:, top:top + d2, left:left + d3, :] = image

        mask = torch.ones(
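The padding change is visible in the constructor call: main fills the new outpaint border with `torch.ones(...) * 0.5` (mid-grey), v2.1.851 with `torch.zeros(...)` (black). A tiny sketch of the two fills:

```python
import torch

shape = (1, 4, 4, 3)                                 # illustrative NHWC shape
grey = torch.ones(shape, dtype=torch.float32) * 0.5  # main: 0.5 = mid-grey in [0, 1]
black = torch.zeros(shape, dtype=torch.float32)      # v2.1.851: 0.0 = black
```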
@@ -1782,7 +1711,6 @@ NODE_CLASS_MAPPINGS = {
    "unCLIPCheckpointLoader": unCLIPCheckpointLoader,
    "GLIGENLoader": GLIGENLoader,
    "GLIGENTextBoxApply": GLIGENTextBoxApply,
    "InpaintModelConditioning": InpaintModelConditioning,

    "CheckpointLoader": CheckpointLoader,
    "DiffusersLoader": DiffusersLoader,
@@ -1943,9 +1871,6 @@ def init_custom_nodes():
        "nodes_video_model.py",
        "nodes_sag.py",
        "nodes_perpneg.py",
        "nodes_stable3d.py",
        "nodes_sdupscale.py",
        "nodes_photomaker.py",
    ]

    for node_file in extras_files:
@@ -78,7 +78,7 @@ def spatial_gradient(input, normalized: bool = True):
    Return:
        the derivatives of the input feature map. with shape :math:`(B, C, 2, H, W)`.
    .. note::
       See a working example `here <https://kornia.readthedocs.io/en/latest/
       See a working example `here <https://kornia-tutorials.readthedocs.io/en/latest/
       filtering_edges.html>`__.
    Examples:
        >>> input = torch.rand(1, 3, 4, 4)
@@ -120,7 +120,7 @@ def rgb_to_grayscale(image, rgb_weights = None):
        grayscale version of the image with shape :math:`(*,1,H,W)`.

    .. note::
       See a working example `here <https://kornia.readthedocs.io/en/latest/
       See a working example `here <https://kornia-tutorials.readthedocs.io/en/latest/
       color_conversions.html>`__.

    Example:
@@ -176,7 +176,7 @@ def canny(
        - the canny edge magnitudes map, shape of :math:`(B,1,H,W)`.
        - the canny edge detection filtered by thresholds and hysteresis, shape of :math:`(B,1,H,W)`.
    .. note::
       See a working example `here <https://kornia.readthedocs.io/en/latest/
       See a working example `here <https://kornia-tutorials.readthedocs.io/en/latest/
       canny.html>`__.
    Example:
        >>> input = torch.rand(5, 3, 4, 4)
@@ -15,7 +15,6 @@ class BasicScheduler:
                    {"model": ("MODEL",),
                     "scheduler": (ldm_patched.modules.samplers.SCHEDULER_NAMES, ),
                     "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
                     "denoise": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
                     }
                }
    RETURN_TYPES = ("SIGMAS",)
@@ -23,14 +22,8 @@ class BasicScheduler:

    FUNCTION = "get_sigmas"

    def get_sigmas(self, model, scheduler, steps, denoise):
        total_steps = steps
        if denoise < 1.0:
            total_steps = int(steps/denoise)

        ldm_patched.modules.model_management.load_models_gpu([model])
        sigmas = ldm_patched.modules.samplers.calculate_sigmas_scheduler(model.model, scheduler, total_steps).cpu()
        sigmas = sigmas[-(steps + 1):]
    def get_sigmas(self, model, scheduler, steps):
        sigmas = ldm_patched.modules.samplers.calculate_sigmas_scheduler(model.model, scheduler, steps).cpu()
        return (sigmas, )
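The `denoise` handling added on main computes the sigma schedule at an inflated length and keeps only its low-noise tail, so partial denoising reuses the tail of a longer schedule. Worked numbers:

```python
steps, denoise = 20, 0.5
total_steps = int(steps / denoise)  # 40: schedule computed at double length
# sigmas[-(steps + 1):] then keeps only the last steps + 1 = 21 values of
# that longer schedule, i.e. the low-noise half matching 50% denoise.
```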
@@ -96,7 +89,6 @@ class SDTurboScheduler:
        return {"required":
                    {"model": ("MODEL",),
                     "steps": ("INT", {"default": 1, "min": 1, "max": 10}),
                     "denoise": ("FLOAT", {"default": 1.0, "min": 0, "max": 1.0, "step": 0.01}),
                     }
                }
    RETURN_TYPES = ("SIGMAS",)
@@ -104,10 +96,8 @@ class SDTurboScheduler:

    FUNCTION = "get_sigmas"

    def get_sigmas(self, model, steps, denoise):
        start_step = 10 - int(10 * denoise)
        timesteps = torch.flip(torch.arange(1, 11) * 100 - 1, (0,))[start_step:start_step + steps]
        ldm_patched.modules.model_management.load_models_gpu([model])
    def get_sigmas(self, model, steps):
        timesteps = torch.flip(torch.arange(1, 11) * 100 - 1, (0,))[:steps]
        sigmas = model.model.model_sampling.sigma(timesteps)
        sigmas = torch.cat([sigmas, sigmas.new_zeros([1])])
        return (sigmas, )
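For the Turbo scheduler, main derives a starting offset from `denoise` over a fixed 10-step timestep grid. Worked numbers for a single step at 50% denoise:

```python
import torch

steps, denoise = 1, 0.5
start_step = 10 - int(10 * denoise)  # 5
timesteps = torch.flip(torch.arange(1, 11) * 100 - 1, (0,))[start_step:start_step + steps]
print(timesteps)  # tensor([499]): the mid-schedule timestep
```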
@@ -36,7 +36,7 @@ class FreeU:
    RETURN_TYPES = ("MODEL",)
    FUNCTION = "patch"

    CATEGORY = "model_patches"
    CATEGORY = "_for_testing"

    def patch(self, model, b1, b2, s1, s2):
        model_channels = model.model.model_config.unet_config["model_channels"]
@@ -75,7 +75,7 @@ class FreeU_V2:
    RETURN_TYPES = ("MODEL",)
    FUNCTION = "patch"

    CATEGORY = "model_patches"
    CATEGORY = "_for_testing"

    def patch(self, model, b1, b2, s1, s2):
        model_channels = model.model.model_config.unet_config["model_channels"]
@@ -34,29 +34,29 @@ class HyperTile:
    RETURN_TYPES = ("MODEL",)
    FUNCTION = "patch"

    CATEGORY = "model_patches"
    CATEGORY = "_for_testing"

    def patch(self, model, tile_size, swap_size, max_depth, scale_depth):
        model_channels = model.model.model_config.unet_config["model_channels"]

        apply_to = set()
        temp = model_channels
        for x in range(max_depth + 1):
            apply_to.add(temp)
            temp *= 2

        latent_tile_size = max(32, tile_size) // 8
        self.temp = None

        def hypertile_in(q, k, v, extra_options):
            model_chans = q.shape[-2]
            orig_shape = extra_options['original_shape']
            apply_to = []
            for i in range(max_depth + 1):
                apply_to.append((orig_shape[-2] / (2 ** i)) * (orig_shape[-1] / (2 ** i)))

            if model_chans in apply_to:
            if q.shape[-1] in apply_to:
                shape = extra_options["original_shape"]
                aspect_ratio = shape[-1] / shape[-2]

                hw = q.size(1)
                h, w = round(math.sqrt(hw * aspect_ratio)), round(math.sqrt(hw / aspect_ratio))

                factor = (2 ** apply_to.index(model_chans)) if scale_depth else 1
                factor = 2**((q.shape[-1] // model_channels) - 1) if scale_depth else 1
                nh = random_divisor(h, latent_tile_size * factor, swap_size)
                nw = random_divisor(w, latent_tile_size * factor, swap_size)
@@ -124,34 +124,10 @@ class LatentBatch:
        samples_out["batch_index"] = samples1.get("batch_index", [x for x in range(0, s1.shape[0])]) + samples2.get("batch_index", [x for x in range(0, s2.shape[0])])
        return (samples_out,)

class LatentBatchSeedBehavior:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": { "samples": ("LATENT",),
                              "seed_behavior": (["random", "fixed"],),}}

    RETURN_TYPES = ("LATENT",)
    FUNCTION = "op"

    CATEGORY = "latent/advanced"

    def op(self, samples, seed_behavior):
        samples_out = samples.copy()
        latent = samples["samples"]
        if seed_behavior == "random":
            if 'batch_index' in samples_out:
                samples_out.pop('batch_index')
        elif seed_behavior == "fixed":
            batch_number = samples_out.get("batch_index", [0])[0]
            samples_out["batch_index"] = [batch_number] * latent.shape[0]

        return (samples_out,)

NODE_CLASS_MAPPINGS = {
    "LatentAdd": LatentAdd,
    "LatentSubtract": LatentSubtract,
    "LatentMultiply": LatentMultiply,
    "LatentInterpolate": LatentInterpolate,
    "LatentBatch": LatentBatch,
    "LatentBatchSeedBehavior": LatentBatchSeedBehavior,
}
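`LatentBatchSeedBehavior` (removed in v2.1.851) controls whether each latent in a batch keeps its own noise seed. In the "fixed" branch every entry reuses the first batch index; a worked sketch with illustrative indices:

```python
samples_out = {"batch_index": [3, 7, 9]}               # illustrative per-latent indices
batch_number = samples_out.get("batch_index", [0])[0]  # 3
samples_out["batch_index"] = [batch_number] * 3        # [3, 3, 3]: one shared seed index
```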
@@ -8,7 +8,6 @@ import ldm_patched.modules.utils
from ldm_patched.contrib.external import MAX_RESOLUTION

def composite(destination, source, x, y, mask = None, multiplier = 8, resize_source = False):
    source = source.to(destination.device)
    if resize_source:
        source = torch.nn.functional.interpolate(source, size=(destination.shape[2], destination.shape[3]), mode="bilinear")

@@ -23,7 +22,7 @@ def composite(destination, source, x, y, mask = None, multiplier = 8, resize_source = False):
    if mask is None:
        mask = torch.ones_like(source)
    else:
        mask = mask.to(destination.device, copy=True)
        mask = mask.clone()
        mask = torch.nn.functional.interpolate(mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1])), size=(source.shape[2], source.shape[3]), mode="bilinear")
        mask = ldm_patched.modules.utils.repeat_to_batch_size(mask, source.shape[0])
@@ -121,48 +121,6 @@ class ModelMergeBlocks:
            m.add_patches({k: kp[k]}, 1.0 - ratio, ratio)
        return (m, )

def save_checkpoint(model, clip=None, vae=None, clip_vision=None, filename_prefix=None, output_dir=None, prompt=None, extra_pnginfo=None):
    full_output_folder, filename, counter, subfolder, filename_prefix = ldm_patched.utils.path_utils.get_save_image_path(filename_prefix, output_dir)
    prompt_info = ""
    if prompt is not None:
        prompt_info = json.dumps(prompt)

    metadata = {}

    enable_modelspec = True
    if isinstance(model.model, ldm_patched.modules.model_base.SDXL):
        metadata["modelspec.architecture"] = "stable-diffusion-xl-v1-base"
    elif isinstance(model.model, ldm_patched.modules.model_base.SDXLRefiner):
        metadata["modelspec.architecture"] = "stable-diffusion-xl-v1-refiner"
    else:
        enable_modelspec = False

    if enable_modelspec:
        metadata["modelspec.sai_model_spec"] = "1.0.0"
        metadata["modelspec.implementation"] = "sgm"
        metadata["modelspec.title"] = "{} {}".format(filename, counter)

    #TODO:
    # "stable-diffusion-v1", "stable-diffusion-v1-inpainting", "stable-diffusion-v2-512",
    # "stable-diffusion-v2-768-v", "stable-diffusion-v2-unclip-l", "stable-diffusion-v2-unclip-h",
    # "v2-inpainting"

    if model.model.model_type == ldm_patched.modules.model_base.ModelType.EPS:
        metadata["modelspec.predict_key"] = "epsilon"
    elif model.model.model_type == ldm_patched.modules.model_base.ModelType.V_PREDICTION:
        metadata["modelspec.predict_key"] = "v"

    if not args.disable_server_info:
        metadata["prompt"] = prompt_info
        if extra_pnginfo is not None:
            for x in extra_pnginfo:
                metadata[x] = json.dumps(extra_pnginfo[x])

    output_checkpoint = f"{filename}_{counter:05}_.safetensors"
    output_checkpoint = os.path.join(full_output_folder, output_checkpoint)

    ldm_patched.modules.sd.save_checkpoint(output_checkpoint, model, clip, vae, clip_vision, metadata=metadata)

class CheckpointSave:
    def __init__(self):
        self.output_dir = ldm_patched.utils.path_utils.get_output_directory()
@@ -181,7 +139,46 @@ class CheckpointSave:
    CATEGORY = "advanced/model_merging"

    def save(self, model, clip, vae, filename_prefix, prompt=None, extra_pnginfo=None):
        save_checkpoint(model, clip=clip, vae=vae, filename_prefix=filename_prefix, output_dir=self.output_dir, prompt=prompt, extra_pnginfo=extra_pnginfo)
        full_output_folder, filename, counter, subfolder, filename_prefix = ldm_patched.utils.path_utils.get_save_image_path(filename_prefix, self.output_dir)
        prompt_info = ""
        if prompt is not None:
            prompt_info = json.dumps(prompt)

        metadata = {}

        enable_modelspec = True
        if isinstance(model.model, ldm_patched.modules.model_base.SDXL):
            metadata["modelspec.architecture"] = "stable-diffusion-xl-v1-base"
        elif isinstance(model.model, ldm_patched.modules.model_base.SDXLRefiner):
            metadata["modelspec.architecture"] = "stable-diffusion-xl-v1-refiner"
        else:
            enable_modelspec = False

        if enable_modelspec:
            metadata["modelspec.sai_model_spec"] = "1.0.0"
            metadata["modelspec.implementation"] = "sgm"
            metadata["modelspec.title"] = "{} {}".format(filename, counter)

        #TODO:
        # "stable-diffusion-v1", "stable-diffusion-v1-inpainting", "stable-diffusion-v2-512",
        # "stable-diffusion-v2-768-v", "stable-diffusion-v2-unclip-l", "stable-diffusion-v2-unclip-h",
        # "v2-inpainting"

        if model.model.model_type == ldm_patched.modules.model_base.ModelType.EPS:
            metadata["modelspec.predict_key"] = "epsilon"
        elif model.model.model_type == ldm_patched.modules.model_base.ModelType.V_PREDICTION:
            metadata["modelspec.predict_key"] = "v"

        if not args.disable_server_info:
            metadata["prompt"] = prompt_info
            if extra_pnginfo is not None:
                for x in extra_pnginfo:
                    metadata[x] = json.dumps(extra_pnginfo[x])

        output_checkpoint = f"{filename}_{counter:05}_.safetensors"
        output_checkpoint = os.path.join(full_output_folder, output_checkpoint)

        ldm_patched.modules.sd.save_checkpoint(output_checkpoint, model, clip, vae, metadata=metadata)
        return {}

class CLIPSave:
@@ -1,189 +0,0 @@
# https://github.com/comfyanonymous/ComfyUI/blob/master/nodes.py

import torch
import torch.nn as nn
import ldm_patched.utils.path_utils
import ldm_patched.modules.clip_model
import ldm_patched.modules.clip_vision
import ldm_patched.modules.model_management
import ldm_patched.modules.ops
import ldm_patched.modules.utils

# code for model from: https://github.com/TencentARC/PhotoMaker/blob/main/photomaker/model.py under Apache License Version 2.0
VISION_CONFIG_DICT = {
    "hidden_size": 1024,
    "image_size": 224,
    "intermediate_size": 4096,
    "num_attention_heads": 16,
    "num_channels": 3,
    "num_hidden_layers": 24,
    "patch_size": 14,
    "projection_dim": 768,
    "hidden_act": "quick_gelu",
}

class MLP(nn.Module):
    def __init__(self, in_dim, out_dim, hidden_dim, use_residual=True, operations=ldm_patched.modules.ops):
        super().__init__()
        if use_residual:
            assert in_dim == out_dim
        self.layernorm = operations.LayerNorm(in_dim)
        self.fc1 = operations.Linear(in_dim, hidden_dim)
        self.fc2 = operations.Linear(hidden_dim, out_dim)
        self.use_residual = use_residual
        self.act_fn = nn.GELU()

    def forward(self, x):
        residual = x
        x = self.layernorm(x)
        x = self.fc1(x)
        x = self.act_fn(x)
        x = self.fc2(x)
        if self.use_residual:
            x = x + residual
        return x


class FuseModule(nn.Module):
    def __init__(self, embed_dim, operations):
        super().__init__()
        self.mlp1 = MLP(embed_dim * 2, embed_dim, embed_dim, use_residual=False, operations=operations)
        self.mlp2 = MLP(embed_dim, embed_dim, embed_dim, use_residual=True, operations=operations)
        self.layer_norm = operations.LayerNorm(embed_dim)

    def fuse_fn(self, prompt_embeds, id_embeds):
        stacked_id_embeds = torch.cat([prompt_embeds, id_embeds], dim=-1)
        stacked_id_embeds = self.mlp1(stacked_id_embeds) + prompt_embeds
        stacked_id_embeds = self.mlp2(stacked_id_embeds)
        stacked_id_embeds = self.layer_norm(stacked_id_embeds)
        return stacked_id_embeds

    def forward(
        self,
        prompt_embeds,
        id_embeds,
        class_tokens_mask,
    ) -> torch.Tensor:
        # id_embeds shape: [b, max_num_inputs, 1, 2048]
        id_embeds = id_embeds.to(prompt_embeds.dtype)
        num_inputs = class_tokens_mask.sum().unsqueeze(0)  # TODO: check for training case
        batch_size, max_num_inputs = id_embeds.shape[:2]
        # seq_length: 77
        seq_length = prompt_embeds.shape[1]
        # flat_id_embeds shape: [b*max_num_inputs, 1, 2048]
        flat_id_embeds = id_embeds.view(
            -1, id_embeds.shape[-2], id_embeds.shape[-1]
        )
        # valid_id_mask [b*max_num_inputs]
        valid_id_mask = (
            torch.arange(max_num_inputs, device=flat_id_embeds.device)[None, :]
            < num_inputs[:, None]
        )
        valid_id_embeds = flat_id_embeds[valid_id_mask.flatten()]

        prompt_embeds = prompt_embeds.view(-1, prompt_embeds.shape[-1])
        class_tokens_mask = class_tokens_mask.view(-1)
        valid_id_embeds = valid_id_embeds.view(-1, valid_id_embeds.shape[-1])
        # slice out the image token embeddings
        image_token_embeds = prompt_embeds[class_tokens_mask]
        stacked_id_embeds = self.fuse_fn(image_token_embeds, valid_id_embeds)
        assert class_tokens_mask.sum() == stacked_id_embeds.shape[0], f"{class_tokens_mask.sum()} != {stacked_id_embeds.shape[0]}"
        prompt_embeds.masked_scatter_(class_tokens_mask[:, None], stacked_id_embeds.to(prompt_embeds.dtype))
        updated_prompt_embeds = prompt_embeds.view(batch_size, seq_length, -1)
        return updated_prompt_embeds
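To make the tensor bookkeeping in FuseModule.forward easier to follow, here is a small shape walkthrough with toy dimensions (batch 1, sequence 77, embed dim 2048, two ID slots, both valid); purely illustrative, not part of the diff:

# illustrative shape walkthrough -- not part of the diff
import torch

b, seq, dim, max_ids = 1, 77, 2048, 2
prompt_embeds = torch.randn(b, seq, dim)
id_embeds = torch.randn(b, max_ids, 1, dim)

# mark two prompt positions as class tokens
class_tokens_mask = torch.zeros(b, seq, dtype=torch.bool)
class_tokens_mask[0, 5] = True
class_tokens_mask[0, 6] = True

flat_id_embeds = id_embeds.view(-1, 1, dim)        # [b*max_ids, 1, dim]
num_inputs = class_tokens_mask.sum().unsqueeze(0)  # tensor([2])
valid_id_mask = (
    torch.arange(max_ids)[None, :] < num_inputs[:, None]
)                                                  # [[True, True]]
valid_id_embeds = flat_id_embeds[valid_id_mask.flatten()]
print(valid_id_embeds.shape)                       # torch.Size([2, 1, 2048])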
class PhotoMakerIDEncoder(ldm_patched.modules.clip_model.CLIPVisionModelProjection):
    def __init__(self):
        self.load_device = ldm_patched.modules.model_management.text_encoder_device()
        offload_device = ldm_patched.modules.model_management.text_encoder_offload_device()
        dtype = ldm_patched.modules.model_management.text_encoder_dtype(self.load_device)

        super().__init__(VISION_CONFIG_DICT, dtype, offload_device, ldm_patched.modules.ops.manual_cast)
        self.visual_projection_2 = ldm_patched.modules.ops.manual_cast.Linear(1024, 1280, bias=False)
        self.fuse_module = FuseModule(2048, ldm_patched.modules.ops.manual_cast)

    def forward(self, id_pixel_values, prompt_embeds, class_tokens_mask):
        b, num_inputs, c, h, w = id_pixel_values.shape
        id_pixel_values = id_pixel_values.view(b * num_inputs, c, h, w)

        shared_id_embeds = self.vision_model(id_pixel_values)[2]
        id_embeds = self.visual_projection(shared_id_embeds)
        id_embeds_2 = self.visual_projection_2(shared_id_embeds)

        id_embeds = id_embeds.view(b, num_inputs, 1, -1)
        id_embeds_2 = id_embeds_2.view(b, num_inputs, 1, -1)

        id_embeds = torch.cat((id_embeds, id_embeds_2), dim=-1)
        updated_prompt_embeds = self.fuse_module(prompt_embeds, id_embeds, class_tokens_mask)

        return updated_prompt_embeds


class PhotoMakerLoader:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": { "photomaker_model_name": (ldm_patched.utils.path_utils.get_filename_list("photomaker"), )}}

    RETURN_TYPES = ("PHOTOMAKER",)
    FUNCTION = "load_photomaker_model"

    CATEGORY = "_for_testing/photomaker"

    def load_photomaker_model(self, photomaker_model_name):
        photomaker_model_path = ldm_patched.utils.path_utils.get_full_path("photomaker", photomaker_model_name)
        photomaker_model = PhotoMakerIDEncoder()
        data = ldm_patched.modules.utils.load_torch_file(photomaker_model_path, safe_load=True)
        if "id_encoder" in data:
            data = data["id_encoder"]
        photomaker_model.load_state_dict(data)
        return (photomaker_model,)


class PhotoMakerEncode:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": { "photomaker": ("PHOTOMAKER",),
                              "image": ("IMAGE",),
                              "clip": ("CLIP", ),
                              "text": ("STRING", {"multiline": True, "default": "photograph of photomaker"}),
                             }}

    RETURN_TYPES = ("CONDITIONING",)
    FUNCTION = "apply_photomaker"

    CATEGORY = "_for_testing/photomaker"

    def apply_photomaker(self, photomaker, image, clip, text):
        special_token = "photomaker"
        pixel_values = ldm_patched.modules.clip_vision.clip_preprocess(image.to(photomaker.load_device)).float()
        try:
            index = text.split(" ").index(special_token) + 1
        except ValueError:
            index = -1
        tokens = clip.tokenize(text, return_word_ids=True)
        out_tokens = {}
        for k in tokens:
            out_tokens[k] = []
            for t in tokens[k]:
                f = list(filter(lambda x: x[2] != index, t))
                while len(f) < len(t):
                    f.append(t[-1])
                out_tokens[k].append(f)

        cond, pooled = clip.encode_from_tokens(out_tokens, return_pooled=True)

        if index > 0:
            token_index = index - 1
            num_id_images = 1
            class_tokens_mask = [True if token_index <= i < token_index + num_id_images else False for i in range(77)]
            out = photomaker(id_pixel_values=pixel_values.unsqueeze(0), prompt_embeds=cond.to(photomaker.load_device),
                             class_tokens_mask=torch.tensor(class_tokens_mask, dtype=torch.bool, device=photomaker.load_device).unsqueeze(0))
        else:
            out = cond

        return ([[out, {"pooled_output": pooled}]], )


NODE_CLASS_MAPPINGS = {
    "PhotoMakerLoader": PhotoMakerLoader,
    "PhotoMakerEncode": PhotoMakerEncode,
}
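One subtle step in apply_photomaker above is locating the trigger word and filtering its tokens out before encoding. A toy, self-contained illustration of that index logic, using a hypothetical prompt and hand-made (token_id, weight, word_id) triples in place of a real tokenizer (illustrative, not part of the diff):

# illustrative sketch -- not part of the diff
text = "photograph of photomaker man"
special_token = "photomaker"

try:
    # 1-based word id of the trigger word, as in apply_photomaker
    index = text.split(" ").index(special_token) + 1
except ValueError:
    index = -1          # trigger absent: conditioning passes through unchanged

print(index)            # 3 -> word id of "photomaker" (word ids are 1-based)

# drop every token carrying the trigger's word id, then pad back to length
tokens = [(101, 1.0, 1), (102, 1.0, 2), (103, 1.0, 3), (104, 1.0, 4)]
filtered = [t for t in tokens if t[2] != index]
filtered += [tokens[-1]] * (len(tokens) - len(filtered))
print(filtered)         # (103, 1.0, 3) removed, last token repeated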
@@ -35,7 +35,6 @@ class Blend:
    CATEGORY = "image/postprocessing"

    def blend_images(self, image1: torch.Tensor, image2: torch.Tensor, blend_factor: float, blend_mode: str):
        image2 = image2.to(image1.device)
        if image1.shape != image2.shape:
            image2 = image2.permute(0, 3, 1, 2)
            image2 = ldm_patched.modules.utils.common_upscale(image2, image1.shape[2], image1.shape[1], upscale_method='bicubic', crop='center')
@@ -101,40 +101,10 @@ class LatentRebatch:

        return (output_list,)

class ImageRebatch:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": { "images": ("IMAGE",),
                              "batch_size": ("INT", {"default": 1, "min": 1, "max": 4096}),
                             }}
    RETURN_TYPES = ("IMAGE",)
    INPUT_IS_LIST = True
    OUTPUT_IS_LIST = (True, )

    FUNCTION = "rebatch"

    CATEGORY = "image/batch"

    def rebatch(self, images, batch_size):
        batch_size = batch_size[0]

        output_list = []
        all_images = []
        for img in images:
            for i in range(img.shape[0]):
                all_images.append(img[i:i+1])

        for i in range(0, len(all_images), batch_size):
            output_list.append(torch.cat(all_images[i:i+batch_size], dim=0))

        return (output_list,)

NODE_CLASS_MAPPINGS = {
    "RebatchLatents": LatentRebatch,
    "RebatchImages": ImageRebatch,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "RebatchLatents": "Rebatch Latents",
    "RebatchImages": "Rebatch Images",
}
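The rebatch logic above flattens every incoming image batch to single frames and regroups them at the requested size. A self-contained check of that regrouping with toy tensors (illustrative, not part of the diff):

# illustrative sketch -- not part of the diff
import torch

images = [torch.zeros(3, 8, 8, 3), torch.zeros(5, 8, 8, 3)]  # two batches
batch_size = 4

all_images = [img[i:i + 1] for img in images for i in range(img.shape[0])]
output_list = [
    torch.cat(all_images[i:i + batch_size], dim=0)
    for i in range(0, len(all_images), batch_size)
]
print([o.shape[0] for o in output_list])  # [4, 4] -> 8 frames rebatched as 4+4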
@@ -60,7 +60,7 @@ def create_blur_map(x0, attn, sigma=3.0, threshold=1.0):
    attn = attn.reshape(b, -1, hw1, hw2)
    # Global Average Pool
    mask = attn.mean(1, keepdim=False).sum(1, keepdim=False) > threshold
    ratio = 2**(math.ceil(math.sqrt(lh * lw / hw1)) - 1).bit_length()
    ratio = math.ceil(math.sqrt(lh * lw / hw1))
    mid_shape = [math.ceil(lh / ratio), math.ceil(lw / ratio)]

    # Reshape
@@ -145,8 +145,6 @@ class SelfAttentionGuidance:
            sigma = args["sigma"]
            model_options = args["model_options"]
            x = args["input"]
            if min(cfg_result.shape[2:]) <= 4:  # skip when too small to add padding
                return cfg_result

            # create the adversarially blurred image
            degraded = create_blur_map(uncond_pred, uncond_attn, sag_sigma, sag_threshold)
@@ -155,7 +153,7 @@ class SelfAttentionGuidance:
            (sag, _) = ldm_patched.modules.samplers.calc_cond_uncond_batch(model, uncond, None, degraded_noised, sigma, model_options)
            return cfg_result + (degraded - sag) * sag_scale

        m.set_model_sampler_post_cfg_function(post_cfg_function, disable_cfg1_optimization=True)
        m.set_model_sampler_post_cfg_function(post_cfg_function)

        # from diffusers:
        # unet.mid_block.attentions[0].transformer_blocks[0].attn1.patch
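The guidance rule in the hunk above is cfg_result + (degraded - sag) * sag_scale: the difference between the blurred prediction and its re-denoised counterpart is added back as a correction. A scalar toy version of that update (illustrative, not part of the diff):

# illustrative sketch -- not part of the diff
import torch

cfg_result = torch.tensor([0.50])   # hypothetical CFG output
degraded   = torch.tensor([0.45])   # blurred uncond prediction
sag        = torch.tensor([0.40])   # model output on the degraded input
sag_scale  = 0.75

guided = cfg_result + (degraded - sag) * sag_scale
print(guided)  # tensor([0.5375])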
@@ -1,49 +0,0 @@
# https://github.com/comfyanonymous/ComfyUI/blob/master/nodes.py

import torch
import ldm_patched.contrib.external
import ldm_patched.modules.utils


class SD_4XUpscale_Conditioning:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": { "images": ("IMAGE",),
                              "positive": ("CONDITIONING",),
                              "negative": ("CONDITIONING",),
                              "scale_ratio": ("FLOAT", {"default": 4.0, "min": 0.0, "max": 10.0, "step": 0.01}),
                              "noise_augmentation": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.001}),
                             }}
    RETURN_TYPES = ("CONDITIONING", "CONDITIONING", "LATENT")
    RETURN_NAMES = ("positive", "negative", "latent")

    FUNCTION = "encode"

    CATEGORY = "conditioning/upscale_diffusion"

    def encode(self, images, positive, negative, scale_ratio, noise_augmentation):
        width = max(1, round(images.shape[-2] * scale_ratio))
        height = max(1, round(images.shape[-3] * scale_ratio))

        pixels = ldm_patched.modules.utils.common_upscale((images.movedim(-1,1) * 2.0) - 1.0, width // 4, height // 4, "bilinear", "center")

        out_cp = []
        out_cn = []

        for t in positive:
            n = [t[0], t[1].copy()]
            n[1]['concat_image'] = pixels
            n[1]['noise_augmentation'] = noise_augmentation
            out_cp.append(n)

        for t in negative:
            n = [t[0], t[1].copy()]
            n[1]['concat_image'] = pixels
            n[1]['noise_augmentation'] = noise_augmentation
            out_cn.append(n)

        latent = torch.zeros([images.shape[0], 4, height // 4, width // 4])
        return (out_cp, out_cn, {"samples":latent})

NODE_CLASS_MAPPINGS = {
    "SD_4XUpscale_Conditioning": SD_4XUpscale_Conditioning,
}
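Note the size arithmetic in encode() above: the target is scale_ratio times the input, but both the concat image and the empty latent are allocated at a quarter of that, since the 4x upscaler's latent space runs at 1/4 resolution. A quick check of those numbers with a hypothetical 128x128 input (illustrative, not part of the diff):

# illustrative sketch -- not part of the diff
h_in, w_in, scale_ratio = 128, 128, 4.0

width = max(1, round(w_in * scale_ratio))    # 512
height = max(1, round(h_in * scale_ratio))   # 512
latent_hw = (height // 4, width // 4)        # (128, 128) latent grid
print(width, height, latent_hw)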
@@ -1,104 +0,0 @@
# https://github.com/comfyanonymous/ComfyUI/blob/master/nodes.py

import torch
import ldm_patched.contrib.external
import ldm_patched.modules.utils


def camera_embeddings(elevation, azimuth):
    elevation = torch.as_tensor([elevation])
    azimuth = torch.as_tensor([azimuth])
    embeddings = torch.stack(
        [
            torch.deg2rad(
                (90 - elevation) - (90)
            ),  # Zero123 polar is 90-elevation
            torch.sin(torch.deg2rad(azimuth)),
            torch.cos(torch.deg2rad(azimuth)),
            torch.deg2rad(
                90 - torch.full_like(elevation, 0)
            ),
        ], dim=-1).unsqueeze(1)

    return embeddings
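camera_embeddings() above packs four numbers per view: the polar angle in radians (Zero123 uses 90 minus elevation, so the (90 - elevation) - (90) term reduces to -elevation), sin and cos of the azimuth, and a constant 90 degrees in radians. A worked evaluation at elevation=10, azimuth=30 (illustrative, not part of the diff):

# illustrative sketch -- not part of the diff
import torch

elevation, azimuth = 10.0, 30.0
e = torch.as_tensor([elevation])
a = torch.as_tensor([azimuth])

emb = torch.stack([
    torch.deg2rad((90 - e) - 90),               # -0.1745 (= -elevation in rad)
    torch.sin(torch.deg2rad(a)),                #  0.5000
    torch.cos(torch.deg2rad(a)),                #  0.8660
    torch.deg2rad(90 - torch.full_like(e, 0)),  #  1.5708 (constant pi/2)
], dim=-1).unsqueeze(1)
print(emb.shape)  # torch.Size([1, 1, 4])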
class StableZero123_Conditioning:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": { "clip_vision": ("CLIP_VISION",),
                              "init_image": ("IMAGE",),
                              "vae": ("VAE",),
                              "width": ("INT", {"default": 256, "min": 16, "max": ldm_patched.contrib.external.MAX_RESOLUTION, "step": 8}),
                              "height": ("INT", {"default": 256, "min": 16, "max": ldm_patched.contrib.external.MAX_RESOLUTION, "step": 8}),
                              "batch_size": ("INT", {"default": 1, "min": 1, "max": 4096}),
                              "elevation": ("FLOAT", {"default": 0.0, "min": -180.0, "max": 180.0}),
                              "azimuth": ("FLOAT", {"default": 0.0, "min": -180.0, "max": 180.0}),
                             }}
    RETURN_TYPES = ("CONDITIONING", "CONDITIONING", "LATENT")
    RETURN_NAMES = ("positive", "negative", "latent")

    FUNCTION = "encode"

    CATEGORY = "conditioning/3d_models"

    def encode(self, clip_vision, init_image, vae, width, height, batch_size, elevation, azimuth):
        output = clip_vision.encode_image(init_image)
        pooled = output.image_embeds.unsqueeze(0)
        pixels = ldm_patched.modules.utils.common_upscale(init_image.movedim(-1,1), width, height, "bilinear", "center").movedim(1,-1)
        encode_pixels = pixels[:,:,:,:3]
        t = vae.encode(encode_pixels)
        cam_embeds = camera_embeddings(elevation, azimuth)
        cond = torch.cat([pooled, cam_embeds.to(pooled.device).repeat((pooled.shape[0], 1, 1))], dim=-1)

        positive = [[cond, {"concat_latent_image": t}]]
        negative = [[torch.zeros_like(pooled), {"concat_latent_image": torch.zeros_like(t)}]]
        latent = torch.zeros([batch_size, 4, height // 8, width // 8])
        return (positive, negative, {"samples":latent})

class StableZero123_Conditioning_Batched:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": { "clip_vision": ("CLIP_VISION",),
                              "init_image": ("IMAGE",),
                              "vae": ("VAE",),
                              "width": ("INT", {"default": 256, "min": 16, "max": ldm_patched.contrib.external.MAX_RESOLUTION, "step": 8}),
                              "height": ("INT", {"default": 256, "min": 16, "max": ldm_patched.contrib.external.MAX_RESOLUTION, "step": 8}),
                              "batch_size": ("INT", {"default": 1, "min": 1, "max": 4096}),
                              "elevation": ("FLOAT", {"default": 0.0, "min": -180.0, "max": 180.0}),
                              "azimuth": ("FLOAT", {"default": 0.0, "min": -180.0, "max": 180.0}),
                              "elevation_batch_increment": ("FLOAT", {"default": 0.0, "min": -180.0, "max": 180.0}),
                              "azimuth_batch_increment": ("FLOAT", {"default": 0.0, "min": -180.0, "max": 180.0}),
                             }}
    RETURN_TYPES = ("CONDITIONING", "CONDITIONING", "LATENT")
    RETURN_NAMES = ("positive", "negative", "latent")

    FUNCTION = "encode"

    CATEGORY = "conditioning/3d_models"

    def encode(self, clip_vision, init_image, vae, width, height, batch_size, elevation, azimuth, elevation_batch_increment, azimuth_batch_increment):
        output = clip_vision.encode_image(init_image)
        pooled = output.image_embeds.unsqueeze(0)
        pixels = ldm_patched.modules.utils.common_upscale(init_image.movedim(-1,1), width, height, "bilinear", "center").movedim(1,-1)
        encode_pixels = pixels[:,:,:,:3]
        t = vae.encode(encode_pixels)

        cam_embeds = []
        for i in range(batch_size):
            cam_embeds.append(camera_embeddings(elevation, azimuth))
            elevation += elevation_batch_increment
            azimuth += azimuth_batch_increment

        cam_embeds = torch.cat(cam_embeds, dim=0)
        cond = torch.cat([ldm_patched.modules.utils.repeat_to_batch_size(pooled, batch_size), cam_embeds], dim=-1)

        positive = [[cond, {"concat_latent_image": t}]]
        negative = [[torch.zeros_like(pooled), {"concat_latent_image": torch.zeros_like(t)}]]
        latent = torch.zeros([batch_size, 4, height // 8, width // 8])
        return (positive, negative, {"samples":latent, "batch_index": [0] * batch_size})


NODE_CLASS_MAPPINGS = {
    "StableZero123_Conditioning": StableZero123_Conditioning,
    "StableZero123_Conditioning_Batched": StableZero123_Conditioning_Batched,
}
@@ -5,7 +5,6 @@ import torch
import ldm_patched.modules.utils
import ldm_patched.modules.sd
import ldm_patched.utils.path_utils
import ldm_patched.contrib.external_model_merging


class ImageOnlyCheckpointLoader:
@@ -81,26 +80,10 @@ class VideoLinearCFGGuidance:
        m.set_model_sampler_cfg_function(linear_cfg)
        return (m, )

class ImageOnlyCheckpointSave(ldm_patched.contrib.external_model_merging.CheckpointSave):
    CATEGORY = "_for_testing"

    @classmethod
    def INPUT_TYPES(s):
        return {"required": { "model": ("MODEL",),
                              "clip_vision": ("CLIP_VISION",),
                              "vae": ("VAE",),
                              "filename_prefix": ("STRING", {"default": "checkpoints/ldm_patched"}),},
                "hidden": {"prompt": "PROMPT", "extra_pnginfo": "EXTRA_PNGINFO"},}

    def save(self, model, clip_vision, vae, filename_prefix, prompt=None, extra_pnginfo=None):
        ldm_patched.contrib.external_model_merging.save_checkpoint(model, clip_vision=clip_vision, vae=vae, filename_prefix=filename_prefix, output_dir=self.output_dir, prompt=prompt, extra_pnginfo=extra_pnginfo)
        return {}

NODE_CLASS_MAPPINGS = {
    "ImageOnlyCheckpointLoader": ImageOnlyCheckpointLoader,
    "SVD_img2vid_Conditioning": SVD_img2vid_Conditioning,
    "VideoLinearCFGGuidance": VideoLinearCFGGuidance,
    "ImageOnlyCheckpointSave": ImageOnlyCheckpointSave,
}

NODE_DISPLAY_NAME_MAPPINGS = {
@@ -8,7 +8,6 @@ from ldm_patched.ldm.modules.distributions.distributions import DiagonalGaussianDistribution

from ldm_patched.ldm.util import instantiate_from_config
from ldm_patched.ldm.modules.ema import LitEma
import ldm_patched.modules.ops

class DiagonalGaussianRegularizer(torch.nn.Module):
    def __init__(self, sample: bool = True):
@@ -162,12 +161,12 @@ class AutoencodingEngineLegacy(AutoencodingEngine):
            },
            **kwargs,
        )
        self.quant_conv = ldm_patched.modules.ops.disable_weight_init.Conv2d(
        self.quant_conv = torch.nn.Conv2d(
            (1 + ddconfig["double_z"]) * ddconfig["z_channels"],
            (1 + ddconfig["double_z"]) * embed_dim,
            1,
        )
        self.post_quant_conv = ldm_patched.modules.ops.disable_weight_init.Conv2d(embed_dim, ddconfig["z_channels"], 1)
        self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1)
        self.embed_dim = embed_dim

    def get_autoencoder_params(self) -> list:
@@ -1,9 +1,12 @@
from inspect import isfunction
import math
import torch
import torch.nn.functional as F
from torch import nn, einsum
from einops import rearrange, repeat
from typing import Optional, Any
from functools import partial


from .diffusionmodules.util import checkpoint, AlphaBlender, timestep_embedding
from .sub_quadratic_attention import efficient_dot_product_attention
@@ -174,7 +177,6 @@ def attention_sub_quad(query, key, value, heads, mask=None):
        kv_chunk_size_min=kv_chunk_size_min,
        use_checkpoint=False,
        upcast_attention=upcast_attention,
        mask=mask,
    )

    hidden_states = hidden_states.to(dtype)
@@ -237,12 +239,6 @@ def attention_split(q, k, v, heads, mask=None):
            else:
                s1 = einsum('b i d, b j d -> b i j', q[:, i:end], k) * scale

            if mask is not None:
                if len(mask.shape) == 2:
                    s1 += mask[i:end]
                else:
                    s1 += mask[:, i:end]

            s2 = s1.softmax(dim=-1).to(v.dtype)
            del s1
            first_op_done = True
@@ -298,14 +294,11 @@ def attention_xformers(q, k, v, heads, mask=None):
        (q, k, v),
    )

    if mask is not None:
        pad = 8 - q.shape[1] % 8
        mask_out = torch.empty([q.shape[0], q.shape[1], q.shape[1] + pad], dtype=q.dtype, device=q.device)
        mask_out[:, :, :mask.shape[-1]] = mask
        mask = mask_out[:, :, :mask.shape[-1]]

    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=mask)
    # actually compute the attention, what we cannot get enough of
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)

    if exists(mask):
        raise NotImplementedError
    out = (
        out.unsqueeze(0)
        .reshape(b, heads, -1, dim_head)
@@ -330,6 +323,7 @@ def attention_pytorch(q, k, v, heads, mask=None):


optimized_attention = attention_basic
optimized_attention_masked = attention_basic

if model_management.xformers_enabled():
    print("Using xformers cross attention")
@@ -345,18 +339,15 @@ else:
        print("Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --attention-split")
        optimized_attention = attention_sub_quad

optimized_attention_masked = optimized_attention
if model_management.pytorch_attention_enabled():
    optimized_attention_masked = attention_pytorch

def optimized_attention_for_device(device, mask=False, small_input=False):
    if small_input:
def optimized_attention_for_device(device, mask=False):
    if device == torch.device("cpu"): #TODO
        if model_management.pytorch_attention_enabled():
            return attention_pytorch #TODO: need to confirm but this is probably slightly faster for small inputs in all cases
            return attention_pytorch
        else:
            return attention_basic

    if device == torch.device("cpu"):
        return attention_sub_quad

    if mask:
        return optimized_attention_masked
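The module-level selection above picks one attention kernel at import time and a possibly different masked variant; optimized_attention_for_device then adds per-call overrides. A condensed restatement of that dispatch order as a pure function over the same conditions (names follow the code above; illustrative, not part of the diff):

# illustrative sketch -- not part of the diff
def pick_attention(xformers_on, pytorch_on, split_on, small_input, is_cpu, masked):
    # per-call overrides first, mirroring optimized_attention_for_device
    if small_input:
        return "pytorch" if pytorch_on else "basic"
    if is_cpu:
        return "sub_quad"
    # masked calls use the masked variant chosen at import time
    if masked:
        return "pytorch" if pytorch_on else "import-time choice"
    # otherwise fall back to the import-time choice
    if xformers_on:
        return "xformers"
    if pytorch_on:
        return "pytorch"
    return "split" if split_on else "sub_quad"

print(pick_attention(False, True, False, small_input=True, is_cpu=False, masked=False))
# -> "pytorch"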
@@ -41,7 +41,7 @@ def nonlinearity(x):


def Normalize(in_channels, num_groups=32):
    return ops.GroupNorm(num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True)
    return torch.nn.GroupNorm(num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True)


class Upsample(nn.Module):
@@ -1,9 +1,12 @@
from abc import abstractmethod
import math

import numpy as np
import torch as th
import torch.nn as nn
import torch.nn.functional as F
from einops import rearrange
from functools import partial

from .util import (
    checkpoint,
@@ -434,6 +437,9 @@ class UNetModel(nn.Module):
        operations=ops,
    ):
        super().__init__()
        assert use_spatial_transformer == True, "use_spatial_transformer has to be true"
        if use_spatial_transformer:
            assert context_dim is not None, 'Fool!! You forgot to include the dimension of your cross-attention conditioning...'

        if context_dim is not None:
            assert use_spatial_transformer, 'Fool!! You forgot to use the spatial transformer for your cross-attention conditioning...'
@@ -450,6 +456,7 @@ class UNetModel(nn.Module):
        if num_head_channels == -1:
            assert num_heads != -1, 'Either num_heads or num_head_channels has to be set'

        self.image_size = image_size
        self.in_channels = in_channels
        self.model_channels = model_channels
        self.out_channels = out_channels
@@ -495,7 +502,7 @@ class UNetModel(nn.Module):

        if self.num_classes is not None:
            if isinstance(self.num_classes, int):
                self.label_emb = nn.Embedding(num_classes, time_embed_dim, dtype=self.dtype, device=device)
                self.label_emb = nn.Embedding(num_classes, time_embed_dim)
            elif self.num_classes == "continuous":
                print("setting up linear c_adm embedding layer")
                self.label_emb = nn.Linear(1, time_embed_dim)
@@ -41,14 +41,10 @@ class AbstractLowScaleModel(nn.Module):
        self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
        self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))

    def q_sample(self, x_start, t, noise=None, seed=None):
        if noise is None:
            if seed is None:
                noise = torch.randn_like(x_start)
            else:
                noise = torch.randn(x_start.size(), dtype=x_start.dtype, layout=x_start.layout, generator=torch.manual_seed(seed)).to(x_start.device)
        return (extract_into_tensor(self.sqrt_alphas_cumprod.to(x_start.device), t, x_start.shape) * x_start +
                extract_into_tensor(self.sqrt_one_minus_alphas_cumprod.to(x_start.device), t, x_start.shape) * noise)
    def q_sample(self, x_start, t, noise=None):
        noise = default(noise, lambda: torch.randn_like(x_start))
        return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
                extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise)

    def forward(self, x):
        return x, None
@@ -73,12 +69,12 @@ class ImageConcatWithNoiseAugmentation(AbstractLowScaleModel):
        super().__init__(noise_schedule_config=noise_schedule_config)
        self.max_noise_level = max_noise_level

    def forward(self, x, noise_level=None, seed=None):
    def forward(self, x, noise_level=None):
        if noise_level is None:
            noise_level = torch.randint(0, self.max_noise_level, (x.shape[0],), device=x.device).long()
        else:
            assert isinstance(noise_level, torch.Tensor)
        z = self.q_sample(x, noise_level, seed=seed)
        z = self.q_sample(x, noise_level)
        return z, noise_level
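Both q_sample variants above implement the standard closed-form forward diffusion step x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps; the seeded branch only pins the noise draw. A scalar check of that formula with hypothetical values (illustrative, not part of the diff):

# illustrative sketch -- not part of the diff
import math

alpha_bar_t = 0.81   # hypothetical cumulative alpha at step t
x0, eps = 1.0, 0.5   # clean sample and a fixed noise draw

x_t = math.sqrt(alpha_bar_t) * x0 + math.sqrt(1 - alpha_bar_t) * eps
print(x_t)  # 0.9 * 1.0 + ~0.4359 * 0.5 = ~1.118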
@@ -51,9 +51,9 @@ class AlphaBlender(nn.Module):
        if self.merge_strategy == "fixed":
            # make shape compatible
            # alpha = repeat(self.mix_factor, '1 -> b () t () ()', t=t, b=bs)
            alpha = self.mix_factor.to(image_only_indicator.device)
            alpha = self.mix_factor
        elif self.merge_strategy == "learned":
            alpha = torch.sigmoid(self.mix_factor.to(image_only_indicator.device))
            alpha = torch.sigmoid(self.mix_factor)
            # make shape compatible
            # alpha = repeat(alpha, '1 -> s () ()', s = t * bs)
        elif self.merge_strategy == "learned_with_images":
@@ -61,7 +61,7 @@ class AlphaBlender(nn.Module):
            alpha = torch.where(
                image_only_indicator.bool(),
                torch.ones(1, 1, device=image_only_indicator.device),
                rearrange(torch.sigmoid(self.mix_factor.to(image_only_indicator.device)), "... -> ... 1"),
                rearrange(torch.sigmoid(self.mix_factor), "... -> ... 1"),
            )
            alpha = rearrange(alpha, self.rearrange_pattern)
            # make shape compatible
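All three merge strategies above reduce to the same convex blend out = alpha * a + (1 - alpha) * b, differing only in where alpha comes from: a fixed buffer, a learned scalar through sigmoid, or a per-position choice driven by image_only_indicator. A toy version of the learned strategy (illustrative, not part of the diff):

# illustrative sketch -- not part of the diff
import torch

mix_factor = torch.tensor([0.0])           # hypothetical learned parameter
alpha = torch.sigmoid(mix_factor)          # 0.5 at init

x_spatial = torch.full((2, 4), 1.0)
x_temporal = torch.full((2, 4), 3.0)
blended = alpha * x_spatial + (1.0 - alpha) * x_temporal
print(blended[0, 0].item())                # 2.0 -> even spatial/temporal mix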
@@ -15,21 +15,21 @@ class CLIPEmbeddingNoiseAugmentation(ImageConcatWithNoiseAugmentation):

    def scale(self, x):
        # re-normalize to centered mean and unit variance
        x = (x - self.data_mean.to(x.device)) * 1. / self.data_std.to(x.device)
        x = (x - self.data_mean) * 1. / self.data_std
        return x

    def unscale(self, x):
        # back to original data stats
        x = (x * self.data_std.to(x.device)) + self.data_mean.to(x.device)
        x = (x * self.data_std) + self.data_mean
        return x

    def forward(self, x, noise_level=None, seed=None):
    def forward(self, x, noise_level=None):
        if noise_level is None:
            noise_level = torch.randint(0, self.max_noise_level, (x.shape[0],), device=x.device).long()
        else:
            assert isinstance(noise_level, torch.Tensor)
        x = self.scale(x)
        z = self.q_sample(x, noise_level, seed=seed)
        z = self.q_sample(x, noise_level)
        z = self.unscale(z)
        noise_level = self.time_embed(noise_level)
        return z, noise_level
@@ -61,7 +61,6 @@ def _summarize_chunk(
    value: Tensor,
    scale: float,
    upcast_attention: bool,
    mask,
) -> AttnChunk:
    if upcast_attention:
        with torch.autocast(enabled=False, device_type = 'cuda'):
@@ -85,8 +84,6 @@ def _summarize_chunk(
    max_score, _ = torch.max(attn_weights, -1, keepdim=True)
    max_score = max_score.detach()
    attn_weights -= max_score
    if mask is not None:
        attn_weights += mask
    torch.exp(attn_weights, out=attn_weights)
    exp_weights = attn_weights.to(value.dtype)
    exp_values = torch.bmm(exp_weights, value)
@@ -99,12 +96,11 @@ def _query_chunk_attention(
    value: Tensor,
    summarize_chunk: SummarizeChunk,
    kv_chunk_size: int,
    mask,
) -> Tensor:
    batch_x_heads, k_channels_per_head, k_tokens = key_t.shape
    _, _, v_channels_per_head = value.shape

    def chunk_scanner(chunk_idx: int, mask) -> AttnChunk:
    def chunk_scanner(chunk_idx: int) -> AttnChunk:
        key_chunk = dynamic_slice(
            key_t,
            (0, 0, chunk_idx),
@@ -115,13 +111,10 @@ def _query_chunk_attention(
            (0, chunk_idx, 0),
            (batch_x_heads, kv_chunk_size, v_channels_per_head)
        )
        if mask is not None:
            mask = mask[:,:,chunk_idx:chunk_idx + kv_chunk_size]

        return summarize_chunk(query, key_chunk, value_chunk, mask=mask)
        return summarize_chunk(query, key_chunk, value_chunk)

    chunks: List[AttnChunk] = [
        chunk_scanner(chunk, mask) for chunk in torch.arange(0, k_tokens, kv_chunk_size)
        chunk_scanner(chunk) for chunk in torch.arange(0, k_tokens, kv_chunk_size)
    ]
    acc_chunk = AttnChunk(*map(torch.stack, zip(*chunks)))
    chunk_values, chunk_weights, chunk_max = acc_chunk
@@ -142,7 +135,6 @@ def _get_attention_scores_no_kv_chunking(
    value: Tensor,
    scale: float,
    upcast_attention: bool,
    mask,
) -> Tensor:
    if upcast_attention:
        with torch.autocast(enabled=False, device_type = 'cuda'):
@@ -164,8 +156,6 @@ def _get_attention_scores_no_kv_chunking(
        beta=0,
    )

    if mask is not None:
        attn_scores += mask
    try:
        attn_probs = attn_scores.softmax(dim=-1)
        del attn_scores
@@ -193,7 +183,6 @@ def efficient_dot_product_attention(
    kv_chunk_size_min: Optional[int] = None,
    use_checkpoint=True,
    upcast_attention=False,
    mask = None,
):
    """Computes efficient dot-product attention given query, transposed key, and value.
    This is efficient version of attention presented in
@@ -220,22 +209,13 @@ def efficient_dot_product_attention(
    if kv_chunk_size_min is not None:
        kv_chunk_size = max(kv_chunk_size, kv_chunk_size_min)

    if mask is not None and len(mask.shape) == 2:
        mask = mask.unsqueeze(0)

    def get_query_chunk(chunk_idx: int) -> Tensor:
        return dynamic_slice(
            query,
            (0, chunk_idx, 0),
            (batch_x_heads, min(query_chunk_size, q_tokens), q_channels_per_head)
        )

    def get_mask_chunk(chunk_idx: int) -> Tensor:
        if mask is None:
            return None
        chunk = min(query_chunk_size, q_tokens)
        return mask[:,chunk_idx:chunk_idx + chunk]


    summarize_chunk: SummarizeChunk = partial(_summarize_chunk, scale=scale, upcast_attention=upcast_attention)
    summarize_chunk: SummarizeChunk = partial(checkpoint, summarize_chunk) if use_checkpoint else summarize_chunk
    compute_query_chunk_attn: ComputeQueryChunkAttn = partial(
@@ -257,7 +237,6 @@ def efficient_dot_product_attention(
            query=query,
            key_t=key_t,
            value=value,
            mask=mask,
        )

    # TODO: maybe we should use torch.empty_like(query) to allocate storage in-advance,
@@ -267,7 +246,6 @@ def efficient_dot_product_attention(
            query=get_query_chunk(i * query_chunk_size),
            key_t=key_t,
            value=value,
            mask=get_mask_chunk(i * query_chunk_size)
        ) for i in range(math.ceil(q_tokens / query_chunk_size))
    ], dim=1)
    return res
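The changes above thread an additive attention mask through the chunked path: get_mask_chunk slices rows to match the current query chunk, and _query_chunk_attention then slices columns to match each key/value chunk. A small index check of that double slicing with toy sizes (illustrative, not part of the diff):

# illustrative sketch -- not part of the diff
import torch

q_tokens, k_tokens = 8, 12
query_chunk_size, kv_chunk_size = 4, 6
mask = torch.zeros(1, q_tokens, k_tokens)   # additive bias, [batch, q, k]

for qi in range(0, q_tokens, query_chunk_size):
    row_chunk = mask[:, qi:qi + query_chunk_size]          # query rows
    for ki in range(0, k_tokens, kv_chunk_size):
        tile = row_chunk[:, :, ki:ki + kv_chunk_size]      # kv columns
        print(qi, ki, tuple(tile.shape))                   # every tile is (1, 4, 6)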
@@ -82,14 +82,14 @@ class VideoResBlock(ResnetBlock):

        x = self.time_stack(x, temb)

        alpha = self.get_alpha(bs=b // timesteps).to(x.device)
        alpha = self.get_alpha(bs=b // timesteps)
        x = alpha * x + (1.0 - alpha) * x_mix

        x = rearrange(x, "b c t h w -> (b t) c h w")
        return x


class AE3DConv(ops.Conv2d):
class AE3DConv(torch.nn.Conv2d):
    def __init__(self, in_channels, out_channels, video_kernel_size=3, *args, **kwargs):
        super().__init__(in_channels, out_channels, *args, **kwargs)
        if isinstance(video_kernel_size, Iterable):
@@ -97,7 +97,7 @@ class AE3DConv(ops.Conv2d):
        else:
            padding = int(video_kernel_size // 2)

        self.time_mix_conv = ops.Conv3d(
        self.time_mix_conv = torch.nn.Conv3d(
            in_channels=out_channels,
            out_channels=out_channels,
            kernel_size=video_kernel_size,
@@ -167,7 +167,7 @@ class AttnVideoBlock(AttnBlock):
        emb = emb[:, None, :]
        x_mix = x_mix + emb

        alpha = self.get_alpha().to(x.device)
        alpha = self.get_alpha()
        x_mix = self.time_mix_block(x_mix, timesteps=timesteps)
        x = alpha * x + (1.0 - alpha) * x_mix  # alpha merge
@@ -1,20 +0,0 @@
Copyright (c) 2015 Preferred Infrastructure, Inc.
Copyright (c) 2015 Preferred Networks, Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
@@ -1,674 +0,0 @@
                    GNU GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU General Public License is a free, copyleft license for
software and other kinds of works.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.  By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.  We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors.  You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights.  Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received.  You must make sure that they, too, receive
or can get the source code.  And you must show them these terms so they
know their rights.

  Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

  For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software.  For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

  Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so.  This is fundamentally incompatible with the aim of
protecting users' freedom to change the software.  The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable.  Therefore, we
have designed this version of the GPL to prohibit the practice for those
products.  If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

  Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary.  To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License.  Each licensee is addressed as "you".  "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy.  The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy.  Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies.  Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7.  This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy.  This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged.  This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit.  Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source.  This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge.  You need not require recipients to copy the
    Corresponding Source along with the object code.  If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source.  Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling.  In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage.  For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product.  A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source.  The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information.  But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed.  Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law.  If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it.  (Additional permissions may be written to require their own
removal in certain cases when you modify the work.)  You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
|
||||
those licensors and authors.
|
||||
|
||||
All other non-permissive additional terms are considered "further
|
||||
restrictions" within the meaning of section 10. If the Program as you
|
||||
received it, or any part of it, contains a notice stating that it is
|
||||
governed by this License along with a term that is a further
|
||||
restriction, you may remove that term. If a license document contains
|
||||
a further restriction but permits relicensing or conveying under this
|
||||
License, you may add to a covered work material governed by the terms
|
||||
of that license document, provided that the further restriction does
|
||||
not survive such relicensing or conveying.
|
||||
|
||||
If you add terms to a covered work in accord with this section, you
|
||||
must place, in the relevant source files, a statement of the
|
||||
additional terms that apply to those files, or a notice indicating
|
||||
where to find the applicable terms.
|
||||
|
||||
Additional terms, permissive or non-permissive, may be stated in the
|
||||
form of a separately written license, or stated as exceptions;
|
||||
the above requirements apply either way.
|
||||
|
||||
8. Termination.
|
||||
|
||||
You may not propagate or modify a covered work except as expressly
|
||||
provided under this License. Any attempt otherwise to propagate or
|
||||
modify it is void, and will automatically terminate your rights under
|
||||
this License (including any patent licenses granted under the third
|
||||
paragraph of section 11).
|
||||
|
||||
However, if you cease all violation of this License, then your
|
||||
license from a particular copyright holder is reinstated (a)
|
||||
provisionally, unless and until the copyright holder explicitly and
|
||||
finally terminates your license, and (b) permanently, if the copyright
|
||||
holder fails to notify you of the violation by some reasonable means
|
||||
prior to 60 days after the cessation.
|
||||
|
||||
Moreover, your license from a particular copyright holder is
|
||||
reinstated permanently if the copyright holder notifies you of the
|
||||
violation by some reasonable means, this is the first time you have
|
||||
received notice of violation of this License (for any work) from that
|
||||
copyright holder, and you cure the violation prior to 30 days after
|
||||
your receipt of the notice.
|
||||
|
||||
Termination of your rights under this section does not terminate the
|
||||
licenses of parties who have received copies or rights from you under
|
||||
this License. If your rights have been terminated and not permanently
|
||||
reinstated, you do not qualify to receive new licenses for the same
|
||||
material under section 10.
|
||||
|
||||
9. Acceptance Not Required for Having Copies.
|
||||
|
||||
You are not required to accept this License in order to receive or
|
||||
run a copy of the Program. Ancillary propagation of a covered work
|
||||
occurring solely as a consequence of using peer-to-peer transmission
|
||||
to receive a copy likewise does not require acceptance. However,
|
||||
nothing other than this License grants you permission to propagate or
|
||||
modify any covered work. These actions infringe copyright if you do
|
||||
not accept this License. Therefore, by modifying or propagating a
|
||||
covered work, you indicate your acceptance of this License to do so.
|
||||
|
||||
10. Automatic Licensing of Downstream Recipients.
|
||||
|
||||
Each time you convey a covered work, the recipient automatically
|
||||
receives a license from the original licensors, to run, modify and
|
||||
propagate that work, subject to this License. You are not responsible
|
||||
for enforcing compliance by third parties with this License.
|
||||
|
||||
An "entity transaction" is a transaction transferring control of an
|
||||
organization, or substantially all assets of one, or subdividing an
|
||||
organization, or merging organizations. If propagation of a covered
|
||||
work results from an entity transaction, each party to that
|
||||
transaction who receives a copy of the work also receives whatever
|
||||
licenses to the work the party's predecessor in interest had or could
|
||||
give under the previous paragraph, plus a right to possession of the
|
||||
Corresponding Source of the work from the predecessor in interest, if
|
||||
the predecessor has it or can get it with reasonable efforts.
|
||||
|
||||
You may not impose any further restrictions on the exercise of the
|
||||
rights granted or affirmed under this License. For example, you may
|
||||
not impose a license fee, royalty, or other charge for exercise of
|
||||
rights granted under this License, and you may not initiate litigation
|
||||
(including a cross-claim or counterclaim in a lawsuit) alleging that
|
||||
any patent claim is infringed by making, using, selling, offering for
|
||||
sale, or importing the Program or any portion of it.
|
||||
|
||||
11. Patents.
|
||||
|
||||
A "contributor" is a copyright holder who authorizes use under this
|
||||
License of the Program or a work on which the Program is based. The
|
||||
work thus licensed is called the contributor's "contributor version".
|
||||
|
||||
A contributor's "essential patent claims" are all patent claims
|
||||
owned or controlled by the contributor, whether already acquired or
|
||||
hereafter acquired, that would be infringed by some manner, permitted
|
||||
by this License, of making, using, or selling its contributor version,
|
||||
but do not include claims that would be infringed only as a
|
||||
consequence of further modification of the contributor version. For
|
||||
purposes of this definition, "control" includes the right to grant
|
||||
patent sublicenses in a manner consistent with the requirements of
|
||||
this License.
|
||||
|
||||
Each contributor grants you a non-exclusive, worldwide, royalty-free
|
||||
patent license under the contributor's essential patent claims, to
|
||||
make, use, sell, offer for sale, import and otherwise run, modify and
|
||||
propagate the contents of its contributor version.
|
||||
|
||||
In the following three paragraphs, a "patent license" is any express
|
||||
agreement or commitment, however denominated, not to enforce a patent
|
||||
(such as an express permission to practice a patent or covenant not to
|
||||
sue for patent infringement). To "grant" such a patent license to a
|
||||
party means to make such an agreement or commitment not to enforce a
|
||||
patent against the party.
|
||||
|
||||
If you convey a covered work, knowingly relying on a patent license,
|
||||
and the Corresponding Source of the work is not available for anyone
|
||||
to copy, free of charge and under the terms of this License, through a
|
||||
publicly available network server or other readily accessible means,
|
||||
then you must either (1) cause the Corresponding Source to be so
|
||||
available, or (2) arrange to deprive yourself of the benefit of the
|
||||
patent license for this particular work, or (3) arrange, in a manner
|
||||
consistent with the requirements of this License, to extend the patent
|
||||
license to downstream recipients. "Knowingly relying" means you have
|
||||
actual knowledge that, but for the patent license, your conveying the
|
||||
covered work in a country, or your recipient's use of the covered work
|
||||
in a country, would infringe one or more identifiable patents in that
|
||||
country that you have reason to believe are valid.
|
||||
|
||||
If, pursuant to or in connection with a single transaction or
|
||||
arrangement, you convey, or propagate by procuring conveyance of, a
|
||||
covered work, and grant a patent license to some of the parties
|
||||
receiving the covered work authorizing them to use, propagate, modify
|
||||
or convey a specific copy of the covered work, then the patent license
|
||||
you grant is automatically extended to all recipients of the covered
|
||||
work and works based on it.
|
||||
|
||||
A patent license is "discriminatory" if it does not include within
|
||||
the scope of its coverage, prohibits the exercise of, or is
|
||||
conditioned on the non-exercise of one or more of the rights that are
|
||||
specifically granted under this License. You may not convey a covered
|
||||
work if you are a party to an arrangement with a third party that is
|
||||
in the business of distributing software, under which you make payment
|
||||
to the third party based on the extent of your activity of conveying
|
||||
the work, and under which the third party grants, to any of the
|
||||
parties who would receive the covered work from you, a discriminatory
|
||||
patent license (a) in connection with copies of the covered work
|
||||
conveyed by you (or copies made from those copies), or (b) primarily
|
||||
for and in connection with specific products or compilations that
|
||||
contain the covered work, unless you entered into that arrangement,
|
||||
or that patent license was granted, prior to 28 March 2007.
|
||||
|
||||
Nothing in this License shall be construed as excluding or limiting
|
||||
any implied license or other defenses to infringement that may
|
||||
otherwise be available to you under applicable patent law.
|
||||
|
||||
12. No Surrender of Others' Freedom.
|
||||
|
||||
If conditions are imposed on you (whether by court order, agreement or
|
||||
otherwise) that contradict the conditions of this License, they do not
|
||||
excuse you from the conditions of this License. If you cannot convey a
|
||||
covered work so as to satisfy simultaneously your obligations under this
|
||||
License and any other pertinent obligations, then as a consequence you may
|
||||
not convey it at all. For example, if you agree to terms that obligate you
|
||||
to collect a royalty for further conveying from those to whom you convey
|
||||
the Program, the only way you could satisfy both those terms and this
|
||||
License would be to refrain entirely from conveying the Program.
|
||||
|
||||
13. Use with the GNU Affero General Public License.
|
||||
|
||||
Notwithstanding any other provision of this License, you have
|
||||
permission to link or combine any covered work with a work licensed
|
||||
under version 3 of the GNU Affero General Public License into a single
|
||||
combined work, and to convey the resulting work. The terms of this
|
||||
License will continue to apply to the part which is the covered work,
|
||||
but the special requirements of the GNU Affero General Public License,
|
||||
section 13, concerning interaction through a network will apply to the
|
||||
combination as such.
|
||||
|
||||
14. Revised Versions of this License.
|
||||
|
||||
The Free Software Foundation may publish revised and/or new versions of
|
||||
the GNU General Public License from time to time. Such new versions will
|
||||
be similar in spirit to the present version, but may differ in detail to
|
||||
address new problems or concerns.
|
||||
|
||||
Each version is given a distinguishing version number. If the
|
||||
Program specifies that a certain numbered version of the GNU General
|
||||
Public License "or any later version" applies to it, you have the
|
||||
option of following the terms and conditions either of that numbered
|
||||
version or of any later version published by the Free Software
|
||||
Foundation. If the Program does not specify a version number of the
|
||||
GNU General Public License, you may choose any version ever published
|
||||
by the Free Software Foundation.
|
||||
|
||||
If the Program specifies that a proxy can decide which future
|
||||
versions of the GNU General Public License can be used, that proxy's
|
||||
public statement of acceptance of a version permanently authorizes you
|
||||
to choose that version for the Program.
|
||||
|
||||
Later license versions may give you additional or different
|
||||
permissions. However, no additional obligations are imposed on any
|
||||
author or copyright holder as a result of your choosing to follow a
|
||||
later version.
|
||||
|
||||
15. Disclaimer of Warranty.
|
||||
|
||||
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
|
||||
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
|
||||
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
|
||||
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
|
||||
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
|
||||
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
|
||||
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
|
||||
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
||||
|
||||
16. Limitation of Liability.
|
||||
|
||||
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
||||
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
|
||||
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
|
||||
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
|
||||
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
|
||||
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
|
||||
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
|
||||
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
|
||||
SUCH DAMAGES.
|
||||
|
||||
17. Interpretation of Sections 15 and 16.
|
||||
|
||||
If the disclaimer of warranty and limitation of liability provided
|
||||
above cannot be given local legal effect according to their terms,
|
||||
reviewing courts shall apply local law that most closely approximates
|
||||
an absolute waiver of all civil liability in connection with the
|
||||
Program, unless a warranty or assumption of liability accompanies a
|
||||
copy of the Program in return for a fee.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
How to Apply These Terms to Your New Programs
|
||||
|
||||
If you develop a new program, and you want it to be of the greatest
|
||||
possible use to the public, the best way to achieve this is to make it
|
||||
free software which everyone can redistribute and change under these terms.
|
||||
|
||||
To do so, attach the following notices to the program. It is safest
|
||||
to attach them to the start of each source file to most effectively
|
||||
state the exclusion of warranty; and each file should have at least
|
||||
the "copyright" line and a pointer to where the full notice is found.
|
||||
|
||||
<one line to give the program's name and a brief idea of what it does.>
|
||||
Copyright (C) <year> <name of author>
|
||||
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU General Public License as published by
|
||||
the Free Software Foundation, either version 3 of the License, or
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU General Public License
|
||||
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
Also add information on how to contact you by electronic and paper mail.
|
||||
|
||||
If the program does terminal interaction, make it output a short
|
||||
notice like this when it starts in an interactive mode:
|
||||
|
||||
<program> Copyright (C) <year> <name of author>
|
||||
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
|
||||
This is free software, and you are welcome to redistribute it
|
||||
under certain conditions; type `show c' for details.
|
||||
|
||||
The hypothetical commands `show w' and `show c' should show the appropriate
|
||||
parts of the General Public License. Of course, your program's commands
|
||||
might be different; for a GUI interface, you would use an "about box".
|
||||
|
||||
You should also get your employer (if you work as a programmer) or school,
|
||||
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
||||
For more information on this, and how to apply and follow the GNU GPL, see
|
||||
<https://www.gnu.org/licenses/>.
|
||||
|
||||
The GNU General Public License does not permit incorporating your program
|
||||
into proprietary programs. If your program is a subroutine library, you
|
||||
may consider it more useful to permit linking proprietary applications with
|
||||
the library. If this is what you want to do, use the GNU Lesser General
|
||||
Public License instead of this License. But first, please read
|
||||
<https://www.gnu.org/licenses/why-not-lgpl.html>.
|
@ -1,201 +0,0 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
@ -1,19 +0,0 @@
Copyright (c) 2022 Katherine Crowson

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
@ -1,21 +0,0 @@
MIT License

Copyright (c) 2022 Machine Vision and Learning Group, LMU Munich

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@ -1,21 +0,0 @@
MIT License

Copyright (c) 2023 Ollin Boer Bohan

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@ -1,203 +0,0 @@
Copyright 2018- The Hugging Face team. All rights reserved.

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
@ -66,8 +66,6 @@ fpvae_group.add_argument("--vae-in-fp16", action="store_true")
fpvae_group.add_argument("--vae-in-fp32", action="store_true")
fpvae_group.add_argument("--vae-in-bf16", action="store_true")

parser.add_argument("--vae-in-cpu", action="store_true")

fpte_group = parser.add_mutually_exclusive_group()
fpte_group.add_argument("--clip-in-fp8-e4m3fn", action="store_true")
fpte_group.add_argument("--clip-in-fp8-e5m2", action="store_true")
@ -100,7 +98,8 @@ vram_group.add_argument("--always-high-vram", action="store_true")
vram_group.add_argument("--always-normal-vram", action="store_true")
vram_group.add_argument("--always-low-vram", action="store_true")
vram_group.add_argument("--always-no-vram", action="store_true")
vram_group.add_argument("--always-cpu", type=int, nargs="?", metavar="CPU_NUM_THREADS", const=-1)
vram_group.add_argument("--always-cpu", action="store_true")


parser.add_argument("--always-offload-from-vram", action="store_true")
parser.add_argument("--pytorch-deterministic", action="store_true")
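
The two `--always-cpu` signatures above differ in how the flag parses: `action="store_true"` yields a plain boolean, while the `type=int, nargs="?", const=-1` form also accepts an optional thread count. A minimal standalone sketch of the nargs="?" semantics (stdlib argparse only, not Fooocus code):

    import argparse

    parser = argparse.ArgumentParser()
    # nargs="?" makes the value optional: const=-1 is stored when the flag
    # is given without a value, and None is stored when the flag is absent.
    parser.add_argument("--always-cpu", type=int, nargs="?", metavar="CPU_NUM_THREADS", const=-1)

    print(parser.parse_args([]).always_cpu)                     # None
    print(parser.parse_args(["--always-cpu"]).always_cpu)       # -1
    print(parser.parse_args(["--always-cpu", "8"]).always_cpu)  # 8
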
@ -111,8 +110,6 @@ parser.add_argument("--is-windows-embedded-python", action="store_true")

parser.add_argument("--disable-server-info", action="store_true")

parser.add_argument("--multi-user", action="store_true")

if ldm_patched.modules.options.args_parsing:
    args = parser.parse_args([])
else:
@ -57,7 +57,7 @@ class CLIPEncoder(torch.nn.Module):
        self.layers = torch.nn.ModuleList([CLIPLayer(embed_dim, heads, intermediate_size, intermediate_activation, dtype, device, operations) for i in range(num_layers)])

    def forward(self, x, mask=None, intermediate_output=None):
        optimized_attention = optimized_attention_for_device(x.device, mask=mask is not None, small_input=True)
        optimized_attention = optimized_attention_for_device(x.device, mask=mask is not None)

        if intermediate_output is not None:
            if intermediate_output < 0:
@ -151,7 +151,7 @@ class CLIPVisionEmbeddings(torch.nn.Module):

    def forward(self, pixel_values):
        embeds = self.patch_embedding(pixel_values).flatten(2).transpose(1, 2)
        return torch.cat([self.class_embedding.to(embeds.device).expand(pixel_values.shape[0], 1, -1), embeds], dim=1) + self.position_embedding.weight.to(embeds.device)
        return torch.cat([self.class_embedding.expand(pixel_values.shape[0], 1, -1), embeds], dim=1) + self.position_embedding.weight


class CLIPVision(torch.nn.Module):
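
The only difference between the two `return` variants above is the explicit `.to(embeds.device)` moves, which guard against the class/position embeddings sitting on a different device than the patch embeddings (e.g. under CPU offloading). A standalone sketch of the prepend-class-token pattern, with assumed shapes rather than the module's actual parameters:

    import torch

    embeds = torch.randn(2, 256, 1280)   # patch embeddings: (batch, tokens, dim)
    class_embedding = torch.randn(1280)  # may live on another device in practice

    # move to the patch embeddings' device, then broadcast over the batch
    tok = class_embedding.to(embeds.device).expand(embeds.shape[0], 1, -1)
    out = torch.cat([tok, embeds], dim=1)  # (2, 257, 1280): class token prepended
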
@ -1,6 +1,7 @@
from .utils import load_torch_file, transformers_convert, state_dict_prefix_replace
from .utils import load_torch_file, transformers_convert, common_upscale
import os
import torch
import contextlib
import json

import ldm_patched.modules.ops
@ -40,13 +41,9 @@ class ClipVisionModel():
        self.model.eval()

        self.patcher = ldm_patched.modules.model_patcher.ModelPatcher(self.model, load_device=self.load_device, offload_device=offload_device)

    def load_sd(self, sd):
        return self.model.load_state_dict(sd, strict=False)

    def get_sd(self):
        return self.model.state_dict()

    def encode_image(self, image):
        ldm_patched.modules.model_management.load_model_gpu(self.patcher)
        pixel_values = clip_preprocess(image.to(self.load_device)).float()
@ -79,9 +76,6 @@ def convert_to_transformers(sd, prefix):
        sd['visual_projection.weight'] = sd.pop("{}proj".format(prefix)).transpose(0, 1)

        sd = transformers_convert(sd, prefix, "vision_model.", 48)
    else:
        replace_prefix = {prefix: ""}
        sd = state_dict_prefix_replace(sd, replace_prefix)
    return sd

def load_clipvision_from_sd(sd, prefix="", convert_keys=False):
@ -1,8 +1,11 @@
import enum
import torch
import math
import ldm_patched.modules.utils


def lcm(a, b): #TODO: eventually replace by math.lcm (added in python3.9)
    return abs(a*b) // math.gcd(a, b)

class CONDRegular:
    def __init__(self, cond):
@ -39,7 +42,7 @@ class CONDCrossAttn(CONDRegular):
        if s1[0] != s2[0] or s1[2] != s2[2]: #these 2 cases should not happen
            return False

        mult_min = math.lcm(s1[1], s2[1])
        mult_min = lcm(s1[1], s2[1])
        diff = mult_min // min(s1[1], s2[1])
        if diff > 4: #arbitrary limit on the padding because it's probably going to impact performance negatively if it's too much
            return False
@ -50,7 +53,7 @@ class CONDCrossAttn(CONDRegular):
        crossattn_max_len = self.cond.shape[1]
        for x in others:
            c = x.cond
            crossattn_max_len = math.lcm(crossattn_max_len, c.shape[1])
            crossattn_max_len = lcm(crossattn_max_len, c.shape[1])
            conds.append(c)

        out = []
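
Context for the `lcm` swaps above: `math.lcm` only exists on Python >= 3.9, so one side of the diff carries a local helper built on the gcd identity lcm(a, b) = |a*b| // gcd(a, b). A quick standalone check of the equivalence (illustrative, not part of the diff):

    import math

    def lcm(a, b):
        # least common multiple via the gcd identity
        return abs(a * b) // math.gcd(a, b)

    assert lcm(4, 6) == 12
    assert lcm(7, 13) == 91
    # on Python >= 3.9 this matches the builtin:
    # assert math.lcm(4, 6) == lcm(4, 6)
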
@ -1,6 +1,7 @@
import torch
import math
import os
import contextlib
import ldm_patched.modules.utils
import ldm_patched.modules.model_management
import ldm_patched.modules.model_detection
@ -125,10 +126,7 @@ class ControlBase:
                        if o[i] is None:
                            o[i] = prev_val
                        else:
                            if o[i].shape[0] < prev_val.shape[0]:
                                o[i] = prev_val + o[i]
                            else:
                                o[i] += prev_val
                            o[i] += prev_val
        return out

class ControlNet(ControlBase):
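
The guarded variant above exists because in-place `+=` cannot change the left operand's shape: when `prev_val` has a larger batch than `o[i]`, only the out-of-place `prev_val + o[i]` can broadcast. A standalone illustration with assumed shapes (not Fooocus code):

    import torch

    o = torch.zeros(1, 4)    # accumulated control output, batch 1
    prev = torch.ones(2, 4)  # previous controlnet's output, batch 2

    o = prev + o             # out-of-place: broadcasts to shape (2, 4)
    # o += prev              # in-place: RuntimeError, (1, 4) cannot grow in place
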
@ -285,7 +283,7 @@ class ControlLora(ControlNet):
        cm = self.control_model.state_dict()

        for k in sd:
            weight = sd[k]
            weight = ldm_patched.modules.model_management.resolve_lowvram_weight(sd[k], diffusion_model, k)
            try:
                ldm_patched.modules.utils.set_attr(self.control_model, k, weight)
            except:
@ -1,3 +1,4 @@
import json
import os

import ldm_patched.modules.sd
@ -1,5 +1,5 @@
import torch
from torch import nn
from torch import nn, einsum
from ldm_patched.ldm.modules.attention import CrossAttention
from inspect import isfunction

@ -33,7 +33,3 @@ class SDXL(LatentFormat):
            [-0.3112, -0.2359, -0.2076]
        ]
        self.taesd_decoder_name = "taesdxl_decoder"

class SD_X4(LatentFormat):
    def __init__(self):
        self.scale_factor = 0.08333
@ -1,11 +1,12 @@
import torch
from ldm_patched.ldm.modules.diffusionmodules.openaimodel import UNetModel, Timestep
from ldm_patched.ldm.modules.diffusionmodules.openaimodel import UNetModel
from ldm_patched.ldm.modules.encoders.noise_aug_modules import CLIPEmbeddingNoiseAugmentation
from ldm_patched.ldm.modules.diffusionmodules.upscaling import ImageConcatWithNoiseAugmentation
from ldm_patched.ldm.modules.diffusionmodules.openaimodel import Timestep
import ldm_patched.modules.model_management
import ldm_patched.modules.conds
import ldm_patched.modules.ops
from enum import Enum
import contextlib
from . import utils

class ModelType(Enum):
@ -77,9 +78,8 @@ class BaseModel(torch.nn.Module):
        extra_conds = {}
        for o in kwargs:
            extra = kwargs[o]
            if hasattr(extra, "dtype"):
                if extra.dtype != torch.int and extra.dtype != torch.long:
                    extra = extra.to(dtype)
            if hasattr(extra, "to"):
                extra = extra.to(dtype)
            extra_conds[o] = extra

        model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
@ -99,29 +99,11 @@ class BaseModel(torch.nn.Module):
        if self.inpaint_model:
            concat_keys = ("mask", "masked_image")
            cond_concat = []
            denoise_mask = kwargs.get("concat_mask", kwargs.get("denoise_mask", None))
            concat_latent_image = kwargs.get("concat_latent_image", None)
            if concat_latent_image is None:
                concat_latent_image = kwargs.get("latent_image", None)
            else:
                concat_latent_image = self.process_latent_in(concat_latent_image)

            denoise_mask = kwargs.get("denoise_mask", None)
            latent_image = kwargs.get("latent_image", None)
            noise = kwargs.get("noise", None)
            device = kwargs["device"]

            if concat_latent_image.shape[1:] != noise.shape[1:]:
                concat_latent_image = utils.common_upscale(concat_latent_image, noise.shape[-1], noise.shape[-2], "bilinear", "center")

            concat_latent_image = utils.resize_to_batch_size(concat_latent_image, noise.shape[0])

            if len(denoise_mask.shape) == len(noise.shape):
                denoise_mask = denoise_mask[:,:1]

            denoise_mask = denoise_mask.reshape((-1, 1, denoise_mask.shape[-2], denoise_mask.shape[-1]))
            if denoise_mask.shape[-2:] != noise.shape[-2:]:
                denoise_mask = utils.common_upscale(denoise_mask, noise.shape[-1], noise.shape[-2], "bilinear", "center")
            denoise_mask = utils.resize_to_batch_size(denoise_mask.round(), noise.shape[0])

            def blank_inpaint_image_like(latent_image):
                blank_image = torch.ones_like(latent_image)
                # these are the values for "zero" in pixel space translated to latent space
@ -134,9 +116,9 @@ class BaseModel(torch.nn.Module):
|
||||
for ck in concat_keys:
|
||||
if denoise_mask is not None:
|
||||
if ck == "mask":
|
||||
cond_concat.append(denoise_mask.to(device))
|
||||
cond_concat.append(denoise_mask[:,:1].to(device))
|
||||
elif ck == "masked_image":
|
||||
cond_concat.append(concat_latent_image.to(device)) #NOTE: the latent_image should be masked by the mask in pixel space
|
||||
cond_concat.append(latent_image.to(device)) #NOTE: the latent_image should be masked by the mask in pixel space
|
||||
else:
|
||||
if ck == "mask":
|
||||
cond_concat.append(torch.ones_like(noise)[:,:1])
|
||||
@ -144,15 +126,9 @@ class BaseModel(torch.nn.Module):
|
||||
cond_concat.append(blank_inpaint_image_like(noise))
|
||||
data = torch.cat(cond_concat, dim=1)
|
||||
out['c_concat'] = ldm_patched.modules.conds.CONDNoiseShape(data)
|
||||
|
||||
adm = self.encode_adm(**kwargs)
|
||||
if adm is not None:
|
||||
out['y'] = ldm_patched.modules.conds.CONDRegular(adm)
|
||||
|
||||
cross_attn = kwargs.get("cross_attn", None)
|
||||
if cross_attn is not None:
|
||||
out['c_crossattn'] = ldm_patched.modules.conds.CONDCrossAttn(cross_attn)
|
||||
|
||||
return out
|
||||
|
||||
def load_model_weights(self, sd, unet_prefix=""):
|
||||
@ -178,28 +154,23 @@ class BaseModel(torch.nn.Module):
|
||||
def process_latent_out(self, latent):
|
||||
return self.latent_format.process_out(latent)
|
||||
|
||||
def state_dict_for_saving(self, clip_state_dict=None, vae_state_dict=None, clip_vision_state_dict=None):
|
||||
extra_sds = []
|
||||
if clip_state_dict is not None:
|
||||
extra_sds.append(self.model_config.process_clip_state_dict_for_saving(clip_state_dict))
|
||||
if vae_state_dict is not None:
|
||||
extra_sds.append(self.model_config.process_vae_state_dict_for_saving(vae_state_dict))
|
||||
if clip_vision_state_dict is not None:
|
||||
extra_sds.append(self.model_config.process_clip_vision_state_dict_for_saving(clip_vision_state_dict))
|
||||
def state_dict_for_saving(self, clip_state_dict, vae_state_dict):
|
||||
clip_state_dict = self.model_config.process_clip_state_dict_for_saving(clip_state_dict)
|
||||
unet_sd = self.diffusion_model.state_dict()
|
||||
unet_state_dict = {}
|
||||
for k in unet_sd:
|
||||
unet_state_dict[k] = ldm_patched.modules.model_management.resolve_lowvram_weight(unet_sd[k], self.diffusion_model, k)
|
||||
|
||||
unet_state_dict = self.diffusion_model.state_dict()
|
||||
unet_state_dict = self.model_config.process_unet_state_dict_for_saving(unet_state_dict)
|
||||
|
||||
vae_state_dict = self.model_config.process_vae_state_dict_for_saving(vae_state_dict)
|
||||
if self.get_dtype() == torch.float16:
|
||||
extra_sds = map(lambda sd: utils.convert_sd_to(sd, torch.float16), extra_sds)
|
||||
clip_state_dict = utils.convert_sd_to(clip_state_dict, torch.float16)
|
||||
vae_state_dict = utils.convert_sd_to(vae_state_dict, torch.float16)
|
||||
|
||||
if self.model_type == ModelType.V_PREDICTION:
|
||||
unet_state_dict["v_pred"] = torch.tensor([])
|
||||
|
||||
for sd in extra_sds:
|
||||
unet_state_dict.update(sd)
|
||||
|
||||
return unet_state_dict
|
||||
return {**unet_state_dict, **vae_state_dict, **clip_state_dict}
|
||||
|
||||
def set_inpaint(self):
|
||||
self.inpaint_model = True
|
||||
@ -218,7 +189,7 @@ class BaseModel(torch.nn.Module):
|
||||
return (((area * 0.6) / 0.9) + 1024) * (1024 * 1024)
|
||||
|
||||
|
||||
def unclip_adm(unclip_conditioning, device, noise_augmentor, noise_augment_merge=0.0, seed=None):
|
||||
def unclip_adm(unclip_conditioning, device, noise_augmentor, noise_augment_merge=0.0):
|
||||
adm_inputs = []
|
||||
weights = []
|
||||
noise_aug = []
|
||||
@ -227,7 +198,7 @@ def unclip_adm(unclip_conditioning, device, noise_augmentor, noise_augment_merge
|
||||
weight = unclip_cond["strength"]
|
||||
noise_augment = unclip_cond["noise_augmentation"]
|
||||
noise_level = round((noise_augmentor.max_noise_level - 1) * noise_augment)
|
||||
c_adm, noise_level_emb = noise_augmentor(adm_cond.to(device), noise_level=torch.tensor([noise_level], device=device), seed=seed)
|
||||
c_adm, noise_level_emb = noise_augmentor(adm_cond.to(device), noise_level=torch.tensor([noise_level], device=device))
|
||||
adm_out = torch.cat((c_adm, noise_level_emb), 1) * weight
|
||||
weights.append(weight)
|
||||
noise_aug.append(noise_augment)
|
||||
@ -253,11 +224,11 @@ class SD21UNCLIP(BaseModel):
|
||||
if unclip_conditioning is None:
|
||||
return torch.zeros((1, self.adm_channels))
|
||||
else:
|
||||
return unclip_adm(unclip_conditioning, device, self.noise_augmentor, kwargs.get("unclip_noise_augment_merge", 0.05), kwargs.get("seed", 0) - 10)
|
||||
return unclip_adm(unclip_conditioning, device, self.noise_augmentor, kwargs.get("unclip_noise_augment_merge", 0.05))
|
||||
|
||||
def sdxl_pooled(args, noise_augmentor):
|
||||
if "unclip_conditioning" in args:
|
||||
return unclip_adm(args.get("unclip_conditioning", None), args["device"], noise_augmentor, seed=args.get("seed", 0) - 10)[:,:1280]
|
||||
return unclip_adm(args.get("unclip_conditioning", None), args["device"], noise_augmentor)[:,:1280]
|
||||
else:
|
||||
return args["pooled_output"]
|
||||
|
||||
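The state_dict_for_saving hunk above boils down to re-prefixing each component's state dict and merging everything into one flat checkpoint dict. A minimal sketch under that assumption; convert_sd_to and the exact prefixes are illustrative stand-ins, not the project's verified API:

import torch

def convert_sd_to(sd, dtype):
    # cast every tensor in a state dict to the target dtype
    return {k: v.to(dtype) if torch.is_tensor(v) else v for k, v in sd.items()}

def merge_for_saving(unet_sd, clip_sd, vae_sd, fp16=True):
    out = {}
    # "model.diffusion_model." and "cond_stage_model." appear in this diff;
    # "first_stage_model." is assumed here for the VAE
    for prefix, sd in (("model.diffusion_model.", unet_sd),
                       ("cond_stage_model.", clip_sd),
                       ("first_stage_model.", vae_sd)):
        sd = {prefix + k: v for k, v in sd.items()}
        if fp16:
            sd = convert_sd_to(sd, torch.float16)
        out.update(sd)
    return out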
@@ -351,75 +322,9 @@ class SVD_img2vid(BaseModel):

out['c_concat'] = ldm_patched.modules.conds.CONDNoiseShape(latent_image)

cross_attn = kwargs.get("cross_attn", None)
if cross_attn is not None:
out['c_crossattn'] = ldm_patched.modules.conds.CONDCrossAttn(cross_attn)

if "time_conditioning" in kwargs:
out["time_context"] = ldm_patched.modules.conds.CONDCrossAttn(kwargs["time_conditioning"])

out['image_only_indicator'] = ldm_patched.modules.conds.CONDConstant(torch.zeros((1,), device=device))
out['num_video_frames'] = ldm_patched.modules.conds.CONDConstant(noise.shape[0])
return out

class Stable_Zero123(BaseModel):
def __init__(self, model_config, model_type=ModelType.EPS, device=None, cc_projection_weight=None, cc_projection_bias=None):
super().__init__(model_config, model_type, device=device)
self.cc_projection = ldm_patched.modules.ops.manual_cast.Linear(cc_projection_weight.shape[1], cc_projection_weight.shape[0], dtype=self.get_dtype(), device=device)
self.cc_projection.weight.copy_(cc_projection_weight)
self.cc_projection.bias.copy_(cc_projection_bias)

def extra_conds(self, **kwargs):
out = {}

latent_image = kwargs.get("concat_latent_image", None)
noise = kwargs.get("noise", None)

if latent_image is None:
latent_image = torch.zeros_like(noise)

if latent_image.shape[1:] != noise.shape[1:]:
latent_image = utils.common_upscale(latent_image, noise.shape[-1], noise.shape[-2], "bilinear", "center")

latent_image = utils.resize_to_batch_size(latent_image, noise.shape[0])

out['c_concat'] = ldm_patched.modules.conds.CONDNoiseShape(latent_image)

cross_attn = kwargs.get("cross_attn", None)
if cross_attn is not None:
if cross_attn.shape[-1] != 768:
cross_attn = self.cc_projection(cross_attn)
out['c_crossattn'] = ldm_patched.modules.conds.CONDCrossAttn(cross_attn)
return out

class SD_X4Upscaler(BaseModel):
def __init__(self, model_config, model_type=ModelType.V_PREDICTION, device=None):
super().__init__(model_config, model_type, device=device)
self.noise_augmentor = ImageConcatWithNoiseAugmentation(noise_schedule_config={"linear_start": 0.0001, "linear_end": 0.02}, max_noise_level=350)

def extra_conds(self, **kwargs):
out = {}

image = kwargs.get("concat_image", None)
noise = kwargs.get("noise", None)
noise_augment = kwargs.get("noise_augmentation", 0.0)
device = kwargs["device"]
seed = kwargs["seed"] - 10

noise_level = round((self.noise_augmentor.max_noise_level) * noise_augment)

if image is None:
image = torch.zeros_like(noise)[:,:3]

if image.shape[1:] != noise.shape[1:]:
image = utils.common_upscale(image.to(device), noise.shape[-1], noise.shape[-2], "bilinear", "center")

noise_level = torch.tensor([noise_level], device=device)
if noise_augment > 0:
image, noise_level = self.noise_augmentor(image.to(device), noise_level=noise_level, seed=seed)

image = utils.resize_to_batch_size(image, noise.shape[0])

out['c_concat'] = ldm_patched.modules.conds.CONDNoiseShape(image)
out['y'] = ldm_patched.modules.conds.CONDRegular(noise_level)
return out
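The extra_conds implementations above all normalize a conditioning tensor against the noise tensor before use. A standalone sketch of that spatial-resize-plus-batch-repeat step, assuming 4D NCHW tensors (common_upscale and resize_to_batch_size are approximated with plain torch ops, so this is not the project's exact behavior):

import torch
import torch.nn.functional as F

def match_cond_to_noise(cond, noise):
    # resize spatially to the noise resolution (channels are left alone)
    if cond.shape[-2:] != noise.shape[-2:]:
        cond = F.interpolate(cond, size=noise.shape[-2:], mode="bilinear", align_corners=False)
    # repeat or truncate along the batch dimension
    reps = -(-noise.shape[0] // cond.shape[0])  # ceiling division
    return cond.repeat(reps, 1, 1, 1)[:noise.shape[0]]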
@@ -34,6 +34,7 @@ def detect_unet_config(state_dict, key_prefix, dtype):
unet_config = {
"use_checkpoint": False,
"image_size": 32,
"out_channels": 4,
"use_spatial_transformer": True,
"legacy": False
}

@@ -49,12 +50,6 @@ def detect_unet_config(state_dict, key_prefix, dtype):
model_channels = state_dict['{}input_blocks.0.0.weight'.format(key_prefix)].shape[0]
in_channels = state_dict['{}input_blocks.0.0.weight'.format(key_prefix)].shape[1]

out_key = '{}out.2.weight'.format(key_prefix)
if out_key in state_dict:
out_channels = state_dict[out_key].shape[0]
else:
out_channels = 4

num_res_blocks = []
channel_mult = []
attention_resolutions = []

@@ -127,7 +122,6 @@ def detect_unet_config(state_dict, key_prefix, dtype):
transformer_depth_middle = -1

unet_config["in_channels"] = in_channels
unet_config["out_channels"] = out_channels
unet_config["model_channels"] = model_channels
unet_config["num_res_blocks"] = num_res_blocks
unet_config["transformer_depth"] = transformer_depth
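For context on the hunks above: detect_unet_config never consults a config file; it reads architecture parameters directly off tensor shapes in the state dict. A tiny self-contained illustration with fabricated shapes:

import torch

state_dict = {
    "input_blocks.0.0.weight": torch.zeros(320, 4, 3, 3),  # (model_channels, in_channels, k, k)
    "out.2.weight": torch.zeros(4, 320, 3, 3),
}
model_channels = state_dict["input_blocks.0.0.weight"].shape[0]  # 320
in_channels = state_dict["input_blocks.0.0.weight"].shape[1]     # 4
out_channels = state_dict["out.2.weight"].shape[0] if "out.2.weight" in state_dict else 4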
@@ -60,9 +60,6 @@ except:
pass

if args.always_cpu:
if args.always_cpu > 0:
torch.set_num_threads(args.always_cpu)
print(f"Running on {torch.get_num_threads()} CPU threads")
cpu_state = CPUState.CPU

def is_intel_xpu():

@@ -178,7 +175,7 @@ try:
if int(torch_version[0]) >= 2:
if ENABLE_PYTORCH_ATTENTION == False and args.attention_split == False and args.attention_quad == False:
ENABLE_PYTORCH_ATTENTION = True
if torch.cuda.is_bf16_supported() and torch.cuda.get_device_properties(torch.cuda.current_device()).major >= 8:
if torch.cuda.is_bf16_supported():
VAE_DTYPE = torch.bfloat16
if is_intel_xpu():
if args.attention_split == False and args.attention_quad == False:

@@ -189,9 +186,6 @@ except:
if is_intel_xpu():
VAE_DTYPE = torch.bfloat16

if args.vae_in_cpu:
VAE_DTYPE = torch.float32

if args.vae_in_fp16:
VAE_DTYPE = torch.float16
elif args.vae_in_bf16:

@@ -224,8 +218,15 @@ if args.all_in_fp16:
FORCE_FP16 = True

if lowvram_available:
if set_vram_to in (VRAMState.LOW_VRAM, VRAMState.NO_VRAM):
vram_state = set_vram_to
try:
import accelerate
if set_vram_to in (VRAMState.LOW_VRAM, VRAMState.NO_VRAM):
vram_state = set_vram_to
except Exception as e:
import traceback
print(traceback.format_exc())
print("ERROR: LOW VRAM MODE NEEDS accelerate.")
lowvram_available = False


if cpu_state != CPUState.GPU:
@@ -265,14 +266,6 @@ print("VAE dtype:", VAE_DTYPE)

current_loaded_models = []

def module_size(module):
module_mem = 0
sd = module.state_dict()
for k in sd:
t = sd[k]
module_mem += t.nelement() * t.element_size()
return module_mem

class LoadedModel:
def __init__(self, model):
self.model = model
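module_size above is the whole memory-accounting story: a tensor occupies nelement() * element_size() bytes, and summing over a module's state dict (parameters plus buffers) gives its footprint. An equivalent one-liner:

import torch

def module_size(module: torch.nn.Module) -> int:
    return sum(t.nelement() * t.element_size() for t in module.state_dict().values())

print(module_size(torch.nn.Linear(1024, 1024)))  # (1024*1024 + 1024) * 4 bytes = 4198400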
@@ -305,20 +298,8 @@ class LoadedModel:

if lowvram_model_memory > 0:
print("loading in lowvram mode", lowvram_model_memory/(1024 * 1024))
mem_counter = 0
for m in self.real_model.modules():
if hasattr(m, "ldm_patched_cast_weights"):
m.prev_ldm_patched_cast_weights = m.ldm_patched_cast_weights
m.ldm_patched_cast_weights = True
module_mem = module_size(m)
if mem_counter + module_mem < lowvram_model_memory:
m.to(self.device)
mem_counter += module_mem
elif hasattr(m, "weight"): #only modules with ldm_patched_cast_weights can be set to lowvram mode
m.to(self.device)
mem_counter += module_size(m)
print("lowvram: loaded module regularly", m)

device_map = accelerate.infer_auto_device_map(self.real_model, max_memory={0: "{}MiB".format(lowvram_model_memory // (1024 * 1024)), "cpu": "16GiB"})
accelerate.dispatch_model(self.real_model, device_map=device_map, main_device=self.device)
self.model_accelerated = True

if is_intel_xpu() and not args.disable_ipex_hijack:
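The replaced loop above implements lowvram loading by hand instead of delegating to accelerate: modules are moved to the GPU greedily until a byte budget runs out, and the rest stay on the offload device for on-demand weight casting. A simplified sketch of the same greedy placement, ignoring the cast-weights flag:

import torch

def load_within_budget(model: torch.nn.Module, device, budget_bytes: int) -> int:
    used = 0
    for m in model.modules():
        if not hasattr(m, "weight"):
            continue  # skip pure containers
        size = sum(t.nelement() * t.element_size() for t in m.state_dict().values())
        if used + size < budget_bytes:
            m.to(device)
            used += size
    return used  # bytes actually placed on `device`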
@@ -328,11 +309,7 @@ class LoadedModel:

def model_unload(self):
if self.model_accelerated:
for m in self.real_model.modules():
if hasattr(m, "prev_ldm_patched_cast_weights"):
m.ldm_patched_cast_weights = m.prev_ldm_patched_cast_weights
del m.prev_ldm_patched_cast_weights

accelerate.hooks.remove_hook_from_submodules(self.real_model)
self.model_accelerated = False

self.model.unpatch_model(self.model.offload_device)

@@ -425,14 +402,14 @@ def load_models_gpu(models, memory_required=0):
if lowvram_available and (vram_set_state == VRAMState.LOW_VRAM or vram_set_state == VRAMState.NORMAL_VRAM):
model_size = loaded_model.model_memory_required(torch_dev)
current_free_mem = get_free_memory(torch_dev)
lowvram_model_memory = int(max(64 * (1024 * 1024), (current_free_mem - 1024 * (1024 * 1024)) / 1.3 ))
lowvram_model_memory = int(max(256 * (1024 * 1024), (current_free_mem - 1024 * (1024 * 1024)) / 1.3 ))
if model_size > (current_free_mem - inference_memory): #only switch to lowvram if really necessary
vram_set_state = VRAMState.LOW_VRAM
else:
lowvram_model_memory = 0

if vram_set_state == VRAMState.NO_VRAM:
lowvram_model_memory = 64 * 1024 * 1024
lowvram_model_memory = 256 * 1024 * 1024

cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
current_loaded_models.insert(0, loaded_model)

@@ -561,8 +538,6 @@ def intermediate_device():
return torch.device("cpu")

def vae_device():
if args.vae_in_cpu:
return torch.device("cpu")
return get_torch_device()

def vae_offload_device():

@@ -591,11 +566,6 @@ def supports_dtype(device, dtype): #TODO
return True
return False

def device_supports_non_blocking(device):
if is_device_mps(device):
return False #pytorch bug? mps doesn't support non blocking
return True

def cast_to_device(tensor, device, dtype, copy=False):
device_supports_cast = False
if tensor.dtype == torch.float32 or tensor.dtype == torch.float16:

@@ -606,7 +576,9 @@ def cast_to_device(tensor, device, dtype, copy=False):
elif is_intel_xpu():
device_supports_cast = True

non_blocking = device_supports_non_blocking(device)
non_blocking = True
if is_device_mps(device):
non_blocking = False #pytorch bug? mps doesn't support non blocking

if device_supports_cast:
if copy:
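The cast_to_device hunk above inlines what device_supports_non_blocking factored out: non-blocking transfers everywhere except MPS. A sketch of the combined behavior (named _sketch to make clear it is not the project function):

import torch

def cast_to_device_sketch(tensor, device, dtype, copy=False):
    non_blocking = device.type != "mps"  # MPS reportedly mishandles non_blocking
    return tensor.to(device, dtype=dtype, non_blocking=non_blocking, copy=copy)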
@@ -770,11 +742,11 @@ def soft_empty_cache(force=False):
torch.cuda.empty_cache()
torch.cuda.ipc_collect()

def unload_all_models():
free_memory(1e30, get_torch_device())


def resolve_lowvram_weight(weight, model, key): #TODO: remove
def resolve_lowvram_weight(weight, model, key):
if weight.device == torch.device("meta"): #lowvram NOTE: this depends on the inner working of the accelerate library so it might break.
key_split = key.split('.') # I have no idea why they don't just leave the weight there instead of using the meta device.
op = ldm_patched.modules.utils.get_attr(model, '.'.join(key_split[:-1]))
weight = op._hf_hook.weights_map[key_split[-1]]
return weight

#TODO: might be cleaner to put this somewhere else
@@ -28,9 +28,13 @@ class ModelPatcher:
if self.size > 0:
return self.size
model_sd = self.model.state_dict()
self.size = ldm_patched.modules.model_management.module_size(self.model)
size = 0
for k in model_sd:
t = model_sd[k]
size += t.nelement() * t.element_size()
self.size = size
self.model_keys = set(model_sd.keys())
return self.size
return size

def clone(self):
n = ModelPatcher(self.model, self.load_device, self.offload_device, self.size, self.current_device, weight_inplace_update=self.weight_inplace_update)

@@ -51,18 +55,14 @@ class ModelPatcher:
def memory_required(self, input_shape):
return self.model.memory_required(input_shape=input_shape)

def set_model_sampler_cfg_function(self, sampler_cfg_function, disable_cfg1_optimization=False):
def set_model_sampler_cfg_function(self, sampler_cfg_function):
if len(inspect.signature(sampler_cfg_function).parameters) == 3:
self.model_options["sampler_cfg_function"] = lambda args: sampler_cfg_function(args["cond"], args["uncond"], args["cond_scale"]) #Old way
else:
self.model_options["sampler_cfg_function"] = sampler_cfg_function
if disable_cfg1_optimization:
self.model_options["disable_cfg1_optimization"] = True

def set_model_sampler_post_cfg_function(self, post_cfg_function, disable_cfg1_optimization=False):
def set_model_sampler_post_cfg_function(self, post_cfg_function):
self.model_options["sampler_post_cfg_function"] = self.model_options.get("sampler_post_cfg_function", []) + [post_cfg_function]
if disable_cfg1_optimization:
self.model_options["disable_cfg1_optimization"] = True

def set_model_unet_function_wrapper(self, unet_wrapper_function):
self.model_options["model_function_wrapper"] = unet_wrapper_function

@@ -174,41 +174,40 @@ class ModelPatcher:
sd.pop(k)
return sd

def patch_model(self, device_to=None, patch_weights=True):
def patch_model(self, device_to=None):
for k in self.object_patches:
old = getattr(self.model, k)
if k not in self.object_patches_backup:
self.object_patches_backup[k] = old
setattr(self.model, k, self.object_patches[k])

if patch_weights:
model_sd = self.model_state_dict()
for key in self.patches:
if key not in model_sd:
print("could not patch. key doesn't exist in model:", key)
continue
model_sd = self.model_state_dict()
for key in self.patches:
if key not in model_sd:
print("could not patch. key doesn't exist in model:", key)
continue

weight = model_sd[key]
weight = model_sd[key]

inplace_update = self.weight_inplace_update
inplace_update = self.weight_inplace_update

if key not in self.backup:
self.backup[key] = weight.to(device=self.offload_device, copy=inplace_update)

if device_to is not None:
temp_weight = ldm_patched.modules.model_management.cast_to_device(weight, device_to, torch.float32, copy=True)
else:
temp_weight = weight.to(torch.float32, copy=True)
out_weight = self.calculate_weight(self.patches[key], temp_weight, key).to(weight.dtype)
if inplace_update:
ldm_patched.modules.utils.copy_to_param(self.model, key, out_weight)
else:
ldm_patched.modules.utils.set_attr(self.model, key, out_weight)
del temp_weight
if key not in self.backup:
self.backup[key] = weight.to(device=self.offload_device, copy=inplace_update)

if device_to is not None:
self.model.to(device_to)
self.current_device = device_to
temp_weight = ldm_patched.modules.model_management.cast_to_device(weight, device_to, torch.float32, copy=True)
else:
temp_weight = weight.to(torch.float32, copy=True)
out_weight = self.calculate_weight(self.patches[key], temp_weight, key).to(weight.dtype)
if inplace_update:
ldm_patched.modules.utils.copy_to_param(self.model, key, out_weight)
else:
ldm_patched.modules.utils.set_attr(self.model, key, out_weight)
del temp_weight

if device_to is not None:
self.model.to(device_to)
self.current_device = device_to

return self.model
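patch_model above follows a backup-then-patch discipline on both sides of the diff: the pristine weight is stashed once per key, the patch is computed in float32, and the result is written back in the original dtype. A minimal sketch of one key, with apply_patches standing in for calculate_weight:

import torch

def patch_weight(module, name, backup, apply_patches, device=None):
    weight = dict(module.named_parameters())[name]
    if name not in backup:
        backup[name] = weight.detach().clone()      # keep one unpatched copy
    temp = weight.to(device or weight.device, torch.float32, copy=True)
    out = apply_patches(temp).to(weight.dtype)
    with torch.no_grad():
        weight.copy_(out)                           # the inplace-update path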
@@ -1,92 +1,27 @@
import torch
import ldm_patched.modules.model_management

def cast_bias_weight(s, input):
bias = None
non_blocking = ldm_patched.modules.model_management.device_supports_non_blocking(input.device)
if s.bias is not None:
bias = s.bias.to(device=input.device, dtype=input.dtype, non_blocking=non_blocking)
weight = s.weight.to(device=input.device, dtype=input.dtype, non_blocking=non_blocking)
return weight, bias

from contextlib import contextmanager

class disable_weight_init:
class Linear(torch.nn.Linear):
ldm_patched_cast_weights = False
def reset_parameters(self):
return None

def forward_ldm_patched_cast_weights(self, input):
weight, bias = cast_bias_weight(self, input)
return torch.nn.functional.linear(input, weight, bias)

def forward(self, *args, **kwargs):
if self.ldm_patched_cast_weights:
return self.forward_ldm_patched_cast_weights(*args, **kwargs)
else:
return super().forward(*args, **kwargs)

class Conv2d(torch.nn.Conv2d):
ldm_patched_cast_weights = False
def reset_parameters(self):
return None

def forward_ldm_patched_cast_weights(self, input):
weight, bias = cast_bias_weight(self, input)
return self._conv_forward(input, weight, bias)

def forward(self, *args, **kwargs):
if self.ldm_patched_cast_weights:
return self.forward_ldm_patched_cast_weights(*args, **kwargs)
else:
return super().forward(*args, **kwargs)

class Conv3d(torch.nn.Conv3d):
ldm_patched_cast_weights = False
def reset_parameters(self):
return None

def forward_ldm_patched_cast_weights(self, input):
weight, bias = cast_bias_weight(self, input)
return self._conv_forward(input, weight, bias)

def forward(self, *args, **kwargs):
if self.ldm_patched_cast_weights:
return self.forward_ldm_patched_cast_weights(*args, **kwargs)
else:
return super().forward(*args, **kwargs)

class GroupNorm(torch.nn.GroupNorm):
ldm_patched_cast_weights = False
def reset_parameters(self):
return None

def forward_ldm_patched_cast_weights(self, input):
weight, bias = cast_bias_weight(self, input)
return torch.nn.functional.group_norm(input, self.num_groups, weight, bias, self.eps)

def forward(self, *args, **kwargs):
if self.ldm_patched_cast_weights:
return self.forward_ldm_patched_cast_weights(*args, **kwargs)
else:
return super().forward(*args, **kwargs)


class LayerNorm(torch.nn.LayerNorm):
ldm_patched_cast_weights = False
def reset_parameters(self):
return None

def forward_ldm_patched_cast_weights(self, input):
weight, bias = cast_bias_weight(self, input)
return torch.nn.functional.layer_norm(input, self.normalized_shape, weight, bias, self.eps)

def forward(self, *args, **kwargs):
if self.ldm_patched_cast_weights:
return self.forward_ldm_patched_cast_weights(*args, **kwargs)
else:
return super().forward(*args, **kwargs)

@classmethod
def conv_nd(s, dims, *args, **kwargs):
if dims == 2:

@@ -96,19 +31,35 @@ class disable_weight_init:
else:
raise ValueError(f"unsupported dimensions: {dims}")

def cast_bias_weight(s, input):
bias = None
if s.bias is not None:
bias = s.bias.to(device=input.device, dtype=input.dtype)
weight = s.weight.to(device=input.device, dtype=input.dtype)
return weight, bias

class manual_cast(disable_weight_init):
class Linear(disable_weight_init.Linear):
ldm_patched_cast_weights = True
def forward(self, input):
weight, bias = cast_bias_weight(self, input)
return torch.nn.functional.linear(input, weight, bias)

class Conv2d(disable_weight_init.Conv2d):
ldm_patched_cast_weights = True
def forward(self, input):
weight, bias = cast_bias_weight(self, input)
return self._conv_forward(input, weight, bias)

class Conv3d(disable_weight_init.Conv3d):
ldm_patched_cast_weights = True
def forward(self, input):
weight, bias = cast_bias_weight(self, input)
return self._conv_forward(input, weight, bias)

class GroupNorm(disable_weight_init.GroupNorm):
ldm_patched_cast_weights = True
def forward(self, input):
weight, bias = cast_bias_weight(self, input)
return torch.nn.functional.group_norm(input, self.num_groups, weight, bias, self.eps)

class LayerNorm(disable_weight_init.LayerNorm):
ldm_patched_cast_weights = True
def forward(self, input):
weight, bias = cast_bias_weight(self, input)
return torch.nn.functional.layer_norm(input, self.normalized_shape, weight, bias, self.eps)
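Both versions of ops.py above realize the same idea: keep parameters in their storage dtype/device and cast them to match the input on the fly, while reset_parameters is stubbed out so loaded weights are never randomly initialized. Reduced to a single class for clarity:

import torch

class CastLinear(torch.nn.Linear):
    def reset_parameters(self):
        return None  # skip random init; weights are expected to be loaded

    def forward(self, input):
        weight = self.weight.to(device=input.device, dtype=input.dtype)
        bias = None if self.bias is None else self.bias.to(device=input.device, dtype=input.dtype)
        return torch.nn.functional.linear(input, weight, bias)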
@@ -28,6 +28,7 @@ def prepare_noise(latent_image, seed, noise_inds=None):
def prepare_mask(noise_mask, shape, device):
"""ensures noise mask is of proper dimensions"""
noise_mask = torch.nn.functional.interpolate(noise_mask.reshape((-1, 1, noise_mask.shape[-2], noise_mask.shape[-1])), size=(shape[2], shape[3]), mode="bilinear")
noise_mask = noise_mask.round()
noise_mask = torch.cat([noise_mask] * shape[1], dim=1)
noise_mask = ldm_patched.modules.utils.repeat_to_batch_size(noise_mask, shape[0])
noise_mask = noise_mask.to(device)

@@ -46,8 +47,7 @@ def convert_cond(cond):
temp = c[1].copy()
model_conds = temp.get("model_conds", {})
if c[0] is not None:
model_conds["c_crossattn"] = ldm_patched.modules.conds.CONDCrossAttn(c[0]) #TODO: remove
temp["cross_attn"] = c[0]
model_conds["c_crossattn"] = ldm_patched.modules.conds.CONDCrossAttn(c[0])
temp["model_conds"] = model_conds
out.append(temp)
return out
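prepare_mask above reshapes an arbitrary mask into the latent layout of shape = (B, C, H, W). A self-contained sketch of each step, with repeat_to_batch_size approximated by repeat-and-slice:

import torch

def prepare_mask_sketch(noise_mask, shape, device):
    m = noise_mask.reshape((-1, 1, noise_mask.shape[-2], noise_mask.shape[-1]))
    m = torch.nn.functional.interpolate(m, size=(shape[2], shape[3]), mode="bilinear")
    m = m.round()                                  # re-binarize after resampling
    m = m.repeat(1, shape[1], 1, 1)                # one copy per latent channel
    reps = -(-shape[0] // m.shape[0])              # ceiling division
    return m.repeat(reps, 1, 1, 1)[:shape[0]].to(device)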
@@ -1,9 +1,13 @@
from ldm_patched.k_diffusion import sampling as k_diffusion_sampling
from ldm_patched.unipc import uni_pc
import torch
import enum
import collections
from ldm_patched.modules import model_management
import math
from ldm_patched.modules import model_base
import ldm_patched.modules.utils
import ldm_patched.modules.conds

def get_area_and_mult(conds, x_in, timestep_in):
area = (x_in.shape[2], x_in.shape[3], 0, 0)

@@ -240,7 +244,7 @@ def calc_cond_uncond_batch(model, cond, uncond, x_in, timestep, model_options):
#The main sampling function shared by all the samplers
#Returns denoised
def sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options={}, seed=None):
if math.isclose(cond_scale, 1.0) and model_options.get("disable_cfg1_optimization", False) == False:
if math.isclose(cond_scale, 1.0):
uncond_ = None
else:
uncond_ = uncond

@@ -595,13 +599,6 @@ def sample(model, noise, positive, negative, cfg, device, sampler, sigmas, model
calculate_start_end_timesteps(model, negative)
calculate_start_end_timesteps(model, positive)

if latent_image is not None:
latent_image = model.process_latent_in(latent_image)

if hasattr(model, 'extra_conds'):
positive = encode_model_conds(model.extra_conds, positive, noise, device, "positive", latent_image=latent_image, denoise_mask=denoise_mask, seed=seed)
negative = encode_model_conds(model.extra_conds, negative, noise, device, "negative", latent_image=latent_image, denoise_mask=denoise_mask, seed=seed)

#make sure each cond area has an opposite one with the same area
for c in positive:
create_cond_with_same_area_if_none(negative, c)

@@ -613,6 +610,13 @@
apply_empty_x_to_equal_area(list(filter(lambda c: c.get('control_apply_to_uncond', False) == True, positive)), negative, 'control', lambda cond_cnets, x: cond_cnets[x])
apply_empty_x_to_equal_area(positive, negative, 'gligen', lambda cond_cnets, x: cond_cnets[x])

if latent_image is not None:
latent_image = model.process_latent_in(latent_image)

if hasattr(model, 'extra_conds'):
positive = encode_model_conds(model.extra_conds, positive, noise, device, "positive", latent_image=latent_image, denoise_mask=denoise_mask)
negative = encode_model_conds(model.extra_conds, negative, noise, device, "negative", latent_image=latent_image, denoise_mask=denoise_mask)

extra_args = {"cond":positive, "uncond":negative, "cond_scale": cfg, "model_options": model_options, "seed":seed}

samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)

@@ -635,7 +639,7 @@ def calculate_sigmas_scheduler(model, scheduler_name, steps):
elif scheduler_name == "sgm_uniform":
sigmas = normal_scheduler(model, steps, sgm=True)
else:
print("error invalid scheduler", scheduler_name)
print("error invalid scheduler", self.scheduler)
return sigmas

def sampler_object(name):
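Background for the disable_cfg1_optimization flag above: classifier-free guidance combines the two forward passes as denoised = uncond + (cond - uncond) * cond_scale, so at cond_scale == 1.0 the uncond term cancels and its pass can be skipped entirely; the flag exists for custom cfg functions that still need the uncond result.

def cfg_combine(cond, uncond, cond_scale):
    # at cond_scale == 1.0 this reduces to cond, hence the optimization
    return uncond + (cond - uncond) * cond_scale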
@@ -1,6 +1,9 @@
import torch
import contextlib
import math

from ldm_patched.modules import model_management
from ldm_patched.ldm.util import instantiate_from_config
from ldm_patched.ldm.models.autoencoder import AutoencoderKL, AutoencodingEngine
import yaml

@@ -154,8 +157,6 @@ class VAE:

self.memory_used_encode = lambda shape, dtype: (1767 * shape[2] * shape[3]) * model_management.dtype_size(dtype) #These are for AutoencoderKL and need tweaking (should be lower)
self.memory_used_decode = lambda shape, dtype: (2178 * shape[2] * shape[3] * 64) * model_management.dtype_size(dtype)
self.downscale_ratio = 8
self.latent_channels = 4

if config is None:
if "decoder.mid.block_1.mix_factor" in sd:

@@ -171,11 +172,6 @@ class VAE:
else:
#default SD1.x/SD2.x VAE parameters
ddconfig = {'double_z': True, 'z_channels': 4, 'resolution': 256, 'in_channels': 3, 'out_ch': 3, 'ch': 128, 'ch_mult': [1, 2, 4, 4], 'num_res_blocks': 2, 'attn_resolutions': [], 'dropout': 0.0}

if 'encoder.down.2.downsample.conv.weight' not in sd: #Stable diffusion x4 upscaler VAE
ddconfig['ch_mult'] = [1, 2, 4]
self.downscale_ratio = 4

self.first_stage_model = AutoencoderKL(ddconfig=ddconfig, embed_dim=4)
else:
self.first_stage_model = AutoencoderKL(**(config['params']))

@@ -208,9 +204,9 @@ class VAE:

decode_fn = lambda a: (self.first_stage_model.decode(a.to(self.vae_dtype).to(self.device)) + 1.0).float()
output = torch.clamp((
(ldm_patched.modules.utils.tiled_scale(samples, decode_fn, tile_x // 2, tile_y * 2, overlap, upscale_amount = self.downscale_ratio, output_device=self.output_device, pbar = pbar) +
ldm_patched.modules.utils.tiled_scale(samples, decode_fn, tile_x * 2, tile_y // 2, overlap, upscale_amount = self.downscale_ratio, output_device=self.output_device, pbar = pbar) +
ldm_patched.modules.utils.tiled_scale(samples, decode_fn, tile_x, tile_y, overlap, upscale_amount = self.downscale_ratio, output_device=self.output_device, pbar = pbar))
(ldm_patched.modules.utils.tiled_scale(samples, decode_fn, tile_x // 2, tile_y * 2, overlap, upscale_amount = 8, output_device=self.output_device, pbar = pbar) +
ldm_patched.modules.utils.tiled_scale(samples, decode_fn, tile_x * 2, tile_y // 2, overlap, upscale_amount = 8, output_device=self.output_device, pbar = pbar) +
ldm_patched.modules.utils.tiled_scale(samples, decode_fn, tile_x, tile_y, overlap, upscale_amount = 8, output_device=self.output_device, pbar = pbar))
/ 3.0) / 2.0, min=0.0, max=1.0)
return output
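The tiled decode above intentionally runs three different tilings (tall, wide, square) and averages them: seams from one tiling fall in the interior of the others, so the average suppresses visible tile boundaries. Schematically:

def tiled_decode_sketch(samples, decode_fn, tiled_scale, tile_x, tile_y, overlap):
    a = tiled_scale(samples, decode_fn, tile_x // 2, tile_y * 2, overlap)  # tall tiles
    b = tiled_scale(samples, decode_fn, tile_x * 2, tile_y // 2, overlap)  # wide tiles
    c = tiled_scale(samples, decode_fn, tile_x, tile_y, overlap)           # square tiles
    return (a + b + c) / 3.0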
@@ -221,9 +217,9 @@ class VAE:
pbar = ldm_patched.modules.utils.ProgressBar(steps)

encode_fn = lambda a: self.first_stage_model.encode((2. * a - 1.).to(self.vae_dtype).to(self.device)).float()
samples = ldm_patched.modules.utils.tiled_scale(pixel_samples, encode_fn, tile_x, tile_y, overlap, upscale_amount = (1/self.downscale_ratio), out_channels=self.latent_channels, output_device=self.output_device, pbar=pbar)
samples += ldm_patched.modules.utils.tiled_scale(pixel_samples, encode_fn, tile_x * 2, tile_y // 2, overlap, upscale_amount = (1/self.downscale_ratio), out_channels=self.latent_channels, output_device=self.output_device, pbar=pbar)
samples += ldm_patched.modules.utils.tiled_scale(pixel_samples, encode_fn, tile_x // 2, tile_y * 2, overlap, upscale_amount = (1/self.downscale_ratio), out_channels=self.latent_channels, output_device=self.output_device, pbar=pbar)
samples = ldm_patched.modules.utils.tiled_scale(pixel_samples, encode_fn, tile_x, tile_y, overlap, upscale_amount = (1/8), out_channels=4, output_device=self.output_device, pbar=pbar)
samples += ldm_patched.modules.utils.tiled_scale(pixel_samples, encode_fn, tile_x * 2, tile_y // 2, overlap, upscale_amount = (1/8), out_channels=4, output_device=self.output_device, pbar=pbar)
samples += ldm_patched.modules.utils.tiled_scale(pixel_samples, encode_fn, tile_x // 2, tile_y * 2, overlap, upscale_amount = (1/8), out_channels=4, output_device=self.output_device, pbar=pbar)
samples /= 3.0
return samples

@@ -235,7 +231,7 @@ class VAE:
batch_number = int(free_memory / memory_used)
batch_number = max(1, batch_number)

pixel_samples = torch.empty((samples_in.shape[0], 3, round(samples_in.shape[2] * self.downscale_ratio), round(samples_in.shape[3] * self.downscale_ratio)), device=self.output_device)
pixel_samples = torch.empty((samples_in.shape[0], 3, round(samples_in.shape[2] * 8), round(samples_in.shape[3] * 8)), device=self.output_device)
for x in range(0, samples_in.shape[0], batch_number):
samples = samples_in[x:x+batch_number].to(self.vae_dtype).to(self.device)
pixel_samples[x:x+batch_number] = torch.clamp((self.first_stage_model.decode(samples).to(self.output_device).float() + 1.0) / 2.0, min=0.0, max=1.0)

@@ -259,7 +255,7 @@ class VAE:
free_memory = model_management.get_free_memory(self.device)
batch_number = int(free_memory / memory_used)
batch_number = max(1, batch_number)
samples = torch.empty((pixel_samples.shape[0], self.latent_channels, round(pixel_samples.shape[2] // self.downscale_ratio), round(pixel_samples.shape[3] // self.downscale_ratio)), device=self.output_device)
samples = torch.empty((pixel_samples.shape[0], 4, round(pixel_samples.shape[2] // 8), round(pixel_samples.shape[3] // 8)), device=self.output_device)
for x in range(0, pixel_samples.shape[0], batch_number):
pixels_in = (2. * pixel_samples[x:x+batch_number] - 1.).to(self.vae_dtype).to(self.device)
samples[x:x+batch_number] = self.first_stage_model.encode(pixels_in).to(self.output_device).float()

@@ -531,14 +527,7 @@ def load_unet(unet_path):
raise RuntimeError("ERROR: Could not detect model type of: {}".format(unet_path))
return model

def save_checkpoint(output_path, model, clip=None, vae=None, clip_vision=None, metadata=None):
clip_sd = None
load_models = [model]
if clip is not None:
load_models.append(clip.load_model())
clip_sd = clip.get_sd()

model_management.load_models_gpu(load_models)
clip_vision_sd = clip_vision.get_sd() if clip_vision is not None else None
sd = model.model.state_dict_for_saving(clip_sd, vae.get_sd(), clip_vision_sd)
def save_checkpoint(output_path, model, clip, vae, metadata=None):
model_management.load_models_gpu([model, clip.load_model()])
sd = model.model.state_dict_for_saving(clip.get_sd(), vae.get_sd())
ldm_patched.modules.utils.save_torch_file(sd, output_path, metadata=metadata)
@@ -6,6 +6,7 @@ import torch
import traceback
import zipfile
from . import model_management
import contextlib
import ldm_patched.modules.clip_model
import json


@@ -252,59 +252,5 @@ class SVD_img2vid(supported_models_base.BASE):
def clip_target(self):
return None

class Stable_Zero123(supported_models_base.BASE):
unet_config = {
"context_dim": 768,
"model_channels": 320,
"use_linear_in_transformer": False,
"adm_in_channels": None,
"use_temporal_attention": False,
"in_channels": 8,
}

unet_extra_config = {
"num_heads": 8,
"num_head_channels": -1,
}

clip_vision_prefix = "cond_stage_model.model.visual."

latent_format = latent_formats.SD15

def get_model(self, state_dict, prefix="", device=None):
out = model_base.Stable_Zero123(self, device=device, cc_projection_weight=state_dict["cc_projection.weight"], cc_projection_bias=state_dict["cc_projection.bias"])
return out

def clip_target(self):
return None

class SD_X4Upscaler(SD20):
unet_config = {
"context_dim": 1024,
"model_channels": 256,
'in_channels': 7,
"use_linear_in_transformer": True,
"adm_in_channels": None,
"use_temporal_attention": False,
}

unet_extra_config = {
"disable_self_attentions": [True, True, True, False],
"num_classes": 1000,
"num_heads": 8,
"num_head_channels": -1,
}

latent_format = latent_formats.SD_X4

sampling_settings = {
"linear_start": 0.0001,
"linear_end": 0.02,
}

def get_model(self, state_dict, prefix="", device=None):
out = model_base.SD_X4Upscaler(self, device=device)
return out

models = [Stable_Zero123, SD15, SD20, SD21UnclipL, SD21UnclipH, SDXLRefiner, SDXL, SSD1B, Segmind_Vega, SD_X4Upscaler]
models = [SD15, SD20, SD21UnclipL, SD21UnclipH, SDXLRefiner, SDXL, SSD1B, Segmind_Vega]
models += [SVD_img2vid]
@@ -65,12 +65,6 @@ class BASE:
replace_prefix = {"": "cond_stage_model."}
return utils.state_dict_prefix_replace(state_dict, replace_prefix)

def process_clip_vision_state_dict_for_saving(self, state_dict):
replace_prefix = {}
if self.clip_vision_prefix is not None:
replace_prefix[""] = self.clip_vision_prefix
return utils.state_dict_prefix_replace(state_dict, replace_prefix)

def process_unet_state_dict_for_saving(self, state_dict):
replace_prefix = {"": "model.diffusion_model."}
return utils.state_dict_prefix_replace(state_dict, replace_prefix)
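All of the process_*_state_dict_for_saving helpers above delegate to the same primitive: re-keying a flat state dict. A sketch of what state_dict_prefix_replace plausibly does (simplified, not the verified implementation):

def state_dict_prefix_replace(state_dict, replace_prefix):
    out = dict(state_dict)
    for old, new in replace_prefix.items():
        out = {(new + k[len(old):]) if k.startswith(old) else k: v for k, v in out.items()}
    return out

print(state_dict_prefix_replace({"weight": 1}, {"": "model.diffusion_model."}))
# {'model.diffusion_model.weight': 1}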
@@ -14,7 +14,7 @@ from .timm.weight_init import trunc_normal_

def drop_path(x, drop_prob: float = 0.0, training: bool = False):
"""Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
From: https://github.com/huggingface/pytorch-image-models/blob/main/timm/layers/drop.py
From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/drop.py
"""
if drop_prob == 0.0 or not training:
return x

@@ -30,7 +30,7 @@ def drop_path(x, drop_prob: float = 0.0, training: bool = False):

class DropPath(nn.Module):
"""Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
From: https://github.com/huggingface/pytorch-image-models/blob/main/timm/layers/drop.py
From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/drop.py
"""

def __init__(self, drop_prob=None):

@@ -13,7 +13,7 @@ import torch.nn.functional as F
from . import block as B


# Borrowed from https://github.com/rlaphoenix/VSGAN/blob/master/vsgan/archs/esrgan.py
# Borrowed from https://github.com/rlaphoenix/VSGAN/blob/master/vsgan/archs/ESRGAN.py
# Which enhanced stuff that was already here
class RRDBNet(nn.Module):
def __init__(

@@ -2,7 +2,7 @@
Modified from https://github.com/sczhou/CodeFormer
VQGAN code, adapted from the original created by the Unleashing Transformers authors:
https://github.com/samb-t/unleashing-transformers/blob/master/models/vqgan.py
This version of the arch specifically was gathered from an old version of GFPGAN. If this is a problem, please contact me.
This verison of the arch specifically was gathered from an old version of GFPGAN. If this is a problem, please contact me.
"""
import math
from typing import Optional
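Since only drop_path's docstring appears in the hunks above, here is the standard stochastic-depth body for reference (the timm original differs only in minor details): each sample is zeroed with probability drop_prob and survivors are rescaled so the expectation is unchanged.

import torch

def drop_path(x, drop_prob: float = 0.0, training: bool = False):
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)   # broadcast over non-batch dims
    mask = x.new_empty(shape).bernoulli_(keep_prob)
    return x * mask / keep_prob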
@@ -7,10 +7,9 @@ import torch
import torch.nn as nn

import ldm_patched.modules.utils
import ldm_patched.modules.ops

def conv(n_in, n_out, **kwargs):
return ldm_patched.modules.ops.disable_weight_init.Conv2d(n_in, n_out, 3, padding=1, **kwargs)
return nn.Conv2d(n_in, n_out, 3, padding=1, **kwargs)

class Clamp(nn.Module):
def forward(self, x):

@@ -20,7 +19,7 @@ class Block(nn.Module):
def __init__(self, n_in, n_out):
super().__init__()
self.conv = nn.Sequential(conv(n_in, n_out), nn.ReLU(), conv(n_out, n_out), nn.ReLU(), conv(n_out, n_out))
self.skip = ldm_patched.modules.ops.disable_weight_init.Conv2d(n_in, n_out, 1, bias=False) if n_in != n_out else nn.Identity()
self.skip = nn.Conv2d(n_in, n_out, 1, bias=False) if n_in != n_out else nn.Identity()
self.fuse = nn.ReLU()
def forward(self, x):
return self.fuse(self.conv(x) + self.skip(x))
@@ -29,14 +29,11 @@ folder_names_and_paths["custom_nodes"] = ([os.path.join(base_path, "custom_nodes

folder_names_and_paths["hypernetworks"] = ([os.path.join(models_dir, "hypernetworks")], supported_pt_extensions)

folder_names_and_paths["photomaker"] = ([os.path.join(models_dir, "photomaker")], supported_pt_extensions)

folder_names_and_paths["classifiers"] = ([os.path.join(models_dir, "classifiers")], {""})

output_directory = os.path.join(os.getcwd(), "output")
temp_directory = os.path.join(os.getcwd(), "temp")
input_directory = os.path.join(os.getcwd(), "input")
user_directory = os.path.join(os.getcwd(), "user")

filename_list_cache = {}

@@ -140,27 +137,15 @@ def recursive_search(directory, excluded_dir_names=None):
excluded_dir_names = []

result = []
dirs = {}

# Attempt to add the initial directory to dirs with error handling
try:
dirs[directory] = os.path.getmtime(directory)
except FileNotFoundError:
print(f"Warning: Unable to access {directory}. Skipping this path.")

dirs = {directory: os.path.getmtime(directory)}
for dirpath, subdirs, filenames in os.walk(directory, followlinks=True, topdown=True):
subdirs[:] = [d for d in subdirs if d not in excluded_dir_names]
for file_name in filenames:
relative_path = os.path.relpath(os.path.join(dirpath, file_name), directory)
result.append(relative_path)

for d in subdirs:
path = os.path.join(dirpath, d)
try:
dirs[path] = os.path.getmtime(path)
except FileNotFoundError:
print(f"Warning: Unable to access {path}. Skipping this path.")
continue
dirs[path] = os.path.getmtime(path)
return result, dirs

def filter_files_extensions(files, extensions):

@@ -199,7 +184,8 @@ def cached_filename_list_(folder_name):
if folder_name not in filename_list_cache:
return None
out = filename_list_cache[folder_name]

if time.perf_counter() < (out[2] + 0.5):
return out
for x in out[1]:
time_modified = out[1][x]
folder = x
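cached_filename_list_ above layers two checks: a half-second grace period that skips filesystem stats entirely, then an mtime comparison per recorded directory. In isolation, under the assumed cache-entry layout:

import os
import time

filename_list_cache = {}  # name -> (files, {folder: mtime}, timestamp)

def cached_filename_list(name):
    entry = filename_list_cache.get(name)
    if entry is None:
        return None
    if time.perf_counter() < entry[2] + 0.5:
        return entry                       # fresh enough, skip stat calls
    for folder, mtime in entry[1].items():
        if not os.path.isdir(folder) or os.path.getmtime(folder) != mtime:
            return None                    # stale, caller must rescan
    return entry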
30
modules/advanced_parameters.py
Normal file
@@ -0,0 +1,30 @@
disable_preview, adm_scaler_positive, adm_scaler_negative, adm_scaler_end, adaptive_cfg, sampler_name, \
scheduler_name, generate_image_grid, overwrite_step, overwrite_switch, overwrite_width, overwrite_height, \
overwrite_vary_strength, overwrite_upscale_strength, \
mixing_image_prompt_and_vary_upscale, mixing_image_prompt_and_inpaint, \
debugging_cn_preprocessor, skipping_cn_preprocessor, controlnet_softness, canny_low_threshold, canny_high_threshold, \
refiner_swap_method, \
freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2, \
debugging_inpaint_preprocessor, inpaint_disable_initial_latent, inpaint_engine, inpaint_strength, inpaint_respective_field = [None] * 32


def set_all_advanced_parameters(*args):
global disable_preview, adm_scaler_positive, adm_scaler_negative, adm_scaler_end, adaptive_cfg, sampler_name, \
scheduler_name, generate_image_grid, overwrite_step, overwrite_switch, overwrite_width, overwrite_height, \
overwrite_vary_strength, overwrite_upscale_strength, \
mixing_image_prompt_and_vary_upscale, mixing_image_prompt_and_inpaint, \
debugging_cn_preprocessor, skipping_cn_preprocessor, controlnet_softness, canny_low_threshold, canny_high_threshold, \
refiner_swap_method, \
freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2, \
debugging_inpaint_preprocessor, inpaint_disable_initial_latent, inpaint_engine, inpaint_strength, inpaint_respective_field

disable_preview, adm_scaler_positive, adm_scaler_negative, adm_scaler_end, adaptive_cfg, sampler_name, \
scheduler_name, generate_image_grid, overwrite_step, overwrite_switch, overwrite_width, overwrite_height, \
overwrite_vary_strength, overwrite_upscale_strength, \
mixing_image_prompt_and_vary_upscale, mixing_image_prompt_and_inpaint, \
debugging_cn_preprocessor, skipping_cn_preprocessor, controlnet_softness, canny_low_threshold, canny_high_threshold, \
refiner_swap_method, \
freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2, \
debugging_inpaint_preprocessor, inpaint_disable_initial_latent, inpaint_engine, inpaint_strength, inpaint_respective_field = args

return
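The new module above threads 32 settings through positional *args into module-level globals, so every caller must match the declared order exactly. A name-keyed equivalent, shown only for comparison (the three names are a truncated sample of the real list):

from types import SimpleNamespace

ADVANCED_PARAM_NAMES = ["disable_preview", "adm_scaler_positive", "adm_scaler_negative"]  # ...32 total

def set_all_advanced_parameters(*args):
    assert len(args) == len(ADVANCED_PARAM_NAMES), "argument count/order must match"
    return SimpleNamespace(**dict(zip(ADVANCED_PARAM_NAMES, args)))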
@@ -1,16 +1,11 @@
import threading
import re
from modules.patch import PatchSettings, patch_settings, patch_all

patch_all()

class AsyncTask:
def __init__(self, args):
self.args = args
self.yields = []
self.results = []
self.last_stop = False
self.processing = False


async_tasks = []
@@ -19,11 +14,9 @@ async_tasks = []
def worker():
global async_tasks

import os
import traceback
import math
import numpy as np
import cv2
import torch
import time
import shared

@@ -38,22 +31,16 @@ def worker():
import extras.preprocessors as preprocessors
import modules.inpaint_worker as inpaint_worker
import modules.constants as constants
import modules.advanced_parameters as advanced_parameters
import extras.ip_adapter as ip_adapter
import extras.face_crop
import fooocus_version
import args_manager

from modules.sdxl_styles import apply_style, apply_wildcards, fooocus_expansion, apply_arrays
from modules.sdxl_styles import apply_style, apply_wildcards, fooocus_expansion
from modules.private_logger import log
from extras.expansion import safe_str
from modules.util import remove_empty_str, HWC3, resize_image, get_image_shape_ceil, set_image_shape_ceil, \
get_shape_ceil, resample_image, erode_or_dilate, ordinal_suffix, get_enabled_loras
from modules.util import remove_empty_str, HWC3, resize_image, \
get_image_shape_ceil, set_image_shape_ceil, get_shape_ceil, resample_image
from modules.upscaler import perform_upscale
from modules.flags import Performance
from modules.meta_parser import get_metadata_parser, MetadataScheme

pid = os.getpid()
print(f'Started worker with PID {pid}')

try:
async_gradio_app = shared.gradio_root
@@ -81,20 +68,19 @@
return

def build_image_wall(async_task):
results = []

if len(async_task.results) < 2:
if not advanced_parameters.generate_image_grid:
return

for img in async_task.results:
if isinstance(img, str) and os.path.exists(img):
img = cv2.imread(img)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
results = async_task.results

if len(results) < 2:
return

for img in results:
if not isinstance(img, np.ndarray):
return
if img.ndim != 3:
return
results.append(img)

H, W, C = results[0].shape
@@ -128,7 +114,6 @@
@torch.inference_mode()
def handler(async_task):
execution_start_time = time.perf_counter()
async_task.processing = True

args = async_task.args
args.reverse()

@@ -136,18 +121,16 @@
prompt = args.pop()
negative_prompt = args.pop()
style_selections = args.pop()
performance_selection = Performance(args.pop())
performance_selection = args.pop()
aspect_ratios_selection = args.pop()
image_number = args.pop()
output_format = args.pop()
image_seed = args.pop()
read_wildcards_in_order = args.pop()
sharpness = args.pop()
guidance_scale = args.pop()
base_model_name = args.pop()
refiner_model_name = args.pop()
refiner_switch = args.pop()
loras = get_enabled_loras([[bool(args.pop()), str(args.pop()), float(args.pop())] for _ in range(modules.config.default_max_lora_number)])
loras = [[str(args.pop()), float(args.pop())] for _ in range(5)]
input_image_checkbox = args.pop()
current_tab = args.pop()
uov_method = args.pop()
@@ -155,50 +138,9 @@
outpaint_selections = args.pop()
inpaint_input_image = args.pop()
inpaint_additional_prompt = args.pop()
inpaint_mask_image_upload = args.pop()

disable_preview = args.pop()
disable_intermediate_results = args.pop()
disable_seed_increment = args.pop()
adm_scaler_positive = args.pop()
adm_scaler_negative = args.pop()
adm_scaler_end = args.pop()
adaptive_cfg = args.pop()
sampler_name = args.pop()
scheduler_name = args.pop()
overwrite_step = args.pop()
overwrite_switch = args.pop()
overwrite_width = args.pop()
overwrite_height = args.pop()
overwrite_vary_strength = args.pop()
overwrite_upscale_strength = args.pop()
mixing_image_prompt_and_vary_upscale = args.pop()
mixing_image_prompt_and_inpaint = args.pop()
debugging_cn_preprocessor = args.pop()
skipping_cn_preprocessor = args.pop()
canny_low_threshold = args.pop()
canny_high_threshold = args.pop()
refiner_swap_method = args.pop()
controlnet_softness = args.pop()
freeu_enabled = args.pop()
freeu_b1 = args.pop()
freeu_b2 = args.pop()
freeu_s1 = args.pop()
freeu_s2 = args.pop()
debugging_inpaint_preprocessor = args.pop()
inpaint_disable_initial_latent = args.pop()
inpaint_engine = args.pop()
inpaint_strength = args.pop()
inpaint_respective_field = args.pop()
inpaint_mask_upload_checkbox = args.pop()
invert_mask_checkbox = args.pop()
inpaint_erode_or_dilate = args.pop()

save_metadata_to_images = args.pop() if not args_manager.args.disable_metadata else False
metadata_scheme = MetadataScheme(args.pop()) if not args_manager.args.disable_metadata else MetadataScheme.FOOOCUS

cn_tasks = {x: [] for x in flags.ip_list}
for _ in range(flags.controlnet_image_count):
for _ in range(4):
cn_img = args.pop()
cn_stop = args.pop()
cn_weight = args.pop()
@@ -223,9 +165,17 @@
print(f'Refiner disabled because base model and refiner are same.')
refiner_model_name = 'None'

steps = performance_selection.steps()
assert performance_selection in ['Speed', 'Quality', 'Extreme Speed']

if performance_selection == Performance.EXTREME_SPEED:
steps = 30

if performance_selection == 'Speed':
steps = 30

if performance_selection == 'Quality':
steps = 60

if performance_selection == 'Extreme Speed':
print('Enter LCM mode.')
progressbar(async_task, 1, 'Downloading LCM components ...')
loras += [(modules.config.downloading_sdxl_lcm_lora(), 1.0)]
@@ -234,51 +184,30 @@
print(f'Refiner disabled in LCM mode.')

refiner_model_name = 'None'
sampler_name = 'lcm'
scheduler_name = 'lcm'
sharpness = 0.0
guidance_scale = 1.0
adaptive_cfg = 1.0
sampler_name = advanced_parameters.sampler_name = 'lcm'
scheduler_name = advanced_parameters.scheduler_name = 'lcm'
modules.patch.sharpness = sharpness = 0.0
cfg_scale = guidance_scale = 1.0
modules.patch.adaptive_cfg = advanced_parameters.adaptive_cfg = 1.0
refiner_switch = 1.0
adm_scaler_positive = 1.0
adm_scaler_negative = 1.0
adm_scaler_end = 0.0
modules.patch.positive_adm_scale = advanced_parameters.adm_scaler_positive = 1.0
modules.patch.negative_adm_scale = advanced_parameters.adm_scaler_negative = 1.0
modules.patch.adm_scaler_end = advanced_parameters.adm_scaler_end = 0.0
steps = 8

elif performance_selection == Performance.LIGHTNING:
print('Enter Lightning mode.')
progressbar(async_task, 1, 'Downloading Lightning components ...')
loras += [(modules.config.downloading_sdxl_lightning_lora(), 1.0)]
modules.patch.adaptive_cfg = advanced_parameters.adaptive_cfg
print(f'[Parameters] Adaptive CFG = {modules.patch.adaptive_cfg}')

if refiner_model_name != 'None':
print(f'Refiner disabled in Lightning mode.')
modules.patch.sharpness = sharpness
print(f'[Parameters] Sharpness = {modules.patch.sharpness}')

refiner_model_name = 'None'
sampler_name = 'euler'
scheduler_name = 'sgm_uniform'
sharpness = 0.0
guidance_scale = 1.0
adaptive_cfg = 1.0
refiner_switch = 1.0
adm_scaler_positive = 1.0
adm_scaler_negative = 1.0
adm_scaler_end = 0.0

print(f'[Parameters] Adaptive CFG = {adaptive_cfg}')
print(f'[Parameters] Sharpness = {sharpness}')
print(f'[Parameters] ControlNet Softness = {controlnet_softness}')
modules.patch.positive_adm_scale = advanced_parameters.adm_scaler_positive
modules.patch.negative_adm_scale = advanced_parameters.adm_scaler_negative
modules.patch.adm_scaler_end = advanced_parameters.adm_scaler_end
print(f'[Parameters] ADM Scale = '
f'{adm_scaler_positive} : '
f'{adm_scaler_negative} : '
f'{adm_scaler_end}')

patch_settings[pid] = PatchSettings(
sharpness,
adm_scaler_end,
adm_scaler_positive,
adm_scaler_negative,
controlnet_softness,
adaptive_cfg
)
f'{modules.patch.positive_adm_scale} : '
f'{modules.patch.negative_adm_scale} : '
f'{modules.patch.adm_scaler_end}')

cfg_scale = float(guidance_scale)
print(f'[Parameters] CFG = {cfg_scale}')
@ -291,9 +220,10 @@ def worker():
|
||||
width, height = int(width), int(height)
|
||||
|
||||
skip_prompt_processing = False
|
||||
refiner_swap_method = advanced_parameters.refiner_swap_method
|
||||
|
||||
inpaint_worker.current_task = None
|
||||
inpaint_parameterized = inpaint_engine != 'None'
|
||||
inpaint_parameterized = advanced_parameters.inpaint_engine != 'None'
|
||||
inpaint_image = None
|
||||
inpaint_mask = None
|
||||
inpaint_head_model_path = None
|
||||
@ -307,12 +237,15 @@ def worker():
|
||||
seed = int(image_seed)
|
||||
print(f'[Parameters] Seed = {seed}')
|
||||
|
||||
sampler_name = advanced_parameters.sampler_name
|
||||
scheduler_name = advanced_parameters.scheduler_name
|
||||
|
||||
goals = []
|
||||
tasks = []
|
||||
|
||||
if input_image_checkbox:
|
||||
if (current_tab == 'uov' or (
|
||||
current_tab == 'ip' and mixing_image_prompt_and_vary_upscale)) \
|
||||
current_tab == 'ip' and advanced_parameters.mixing_image_prompt_and_vary_upscale)) \
|
||||
and uov_method != flags.disabled and uov_input_image is not None:
|
||||
uov_input_image = HWC3(uov_input_image)
|
||||
if 'vary' in uov_method:
|
||||
@ -322,45 +255,37 @@ def worker():
|
||||
if 'fast' in uov_method:
|
||||
skip_prompt_processing = True
|
||||
else:
|
||||
steps = performance_selection.steps_uov()
|
||||
steps = 18
|
||||
|
||||
if performance_selection == 'Speed':
|
||||
steps = 18
|
||||
|
||||
if performance_selection == 'Quality':
|
||||
steps = 36
|
||||
|
||||
if performance_selection == 'Extreme Speed':
|
||||
steps = 8
|
||||
|
||||
progressbar(async_task, 1, 'Downloading upscale models ...')
|
||||
modules.config.downloading_upscale_model()
|
||||
if (current_tab == 'inpaint' or (
|
||||
current_tab == 'ip' and mixing_image_prompt_and_inpaint)) \
|
||||
current_tab == 'ip' and advanced_parameters.mixing_image_prompt_and_inpaint)) \
|
||||
and isinstance(inpaint_input_image, dict):
|
||||
inpaint_image = inpaint_input_image['image']
|
||||
inpaint_mask = inpaint_input_image['mask'][:, :, 0]
|
||||
|
||||
if inpaint_mask_upload_checkbox:
|
||||
if isinstance(inpaint_mask_image_upload, np.ndarray):
|
||||
if inpaint_mask_image_upload.ndim == 3:
|
||||
H, W, C = inpaint_image.shape
|
||||
inpaint_mask_image_upload = resample_image(inpaint_mask_image_upload, width=W, height=H)
|
||||
inpaint_mask_image_upload = np.mean(inpaint_mask_image_upload, axis=2)
|
||||
inpaint_mask_image_upload = (inpaint_mask_image_upload > 127).astype(np.uint8) * 255
|
||||
inpaint_mask = np.maximum(inpaint_mask, inpaint_mask_image_upload)
|
||||
|
||||
if int(inpaint_erode_or_dilate) != 0:
|
||||
inpaint_mask = erode_or_dilate(inpaint_mask, inpaint_erode_or_dilate)
|
||||
|
||||
if invert_mask_checkbox:
|
||||
inpaint_mask = 255 - inpaint_mask
|
||||
|
||||
inpaint_image = HWC3(inpaint_image)
|
||||
if isinstance(inpaint_image, np.ndarray) and isinstance(inpaint_mask, np.ndarray) \
|
||||
and (np.any(inpaint_mask > 127) or len(outpaint_selections) > 0):
|
||||
progressbar(async_task, 1, 'Downloading upscale models ...')
|
||||
modules.config.downloading_upscale_model()
|
||||
if inpaint_parameterized:
|
||||
progressbar(async_task, 1, 'Downloading inpainter ...')
|
||||
modules.config.downloading_upscale_model()
|
||||
inpaint_head_model_path, inpaint_patch_model_path = modules.config.downloading_inpaint_models(
|
||||
inpaint_engine)
|
||||
advanced_parameters.inpaint_engine)
|
||||
base_model_additional_loras += [(inpaint_patch_model_path, 1.0)]
|
||||
print(f'[Inpaint] Current inpaint model is {inpaint_patch_model_path}')
|
||||
if refiner_model_name == 'None':
|
||||
use_synthetic_refiner = True
|
||||
refiner_switch = 0.8
|
||||
refiner_switch = 0.5
|
||||
else:
|
||||
inpaint_head_model_path, inpaint_patch_model_path = None, None
|
||||
print(f'[Inpaint] Parameterized inpaint is disabled.')
|
||||
@ -371,8 +296,8 @@ def worker():
|
||||
prompt = inpaint_additional_prompt + '\n' + prompt
|
||||
goals.append('inpaint')
|
||||
if current_tab == 'ip' or \
|
||||
mixing_image_prompt_and_vary_upscale or \
|
||||
mixing_image_prompt_and_inpaint:
|
||||
advanced_parameters.mixing_image_prompt_and_inpaint or \
|
||||
advanced_parameters.mixing_image_prompt_and_vary_upscale:
|
||||
goals.append('cn')
|
||||
progressbar(async_task, 1, 'Downloading control models ...')
|
||||
if len(cn_tasks[flags.cn_canny]) > 0:
|
||||
@ -391,19 +316,19 @@ def worker():
|
||||
ip_adapter.load_ip_adapter(clip_vision_path, ip_negative_path, ip_adapter_path)
|
||||
ip_adapter.load_ip_adapter(clip_vision_path, ip_negative_path, ip_adapter_face_path)
|
||||
|
||||
if overwrite_step > 0:
|
||||
steps = overwrite_step
|
||||
|
||||
switch = int(round(steps * refiner_switch))
|
||||
|
||||
if overwrite_switch > 0:
|
||||
switch = overwrite_switch
|
||||
if advanced_parameters.overwrite_step > 0:
|
||||
steps = advanced_parameters.overwrite_step
|
||||
|
||||
if overwrite_width > 0:
|
||||
width = overwrite_width
|
||||
if advanced_parameters.overwrite_switch > 0:
|
||||
switch = advanced_parameters.overwrite_switch
|
||||
|
||||
if overwrite_height > 0:
|
||||
height = overwrite_height
|
||||
if advanced_parameters.overwrite_width > 0:
|
||||
width = advanced_parameters.overwrite_width
|
||||
|
||||
if advanced_parameters.overwrite_height > 0:
|
||||
height = advanced_parameters.overwrite_height
|
||||
|
||||
print(f'[Parameters] Sampler = {sampler_name} - {scheduler_name}')
|
||||
print(f'[Parameters] Steps = {steps} - {switch}')
|
||||
@ -432,19 +357,14 @@ def worker():
|
||||
|
||||
progressbar(async_task, 3, 'Processing prompts ...')
|
||||
tasks = []
|
||||
|
||||
for i in range(image_number):
|
||||
if disable_seed_increment:
|
||||
task_seed = seed % (constants.MAX_SEED + 1)
|
||||
else:
|
||||
task_seed = (seed + i) % (constants.MAX_SEED + 1) # randint is inclusive, % is not
|
||||
|
||||
task_seed = (seed + i) % (constants.MAX_SEED + 1) # randint is inclusive, % is not
|
||||
task_rng = random.Random(task_seed) # may bind to inpaint noise in the future
|
||||
task_prompt = apply_wildcards(prompt, task_rng, i, read_wildcards_in_order)
|
||||
task_prompt = apply_arrays(task_prompt, i)
|
||||
task_negative_prompt = apply_wildcards(negative_prompt, task_rng, i, read_wildcards_in_order)
|
||||
task_extra_positive_prompts = [apply_wildcards(pmt, task_rng, i, read_wildcards_in_order) for pmt in extra_positive_prompts]
|
||||
task_extra_negative_prompts = [apply_wildcards(pmt, task_rng, i, read_wildcards_in_order) for pmt in extra_negative_prompts]
|
||||
|
||||
task_prompt = apply_wildcards(prompt, task_rng)
|
||||
task_negative_prompt = apply_wildcards(negative_prompt, task_rng)
|
||||
task_extra_positive_prompts = [apply_wildcards(pmt, task_rng) for pmt in extra_positive_prompts]
|
||||
task_extra_negative_prompts = [apply_wildcards(pmt, task_rng) for pmt in extra_negative_prompts]
|
||||
|
||||
positive_basic_workloads = []
|
||||
negative_basic_workloads = []
|
||||
@ -476,8 +396,8 @@ def worker():
|
||||
uc=None,
|
||||
positive_top_k=len(positive_basic_workloads),
|
||||
negative_top_k=len(negative_basic_workloads),
|
||||
log_positive_prompt='\n'.join([task_prompt] + task_extra_positive_prompts),
|
||||
log_negative_prompt='\n'.join([task_negative_prompt] + task_extra_negative_prompts),
|
||||
log_positive_prompt='; '.join([task_prompt] + task_extra_positive_prompts),
|
||||
log_negative_prompt='; '.join([task_negative_prompt] + task_extra_negative_prompts),
|
||||
))
|
||||
|
||||
if use_expansion:
|
||||
@ -507,8 +427,8 @@ def worker():
|
||||
denoising_strength = 0.5
|
||||
if 'strong' in uov_method:
|
||||
denoising_strength = 0.85
|
||||
if overwrite_vary_strength > 0:
|
||||
denoising_strength = overwrite_vary_strength
|
||||
if advanced_parameters.overwrite_vary_strength > 0:
|
||||
denoising_strength = advanced_parameters.overwrite_vary_strength
|
||||
|
||||
shape_ceil = get_image_shape_ceil(uov_input_image)
|
||||
if shape_ceil < 1024:
|
||||
@ -571,16 +491,16 @@ def worker():
|
||||
direct_return = False
|
||||
|
||||
if direct_return:
|
||||
d = [('Upscale (Fast)', 'upscale_fast', '2x')]
|
||||
uov_input_image_path = log(uov_input_image, d, output_format=output_format)
|
||||
yield_result(async_task, uov_input_image_path, do_not_show_finished_images=True)
|
||||
d = [('Upscale (Fast)', '2x')]
|
||||
log(uov_input_image, d, single_line_number=1)
|
||||
yield_result(async_task, uov_input_image, do_not_show_finished_images=True)
|
||||
return
|
||||
|
||||
tiled = True
|
||||
denoising_strength = 0.382
|
||||
|
||||
if overwrite_upscale_strength > 0:
|
||||
denoising_strength = overwrite_upscale_strength
|
||||
if advanced_parameters.overwrite_upscale_strength > 0:
|
||||
denoising_strength = advanced_parameters.overwrite_upscale_strength
|
||||
|
||||
initial_pixels = core.numpy_to_pytorch(uov_input_image)
|
||||
progressbar(async_task, 13, 'VAE encoding ...')
|
||||
@ -614,29 +534,29 @@ def worker():
|
||||
|
||||
H, W, C = inpaint_image.shape
|
||||
if 'left' in outpaint_selections:
|
||||
inpaint_image = np.pad(inpaint_image, [[0, 0], [int(W * 0.3), 0], [0, 0]], mode='edge')
|
||||
inpaint_mask = np.pad(inpaint_mask, [[0, 0], [int(W * 0.3), 0]], mode='constant',
|
||||
inpaint_image = np.pad(inpaint_image, [[0, 0], [int(H * 0.3), 0], [0, 0]], mode='edge')
|
||||
inpaint_mask = np.pad(inpaint_mask, [[0, 0], [int(H * 0.3), 0]], mode='constant',
|
||||
constant_values=255)
|
||||
if 'right' in outpaint_selections:
|
||||
inpaint_image = np.pad(inpaint_image, [[0, 0], [0, int(W * 0.3)], [0, 0]], mode='edge')
|
||||
inpaint_mask = np.pad(inpaint_mask, [[0, 0], [0, int(W * 0.3)]], mode='constant',
|
||||
inpaint_image = np.pad(inpaint_image, [[0, 0], [0, int(H * 0.3)], [0, 0]], mode='edge')
|
||||
inpaint_mask = np.pad(inpaint_mask, [[0, 0], [0, int(H * 0.3)]], mode='constant',
|
||||
constant_values=255)
|
||||
|
||||
inpaint_image = np.ascontiguousarray(inpaint_image.copy())
|
||||
inpaint_mask = np.ascontiguousarray(inpaint_mask.copy())
|
||||
inpaint_strength = 1.0
|
||||
inpaint_respective_field = 1.0
|
||||
advanced_parameters.inpaint_strength = 1.0
|
||||
advanced_parameters.inpaint_respective_field = 1.0
|
||||
|
||||
denoising_strength = inpaint_strength
|
||||
denoising_strength = advanced_parameters.inpaint_strength
|
||||
|
||||
inpaint_worker.current_task = inpaint_worker.InpaintWorker(
|
||||
image=inpaint_image,
|
||||
mask=inpaint_mask,
|
||||
use_fill=denoising_strength > 0.99,
|
||||
k=inpaint_respective_field
|
||||
k=advanced_parameters.inpaint_respective_field
|
||||
)
|
||||
|
||||
if debugging_inpaint_preprocessor:
|
||||
if advanced_parameters.debugging_inpaint_preprocessor:
|
||||
yield_result(async_task, inpaint_worker.current_task.visualize_mask_processing(),
|
||||
do_not_show_finished_images=True)
|
||||
return
|
||||
@ -682,7 +602,7 @@ def worker():
|
||||
model=pipeline.final_unet
|
||||
)
|
||||
|
||||
if not inpaint_disable_initial_latent:
|
||||
if not advanced_parameters.inpaint_disable_initial_latent:
|
||||
initial_latent = {'samples': latent_fill}
|
||||
|
||||
B, C, H, W = latent_fill.shape
|
||||
@ -695,24 +615,24 @@ def worker():
|
||||
cn_img, cn_stop, cn_weight = task
|
||||
cn_img = resize_image(HWC3(cn_img), width=width, height=height)
|
||||
|
||||
if not skipping_cn_preprocessor:
|
||||
cn_img = preprocessors.canny_pyramid(cn_img, canny_low_threshold, canny_high_threshold)
|
||||
if not advanced_parameters.skipping_cn_preprocessor:
|
||||
cn_img = preprocessors.canny_pyramid(cn_img)
|
||||
|
||||
cn_img = HWC3(cn_img)
|
||||
task[0] = core.numpy_to_pytorch(cn_img)
|
||||
if debugging_cn_preprocessor:
|
||||
if advanced_parameters.debugging_cn_preprocessor:
|
||||
yield_result(async_task, cn_img, do_not_show_finished_images=True)
|
||||
return
|
||||
for task in cn_tasks[flags.cn_cpds]:
|
||||
cn_img, cn_stop, cn_weight = task
|
||||
cn_img = resize_image(HWC3(cn_img), width=width, height=height)
|
||||
|
||||
if not skipping_cn_preprocessor:
|
||||
if not advanced_parameters.skipping_cn_preprocessor:
|
||||
cn_img = preprocessors.cpds(cn_img)
|
||||
|
||||
cn_img = HWC3(cn_img)
|
||||
task[0] = core.numpy_to_pytorch(cn_img)
|
||||
if debugging_cn_preprocessor:
|
||||
if advanced_parameters.debugging_cn_preprocessor:
|
||||
yield_result(async_task, cn_img, do_not_show_finished_images=True)
|
||||
return
|
||||
for task in cn_tasks[flags.cn_ip]:
|
||||
@ -723,21 +643,21 @@ def worker():
|
||||
cn_img = resize_image(cn_img, width=224, height=224, resize_mode=0)
|
||||
|
||||
task[0] = ip_adapter.preprocess(cn_img, ip_adapter_path=ip_adapter_path)
|
||||
if debugging_cn_preprocessor:
|
||||
if advanced_parameters.debugging_cn_preprocessor:
|
||||
yield_result(async_task, cn_img, do_not_show_finished_images=True)
|
||||
return
|
||||
for task in cn_tasks[flags.cn_ip_face]:
|
||||
cn_img, cn_stop, cn_weight = task
|
||||
cn_img = HWC3(cn_img)
|
||||
|
||||
if not skipping_cn_preprocessor:
|
||||
if not advanced_parameters.skipping_cn_preprocessor:
|
||||
cn_img = extras.face_crop.crop_image(cn_img)
|
||||
|
||||
# https://github.com/tencent-ailab/IP-Adapter/blob/d580c50a291566bbf9fc7ac0f760506607297e6d/README.md?plain=1#L75
|
||||
cn_img = resize_image(cn_img, width=224, height=224, resize_mode=0)
|
||||
|
||||
task[0] = ip_adapter.preprocess(cn_img, ip_adapter_path=ip_adapter_face_path)
|
||||
if debugging_cn_preprocessor:
|
||||
if advanced_parameters.debugging_cn_preprocessor:
|
||||
yield_result(async_task, cn_img, do_not_show_finished_images=True)
|
||||
return
|
||||
|
||||
@ -746,14 +666,14 @@ def worker():
|
||||
if len(all_ip_tasks) > 0:
|
||||
pipeline.final_unet = ip_adapter.patch_model(pipeline.final_unet, all_ip_tasks)
|
||||
|
||||
if freeu_enabled:
|
||||
if advanced_parameters.freeu_enabled:
|
||||
print(f'FreeU is enabled!')
|
||||
pipeline.final_unet = core.apply_freeu(
|
||||
pipeline.final_unet,
|
||||
freeu_b1,
|
||||
freeu_b2,
|
||||
freeu_s1,
|
||||
freeu_s2
|
||||
advanced_parameters.freeu_b1,
|
||||
advanced_parameters.freeu_b2,
|
||||
advanced_parameters.freeu_s1,
|
||||
advanced_parameters.freeu_s2
|
||||
)
|
||||
|
||||
all_steps = steps * image_number
|
||||
@ -793,14 +713,13 @@ def worker():
|
||||
done_steps = current_task_id * steps + step
|
||||
async_task.yields.append(['preview', (
|
||||
int(15.0 + 85.0 * float(done_steps) / float(all_steps)),
|
||||
f'Step {step}/{total_steps} in the {current_task_id + 1}{ordinal_suffix(current_task_id + 1)} Sampling', y)])
|
||||
f'Step {step}/{total_steps} in the {current_task_id + 1}-th Sampling',
|
||||
y)])
|
||||
|
||||
for current_task_id, task in enumerate(tasks):
|
||||
execution_start_time = time.perf_counter()
|
||||
|
||||
try:
|
||||
if async_task.last_stop is not False:
|
||||
ldm_patched.modules.model_management.interrupt_current_processing()
|
||||
positive_cond, negative_cond = task['c'], task['uc']
|
||||
|
||||
if 'cn' in goals:
|
||||
@ -828,8 +747,7 @@ def worker():
|
||||
denoise=denoising_strength,
|
||||
tiled=tiled,
|
||||
cfg_scale=cfg_scale,
|
||||
refiner_swap_method=refiner_swap_method,
|
||||
disable_preview=disable_preview
|
||||
refiner_swap_method=refiner_swap_method
|
||||
)
|
||||
|
||||
del task['c'], task['uc'], positive_cond, negative_cond # Save memory
|
||||
@ -837,62 +755,36 @@ def worker():
|
||||
if inpaint_worker.current_task is not None:
|
||||
imgs = [inpaint_worker.current_task.post_process(x) for x in imgs]
|
||||
|
||||
img_paths = []
|
||||
for x in imgs:
|
||||
d = [('Prompt', 'prompt', task['log_positive_prompt']),
|
||||
('Negative Prompt', 'negative_prompt', task['log_negative_prompt']),
|
||||
('Fooocus V2 Expansion', 'prompt_expansion', task['expansion']),
|
||||
('Styles', 'styles', str(raw_style_selections)),
|
||||
('Performance', 'performance', performance_selection.value)]
|
||||
|
||||
if performance_selection.steps() != steps:
|
||||
d.append(('Steps', 'steps', steps))
|
||||
|
||||
d += [('Resolution', 'resolution', str((width, height))),
|
||||
('Guidance Scale', 'guidance_scale', guidance_scale),
|
||||
('Sharpness', 'sharpness', sharpness),
|
||||
('ADM Guidance', 'adm_guidance', str((
|
||||
modules.patch.patch_settings[pid].positive_adm_scale,
|
||||
modules.patch.patch_settings[pid].negative_adm_scale,
|
||||
modules.patch.patch_settings[pid].adm_scaler_end))),
|
||||
('Base Model', 'base_model', base_model_name),
|
||||
('Refiner Model', 'refiner_model', refiner_model_name),
|
||||
('Refiner Switch', 'refiner_switch', refiner_switch)]
|
||||
|
||||
if refiner_model_name != 'None':
|
||||
if overwrite_switch > 0:
|
||||
d.append(('Overwrite Switch', 'overwrite_switch', overwrite_switch))
|
||||
if refiner_swap_method != flags.refiner_swap_method:
|
||||
d.append(('Refiner Swap Method', 'refiner_swap_method', refiner_swap_method))
|
||||
if modules.patch.patch_settings[pid].adaptive_cfg != modules.config.default_cfg_tsnr:
|
||||
d.append(('CFG Mimicking from TSNR', 'adaptive_cfg', modules.patch.patch_settings[pid].adaptive_cfg))
|
||||
|
||||
d.append(('Sampler', 'sampler', sampler_name))
|
||||
d.append(('Scheduler', 'scheduler', scheduler_name))
|
||||
d.append(('Seed', 'seed', str(task['task_seed'])))
|
||||
|
||||
if freeu_enabled:
|
||||
d.append(('FreeU', 'freeu', str((freeu_b1, freeu_b2, freeu_s1, freeu_s2))))
|
||||
|
||||
for li, (n, w) in enumerate(loras):
|
||||
d = [
|
||||
('Prompt', task['log_positive_prompt']),
|
||||
('Negative Prompt', task['log_negative_prompt']),
|
||||
('Fooocus V2 Expansion', task['expansion']),
|
||||
('Styles', str(raw_style_selections)),
|
||||
('Performance', performance_selection),
|
||||
('Resolution', str((width, height))),
|
||||
('Sharpness', sharpness),
|
||||
('Guidance Scale', guidance_scale),
|
||||
('ADM Guidance', str((
|
||||
modules.patch.positive_adm_scale,
|
||||
modules.patch.negative_adm_scale,
|
||||
modules.patch.adm_scaler_end))),
|
||||
('Base Model', base_model_name),
|
||||
('Refiner Model', refiner_model_name),
|
||||
('Refiner Switch', refiner_switch),
|
||||
('Sampler', sampler_name),
|
||||
('Scheduler', scheduler_name),
|
||||
('Seed', task['task_seed'])
|
||||
]
|
||||
for n, w in loras:
|
||||
if n != 'None':
|
||||
d.append((f'LoRA {li + 1}', f'lora_combined_{li + 1}', f'{n} : {w}'))
|
||||
d.append((f'LoRA [{n}] weight', w))
|
||||
log(x, d, single_line_number=3)
|
||||
|
||||
metadata_parser = None
|
||||
if save_metadata_to_images:
|
||||
metadata_parser = modules.meta_parser.get_metadata_parser(metadata_scheme)
|
||||
metadata_parser.set_data(task['log_positive_prompt'], task['positive'],
|
||||
task['log_negative_prompt'], task['negative'],
|
||||
steps, base_model_name, refiner_model_name, loras)
|
||||
d.append(('Metadata Scheme', 'metadata_scheme', metadata_scheme.value if save_metadata_to_images else save_metadata_to_images))
|
||||
d.append(('Version', 'version', 'Fooocus v' + fooocus_version.version))
|
||||
img_paths.append(log(x, d, metadata_parser, output_format))
|
||||
|
||||
yield_result(async_task, img_paths, do_not_show_finished_images=len(tasks) == 1 or disable_intermediate_results)
|
||||
yield_result(async_task, imgs, do_not_show_finished_images=len(tasks) == 1)
|
||||
except ldm_patched.modules.model_management.InterruptProcessingException as e:
|
||||
if async_task.last_stop == 'skip':
|
||||
if shared.last_stop == 'skip':
|
||||
print('User skipped')
|
||||
async_task.last_stop = False
|
||||
continue
|
||||
else:
|
||||
print('User stopped')
|
||||
@ -900,27 +792,21 @@ def worker():
|
||||
|
||||
execution_time = time.perf_counter() - execution_start_time
|
||||
print(f'Generating and saving time: {execution_time:.2f} seconds')
|
||||
async_task.processing = False
|
||||
|
||||
return
|
||||
|
||||
while True:
|
||||
time.sleep(0.01)
|
||||
if len(async_tasks) > 0:
|
||||
task = async_tasks.pop(0)
|
||||
generate_image_grid = task.args.pop(0)
|
||||
|
||||
try:
|
||||
handler(task)
|
||||
if generate_image_grid:
|
||||
build_image_wall(task)
|
||||
build_image_wall(task)
|
||||
task.yields.append(['finish', task.results])
|
||||
pipeline.prepare_text_encoder(async_call=True)
|
||||
except:
|
||||
traceback.print_exc()
|
||||
task.yields.append(['finish', task.results])
|
||||
finally:
|
||||
if pid in modules.patch.patch_settings:
|
||||
del modules.patch.patch_settings[pid]
|
||||
pass
|
||||
|
||||
|
||||
|
@@ -3,41 +3,23 @@ import json
import math
import numbers
import args_manager
import tempfile
import modules.flags
import modules.sdxl_styles

from modules.model_loader import load_file_from_url
from modules.util import get_files_from_folder, makedirs_with_log
from modules.flags import OutputFormat, Performance, MetadataScheme
from modules.util import get_files_from_folder


def get_config_path(key, default_value):
env = os.getenv(key)
if env is not None and isinstance(env, str):
print(f"Environment: {key} = {env}")
return env
else:
return os.path.abspath(default_value)


config_path = get_config_path('config_path', "./config.txt")
config_example_path = get_config_path('config_example_path', "config_modification_tutorial.txt")
config_path = os.path.abspath("./config.txt")
config_example_path = os.path.abspath("config_modification_tutorial.txt")
config_dict = {}
always_save_keys = []
visited_keys = []

try:
with open(os.path.abspath(f'./presets/default.json'), "r", encoding="utf-8") as json_file:
config_dict.update(json.load(json_file))
except Exception as e:
print(f'Load default preset failed.')
print(e)

try:
if os.path.exists(config_path):
with open(config_path, "r", encoding="utf-8") as json_file:
config_dict.update(json.load(json_file))
config_dict = json.load(json_file)
always_save_keys = list(config_dict.keys())
except Exception as e:
print(f'Failed to load config file "{config_path}" . The reason is: {str(e)}')
@@ -97,50 +79,23 @@ def try_load_deprecated_user_path_config():

try_load_deprecated_user_path_config()


def get_presets():
preset_folder = 'presets'
presets = ['initial']
if not os.path.exists(preset_folder):
print('No presets found.')
return presets

return presets + [f[:f.index('.json')] for f in os.listdir(preset_folder) if f.endswith('.json')]


def try_get_preset_content(preset):
if isinstance(preset, str):
preset_path = os.path.abspath(f'./presets/{preset}.json')
try:
if os.path.exists(preset_path):
with open(preset_path, "r", encoding="utf-8") as json_file:
json_content = json.load(json_file)
print(f'Loaded preset: {preset_path}')
return json_content
else:
raise FileNotFoundError
except Exception as e:
print(f'Load preset [{preset_path}] failed')
print(e)
return {}

available_presets = get_presets()
preset = args_manager.args.preset
config_dict.update(try_get_preset_content(preset))

def get_path_output() -> str:
"""
Checking output path argument and overriding default path.
"""
global config_dict
path_output = get_dir_or_set_default('path_outputs', '../outputs/', make_directory=True)
if args_manager.args.output_path:
print(f'Overriding config value path_outputs with {args_manager.args.output_path}')
config_dict['path_outputs'] = path_output = args_manager.args.output_path
return path_output
if isinstance(preset, str):
preset_path = os.path.abspath(f'./presets/{preset}.json')
try:
if os.path.exists(preset_path):
with open(preset_path, "r", encoding="utf-8") as json_file:
config_dict.update(json.load(json_file))
print(f'Loaded preset: {preset_path}')
else:
raise FileNotFoundError
except Exception as e:
print(f'Load preset [{preset_path}] failed')
print(e)


def get_dir_or_set_default(key, default_value, as_array=False, make_directory=False):
def get_dir_or_set_default(key, default_value):
global config_dict, visited_keys, always_save_keys

if key not in visited_keys:
@@ -149,44 +104,20 @@ def get_dir_or_set_default(key, default_value, as_array=False, make_directory=Fa
if key not in always_save_keys:
always_save_keys.append(key)

v = os.getenv(key)
if v is not None:
print(f"Environment: {key} = {v}")
config_dict[key] = v
else:
v = config_dict.get(key, None)

if isinstance(v, str):
if make_directory:
makedirs_with_log(v)
if os.path.exists(v) and os.path.isdir(v):
return v if not as_array else [v]
elif isinstance(v, list):
if make_directory:
for d in v:
makedirs_with_log(d)
if all([os.path.exists(d) and os.path.isdir(d) for d in v]):
return v

if v is not None:
print(f'Failed to load config key: {json.dumps({key:v})} is invalid or does not exist; will use {json.dumps({key:default_value})} instead.')
if isinstance(default_value, list):
dp = []
for path in default_value:
abs_path = os.path.abspath(os.path.join(os.path.dirname(__file__), path))
dp.append(abs_path)
os.makedirs(abs_path, exist_ok=True)
v = config_dict.get(key, None)
if isinstance(v, str) and os.path.exists(v) and os.path.isdir(v):
return v
else:
if v is not None:
print(f'Failed to load config key: {json.dumps({key:v})} is invalid or does not exist; will use {json.dumps({key:default_value})} instead.')
dp = os.path.abspath(os.path.join(os.path.dirname(__file__), default_value))
os.makedirs(dp, exist_ok=True)
if as_array:
dp = [dp]
config_dict[key] = dp
return dp
config_dict[key] = dp
return dp


paths_checkpoints = get_dir_or_set_default('path_checkpoints', ['../models/checkpoints/'], True)
paths_loras = get_dir_or_set_default('path_loras', ['../models/loras/'], True)
path_checkpoints = get_dir_or_set_default('path_checkpoints', '../models/checkpoints/')
path_loras = get_dir_or_set_default('path_loras', '../models/loras/')
path_embeddings = get_dir_or_set_default('path_embeddings', '../models/embeddings/')
path_vae_approx = get_dir_or_set_default('path_vae_approx', '../models/vae_approx/')
path_upscale_models = get_dir_or_set_default('path_upscale_models', '../models/upscale_models/')
@@ -194,8 +125,7 @@ path_inpaint = get_dir_or_set_default('path_inpaint', '../models/inpaint/')
path_controlnet = get_dir_or_set_default('path_controlnet', '../models/controlnet/')
path_clip_vision = get_dir_or_set_default('path_clip_vision', '../models/clip_vision/')
path_fooocus_expansion = get_dir_or_set_default('path_fooocus_expansion', '../models/prompt_expansion/fooocus_expansion')
path_wildcards = get_dir_or_set_default('path_wildcards', '../wildcards/')
path_outputs = get_path_output()
path_outputs = get_dir_or_set_default('path_outputs', '../outputs/')


def get_config_item_or_set_default(key, default_value, validator, disable_empty_as_none=False):
@@ -204,11 +134,6 @@ def get_config_item_or_set_default(key, default_value, validator, disable_empty_
if key not in visited_keys:
visited_keys.append(key)

v = os.getenv(key)
if v is not None:
print(f"Environment: {key} = {v}")
config_dict[key] = v

if key not in config_dict:
config_dict[key] = default_value
return default_value
@@ -226,109 +151,50 @@ def get_config_item_or_set_default(key, default_value, validator, disable_empty_
return default_value


def init_temp_path(path: str | None, default_path: str) -> str:
if args_manager.args.temp_path:
path = args_manager.args.temp_path

if path != '' and path != default_path:
try:
if not os.path.isabs(path):
path = os.path.abspath(path)
os.makedirs(path, exist_ok=True)
print(f'Using temp path {path}')
return path
except Exception as e:
print(f'Could not create temp path {path}. Reason: {e}')
print(f'Using default temp path {default_path} instead.')

os.makedirs(default_path, exist_ok=True)
return default_path


default_temp_path = os.path.join(tempfile.gettempdir(), 'fooocus')
temp_path = init_temp_path(get_config_item_or_set_default(
key='temp_path',
default_value=default_temp_path,
validator=lambda x: isinstance(x, str),
), default_temp_path)
temp_path_cleanup_on_launch = get_config_item_or_set_default(
key='temp_path_cleanup_on_launch',
default_value=True,
validator=lambda x: isinstance(x, bool)
)
default_base_model_name = default_model = get_config_item_or_set_default(
default_base_model_name = get_config_item_or_set_default(
key='default_model',
default_value='model.safetensors',
default_value='juggernautXL_version6Rundiffusion.safetensors',
validator=lambda x: isinstance(x, str)
)
previous_default_models = get_config_item_or_set_default(
key='previous_default_models',
default_value=[],
validator=lambda x: isinstance(x, list) and all(isinstance(k, str) for k in x)
)
default_refiner_model_name = default_refiner = get_config_item_or_set_default(
default_refiner_model_name = get_config_item_or_set_default(
key='default_refiner',
default_value='None',
validator=lambda x: isinstance(x, str)
)
default_refiner_switch = get_config_item_or_set_default(
key='default_refiner_switch',
default_value=0.8,
default_value=0.5,
validator=lambda x: isinstance(x, numbers.Number) and 0 <= x <= 1
)
default_loras_min_weight = get_config_item_or_set_default(
key='default_loras_min_weight',
default_value=-2,
validator=lambda x: isinstance(x, numbers.Number) and -10 <= x <= 10
)
default_loras_max_weight = get_config_item_or_set_default(
key='default_loras_max_weight',
default_value=2,
validator=lambda x: isinstance(x, numbers.Number) and -10 <= x <= 10
)
default_loras = get_config_item_or_set_default(
key='default_loras',
default_value=[
[
True,
"sd_xl_offset_example-lora_1.0.safetensors",
0.1
],
[
"None",
1.0
],
[
True,
"None",
1.0
],
[
True,
"None",
1.0
],
[
True,
"None",
1.0
],
[
True,
"None",
1.0
]
],
validator=lambda x: isinstance(x, list) and all(
len(y) == 3 and isinstance(y[0], bool) and isinstance(y[1], str) and isinstance(y[2], numbers.Number)
or len(y) == 2 and isinstance(y[0], str) and isinstance(y[1], numbers.Number)
for y in x)
)
default_loras = [(y[0], y[1], y[2]) if len(y) == 3 else (True, y[0], y[1]) for y in default_loras]
default_max_lora_number = get_config_item_or_set_default(
key='default_max_lora_number',
default_value=len(default_loras) if isinstance(default_loras, list) and len(default_loras) > 0 else 5,
validator=lambda x: isinstance(x, int) and x >= 1
validator=lambda x: isinstance(x, list) and all(len(y) == 2 and isinstance(y[0], str) and isinstance(y[1], numbers.Number) for y in x)
)
default_cfg_scale = get_config_item_or_set_default(
key='default_cfg_scale',
default_value=7.0,
default_value=4.0,
validator=lambda x: isinstance(x, numbers.Number)
)
default_sample_sharpness = get_config_item_or_set_default(
@@ -369,37 +235,31 @@ default_prompt = get_config_item_or_set_default(
)
default_performance = get_config_item_or_set_default(
key='default_performance',
default_value=Performance.SPEED.value,
validator=lambda x: x in Performance.list()
default_value='Speed',
validator=lambda x: x in modules.flags.performance_selections
)
default_advanced_checkbox = get_config_item_or_set_default(
key='default_advanced_checkbox',
default_value=False,
validator=lambda x: isinstance(x, bool)
)
default_max_image_number = get_config_item_or_set_default(
key='default_max_image_number',
default_value=32,
validator=lambda x: isinstance(x, int) and x >= 1
)
default_output_format = get_config_item_or_set_default(
key='default_output_format',
default_value='png',
validator=lambda x: x in OutputFormat.list()
)
default_image_number = get_config_item_or_set_default(
key='default_image_number',
default_value=2,
validator=lambda x: isinstance(x, int) and 1 <= x <= default_max_image_number
validator=lambda x: isinstance(x, int) and 1 <= x <= 32
)
checkpoint_downloads = get_config_item_or_set_default(
key='checkpoint_downloads',
default_value={},
default_value={
"juggernautXL_version6Rundiffusion.safetensors": "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_version6Rundiffusion.safetensors"
},
validator=lambda x: isinstance(x, dict) and all(isinstance(k, str) and isinstance(v, str) for k, v in x.items())
)
lora_downloads = get_config_item_or_set_default(
key='lora_downloads',
default_value={},
default_value={
"sd_xl_offset_example-lora_1.0.safetensors": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors"
},
validator=lambda x: isinstance(x, dict) and all(isinstance(k, str) and isinstance(v, str) for k, v in x.items())
)
embeddings_downloads = get_config_item_or_set_default(
@@ -450,51 +310,30 @@ example_inpaint_prompts = get_config_item_or_set_default(
],
validator=lambda x: isinstance(x, list) and all(isinstance(v, str) for v in x)
)
default_save_metadata_to_images = get_config_item_or_set_default(
key='default_save_metadata_to_images',
default_value=False,
validator=lambda x: isinstance(x, bool)
)
default_metadata_scheme = get_config_item_or_set_default(
key='default_metadata_scheme',
default_value=MetadataScheme.FOOOCUS.value,
validator=lambda x: x in [y[1] for y in modules.flags.metadata_scheme if y[1] == x]
)
metadata_created_by = get_config_item_or_set_default(
key='metadata_created_by',
default_value='',
validator=lambda x: isinstance(x, str)
)

example_inpaint_prompts = [[x] for x in example_inpaint_prompts]

config_dict["default_loras"] = default_loras = default_loras[:default_max_lora_number] + [[True, 'None', 1.0] for _ in range(default_max_lora_number - len(default_loras))]
config_dict["default_loras"] = default_loras = default_loras[:5] + [['None', 1.0] for _ in range(5 - len(default_loras))]

possible_preset_keys = [
"default_model",
"default_refiner",
"default_refiner_switch",
"default_loras",
"default_cfg_scale",
"default_sample_sharpness",
"default_sampler",
"default_scheduler",
"default_performance",
"default_prompt",
"default_prompt_negative",
"default_styles",
"default_aspect_ratio",
"checkpoint_downloads",
"embeddings_downloads",
"lora_downloads",
]

# mapping config to meta parameter
possible_preset_keys = {
"default_model": "base_model",
"default_refiner": "refiner_model",
"default_refiner_switch": "refiner_switch",
"previous_default_models": "previous_default_models",
"default_loras_min_weight": "default_loras_min_weight",
"default_loras_max_weight": "default_loras_max_weight",
"default_loras": "<processed>",
"default_cfg_scale": "guidance_scale",
"default_sample_sharpness": "sharpness",
"default_sampler": "sampler",
"default_scheduler": "scheduler",
"default_overwrite_step": "steps",
"default_performance": "performance",
"default_image_number": "image_number",
"default_prompt": "prompt",
"default_prompt_negative": "negative_prompt",
"default_styles": "styles",
"default_aspect_ratio": "resolution",
"default_save_metadata_to_images": "default_save_metadata_to_images",
"checkpoint_downloads": "checkpoint_downloads",
"embeddings_downloads": "embeddings_downloads",
"lora_downloads": "lora_downloads"
}

REWRITE_PRESET = False

@@ -533,30 +372,21 @@ with open(config_example_path, "w", encoding="utf-8") as json_file:
'and there is no "," before the last "}". \n\n\n')
json.dump({k: config_dict[k] for k in visited_keys}, json_file, indent=4)


os.makedirs(path_outputs, exist_ok=True)

model_filenames = []
lora_filenames = []
wildcard_filenames = []

sdxl_lcm_lora = 'sdxl_lcm_lora.safetensors'
sdxl_lightning_lora = 'sdxl_lightning_4step_lora.safetensors'
loras_metadata_remove = [sdxl_lcm_lora, sdxl_lightning_lora]


def get_model_filenames(folder_paths, extensions=None, name_filter=None):
if extensions is None:
extensions = ['.pth', '.ckpt', '.bin', '.safetensors', '.fooocus.patch']
files = []
for folder in folder_paths:
files += get_files_from_folder(folder, extensions, name_filter)
return files
def get_model_filenames(folder_path, name_filter=None):
return get_files_from_folder(folder_path, ['.pth', '.ckpt', '.bin', '.safetensors', '.fooocus.patch'], name_filter)


def update_files():
global model_filenames, lora_filenames, wildcard_filenames, available_presets
model_filenames = get_model_filenames(paths_checkpoints)
lora_filenames = get_model_filenames(paths_loras)
wildcard_filenames = get_files_from_folder(path_wildcards, ['.txt'])
available_presets = get_presets()
def update_all_model_names():
global model_filenames, lora_filenames
model_filenames = get_model_filenames(path_checkpoints)
lora_filenames = get_model_filenames(path_loras)
return


@@ -601,18 +431,10 @@ def downloading_inpaint_models(v):
def downloading_sdxl_lcm_lora():
load_file_from_url(
url='https://huggingface.co/lllyasviel/misc/resolve/main/sdxl_lcm_lora.safetensors',
model_dir=paths_loras[0],
file_name=sdxl_lcm_lora
model_dir=path_loras,
file_name='sdxl_lcm_lora.safetensors'
)
return sdxl_lcm_lora

def downloading_sdxl_lightning_lora():
load_file_from_url(
url='https://huggingface.co/ByteDance/SDXL-Lightning/resolve/main/sdxl_lightning_4step_lora.safetensors',
model_dir=paths_loras[0],
file_name=sdxl_lightning_lora
)
return sdxl_lightning_lora
return 'sdxl_lcm_lora.safetensors'


def downloading_controlnet_canny():
@@ -680,4 +502,4 @@ def downloading_upscale_model():
return os.path.join(path_upscale_models, 'fooocus_upscaler_s409985e5.bin')


update_files()
update_all_model_names()

@@ -1,3 +1,8 @@
from modules.patch import patch_all

patch_all()


import os
import einops
import torch
@@ -11,6 +16,7 @@ import ldm_patched.modules.controlnet
import modules.sample_hijack
import ldm_patched.modules.samplers
import ldm_patched.modules.latent_formats
import modules.advanced_parameters

from ldm_patched.modules.sd import load_checkpoint_guess_config
from ldm_patched.contrib.external import VAEDecode, EmptyLatentImage, VAEEncode, VAEEncodeTiled, VAEDecodeTiled, \
@@ -18,7 +24,6 @@ from ldm_patched.contrib.external import VAEDecode, EmptyLatentImage, VAEEncode,
from ldm_patched.contrib.external_freelunch import FreeU_V2
from ldm_patched.modules.sample import prepare_mask
from modules.lora import match_lora
from modules.util import get_file_from_folder_list
from ldm_patched.modules.lora import model_lora_keys_unet, model_lora_keys_clip
from modules.config import path_embeddings
from ldm_patched.contrib.external_model_advanced import ModelSamplingDiscrete
@@ -73,14 +78,14 @@ class StableDiffusionModel:

loras_to_load = []

for filename, weight in loras:
if filename == 'None':
for name, weight in loras:
if name == 'None':
continue

if os.path.exists(filename):
lora_filename = filename
if os.path.exists(name):
lora_filename = name
else:
lora_filename = get_file_from_folder_list(filename, modules.config.paths_loras)
lora_filename = os.path.join(modules.config.path_loras, name)

if not os.path.exists(lora_filename):
print(f'Lora file not found: {lora_filename}')
@@ -263,7 +268,7 @@ def get_previewer(model):
def ksampler(model, positive, negative, latent, seed=None, steps=30, cfg=7.0, sampler_name='dpmpp_2m_sde_gpu',
scheduler='karras', denoise=1.0, disable_noise=False, start_step=None, last_step=None,
force_full_denoise=False, callback_function=None, refiner=None, refiner_switch=-1,
previewer_start=None, previewer_end=None, sigmas=None, noise_mean=None, disable_preview=False):
previewer_start=None, previewer_end=None, sigmas=None, noise_mean=None):

if sigmas is not None:
sigmas = sigmas.clone().to(ldm_patched.modules.model_management.get_torch_device())
@@ -294,7 +299,7 @@ def ksampler(model, positive, negative, latent, seed=None, steps=30, cfg=7.0, sa
def callback(step, x0, x, total_steps):
ldm_patched.modules.model_management.throw_exception_if_processing_interrupted()
y = None
if previewer is not None and not disable_preview:
if previewer is not None and not modules.advanced_parameters.disable_preview:
y = previewer(x0, previewer_start + step, previewer_end)
if callback_function is not None:
callback_function(previewer_start + step, x0, x, previewer_end, y)

@@ -11,7 +11,6 @@ from extras.expansion import FooocusExpansion

from ldm_patched.modules.model_base import SDXL, SDXLRefiner
from modules.sample_hijack import clip_separate
from modules.util import get_file_from_folder_list, get_enabled_loras


model_base = core.StableDiffusionModel()
@@ -61,7 +60,7 @@ def assert_model_integrity():
def refresh_base_model(name):
global model_base

filename = get_file_from_folder_list(name, modules.config.paths_checkpoints)
filename = os.path.abspath(os.path.realpath(os.path.join(modules.config.path_checkpoints, name)))

if model_base.filename == filename:
return
@@ -77,7 +76,7 @@ def refresh_base_model(name):
def refresh_refiner_model(name):
global model_refiner

filename = get_file_from_folder_list(name, modules.config.paths_checkpoints)
filename = os.path.abspath(os.path.realpath(os.path.join(modules.config.path_checkpoints, name)))

if model_refiner.filename == filename:
return
@@ -254,7 +253,7 @@ def refresh_everything(refiner_model_name, base_model_name, loras,
refresh_everything(
refiner_model_name=modules.config.default_refiner_model_name,
base_model_name=modules.config.default_base_model_name,
loras=get_enabled_loras(modules.config.default_loras)
loras=modules.config.default_loras
)


@@ -316,7 +315,7 @@ def get_candidate_vae(steps, switch, denoise=1.0, refiner_swap_method='joint'):

@torch.no_grad()
@torch.inference_mode()
def process_diffusion(positive_cond, negative_cond, steps, switch, width, height, image_seed, callback, sampler_name, scheduler_name, latent=None, denoise=1.0, tiled=False, cfg_scale=7.0, refiner_swap_method='joint', disable_preview=False):
def process_diffusion(positive_cond, negative_cond, steps, switch, width, height, image_seed, callback, sampler_name, scheduler_name, latent=None, denoise=1.0, tiled=False, cfg_scale=7.0, refiner_swap_method='joint'):
target_unet, target_vae, target_refiner_unet, target_refiner_vae, target_clip \
= final_unet, final_vae, final_refiner_unet, final_refiner_vae, final_clip

@@ -375,7 +374,6 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
refiner_switch=switch,
previewer_start=0,
previewer_end=steps,
disable_preview=disable_preview
)
decoded_latent = core.decode_vae(vae=target_vae, latent_image=sampled_latent, tiled=tiled)

@@ -394,7 +392,6 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
scheduler=scheduler_name,
previewer_start=0,
previewer_end=steps,
disable_preview=disable_preview
)
print('Refiner swapped by changing ksampler. Noise preserved.')

@@ -417,7 +414,6 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
scheduler=scheduler_name,
previewer_start=switch,
previewer_end=steps,
disable_preview=disable_preview
)

target_model = target_refiner_vae
@@ -426,7 +422,7 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
decoded_latent = core.decode_vae(vae=target_model, latent_image=sampled_latent, tiled=tiled)

if refiner_swap_method == 'vae':
modules.patch.patch_settings[os.getpid()].eps_record = 'vae'
modules.patch.eps_record = 'vae'

if modules.inpaint_worker.current_task is not None:
modules.inpaint_worker.current_task.unswap()
@@ -444,8 +440,7 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
sampler_name=sampler_name,
scheduler=scheduler_name,
previewer_start=0,
previewer_end=steps,
disable_preview=disable_preview
previewer_end=steps
)
print('Fooocus VAE-based swap.')

@@ -464,7 +459,7 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
denoise=denoise)[switch:] * k_sigmas
len_sigmas = len(sigmas) - 1

noise_mean = torch.mean(modules.patch.patch_settings[os.getpid()].eps_record, dim=1, keepdim=True)
noise_mean = torch.mean(modules.patch.eps_record, dim=1, keepdim=True)

if modules.inpaint_worker.current_task is not None:
modules.inpaint_worker.current_task.swap()
@@ -484,8 +479,7 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
previewer_start=switch,
previewer_end=steps,
sigmas=sigmas,
noise_mean=noise_mean,
disable_preview=disable_preview
noise_mean=noise_mean
)

target_model = target_refiner_vae
@@ -494,5 +488,5 @@ def process_diffusion(positive_cond, negative_cond, steps, switch, width, height
decoded_latent = core.decode_vae(vae=target_model, latent_image=sampled_latent, tiled=tiled)

images = core.pytorch_to_numpy(decoded_latent)
modules.patch.patch_settings[os.getpid()].eps_record = None
modules.patch.eps_record = None
return images

107
modules/flags.py
@@ -1,5 +1,3 @@
from enum import IntEnum, Enum

disabled = 'Disabled'
enabled = 'Enabled'
subtle_variation = 'Vary (Subtle)'
@@ -12,49 +10,16 @@ uov_list = [
disabled, subtle_variation, strong_variation, upscale_15, upscale_2, upscale_fast
]

CIVITAI_NO_KARRAS = ["euler", "euler_ancestral", "heun", "dpm_fast", "dpm_adaptive", "ddim", "uni_pc"]

# fooocus: a1111 (Civitai)
KSAMPLER = {
"euler": "Euler",
"euler_ancestral": "Euler a",
"heun": "Heun",
"heunpp2": "",
"dpm_2": "DPM2",
"dpm_2_ancestral": "DPM2 a",
"lms": "LMS",
"dpm_fast": "DPM fast",
"dpm_adaptive": "DPM adaptive",
"dpmpp_2s_ancestral": "DPM++ 2S a",
"dpmpp_sde": "DPM++ SDE",
"dpmpp_sde_gpu": "DPM++ SDE",
"dpmpp_2m": "DPM++ 2M",
"dpmpp_2m_sde": "DPM++ 2M SDE",
"dpmpp_2m_sde_gpu": "DPM++ 2M SDE",
"dpmpp_3m_sde": "",
"dpmpp_3m_sde_gpu": "",
"ddpm": "",
"lcm": "LCM"
}

SAMPLER_EXTRA = {
"ddim": "DDIM",
"uni_pc": "UniPC",
"uni_pc_bh2": ""
}

SAMPLERS = KSAMPLER | SAMPLER_EXTRA

KSAMPLER_NAMES = list(KSAMPLER.keys())
KSAMPLER_NAMES = ["euler", "euler_ancestral", "heun", "heunpp2","dpm_2", "dpm_2_ancestral",
"lms", "dpm_fast", "dpm_adaptive", "dpmpp_2s_ancestral", "dpmpp_sde", "dpmpp_sde_gpu",
"dpmpp_2m", "dpmpp_2m_sde", "dpmpp_2m_sde_gpu", "dpmpp_3m_sde", "dpmpp_3m_sde_gpu", "ddpm", "lcm"]

SCHEDULER_NAMES = ["normal", "karras", "exponential", "sgm_uniform", "simple", "ddim_uniform", "lcm", "turbo"]
SAMPLER_NAMES = KSAMPLER_NAMES + list(SAMPLER_EXTRA.keys())
SAMPLER_NAMES = KSAMPLER_NAMES + ["ddim", "uni_pc", "uni_pc_bh2"]

sampler_list = SAMPLER_NAMES
scheduler_list = SCHEDULER_NAMES

refiner_swap_method = 'joint'

cn_ip = "ImagePrompt"
cn_ip_face = "FaceSwap"
cn_canny = "PyraCanny"
@@ -67,9 +32,9 @@ default_parameters = {
cn_ip: (0.5, 0.6), cn_ip_face: (0.9, 0.75), cn_canny: (0.5, 1.0), cn_cpds: (0.5, 1.0)
} # stop, weight

output_formats = ['png', 'jpeg', 'webp']

inpaint_engine_versions = ['None', 'v1', 'v2.5', 'v2.6']
performance_selections = ['Speed', 'Quality', 'Extreme Speed']

inpaint_option_default = 'Inpaint or Outpaint (default)'
inpaint_option_detail = 'Improve Detail (face, hand, eyes, etc.)'
inpaint_option_modify = 'Modify Content (add objects, change background, etc.)'
@@ -77,63 +42,3 @@ inpaint_options = [inpaint_option_default, inpaint_option_detail, inpaint_option

desc_type_photo = 'Photograph'
desc_type_anime = 'Art/Anime'


class MetadataScheme(Enum):
FOOOCUS = 'fooocus'
A1111 = 'a1111'


metadata_scheme = [
(f'{MetadataScheme.FOOOCUS.value} (json)', MetadataScheme.FOOOCUS.value),
(f'{MetadataScheme.A1111.value} (plain text)', MetadataScheme.A1111.value),
]

controlnet_image_count = 4


class OutputFormat(Enum):
PNG = 'png'
JPEG = 'jpeg'
WEBP = 'webp'

@classmethod
def list(cls) -> list:
return list(map(lambda c: c.value, cls))


class Steps(IntEnum):
QUALITY = 60
SPEED = 30
EXTREME_SPEED = 8
LIGHTNING = 4


class StepsUOV(IntEnum):
QUALITY = 36
SPEED = 18
EXTREME_SPEED = 8
LIGHTNING = 4


class Performance(Enum):
QUALITY = 'Quality'
SPEED = 'Speed'
EXTREME_SPEED = 'Extreme Speed'
LIGHTNING = 'Lightning'

@classmethod
def list(cls) -> list:
return list(map(lambda c: c.value, cls))

@classmethod
def has_restricted_features(cls, x) -> bool:
if isinstance(x, Performance):
x = x.value
return x in [cls.EXTREME_SPEED.value, cls.LIGHTNING.value]

def steps(self) -> int | None:
return Steps[self.name].value if Steps[self.name] else None

def steps_uov(self) -> int | None:
return StepsUOV[self.name].value if Steps[self.name] else None

@@ -17,7 +17,7 @@ from gradio_client.documentation import document, set_documentation_group
from gradio_client.serializing import ImgSerializable
from PIL import Image as _Image # using _ to minimize namespace pollution

from gradio import processing_utils, utils, Error
from gradio import processing_utils, utils
from gradio.components.base import IOComponent, _Keywords, Block
from gradio.deprecation import warn_style_method_deprecation
from gradio.events import (
@@ -275,10 +275,7 @@ class Image(
x, mask = x["image"], x["mask"]

assert isinstance(x, str)
try:
im = processing_utils.decode_base64_to_image(x)
except PIL.UnidentifiedImageError:
raise Error("Unsupported image type in input")
im = processing_utils.decode_base64_to_image(x)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
im = im.convert(self.image_mode)

115
modules/html.py
@ -1,3 +1,118 @@
|
||||
css = '''
|
||||
.loader-container {
|
||||
display: flex; /* Use flex to align items horizontally */
|
||||
align-items: center; /* Center items vertically within the container */
|
||||
white-space: nowrap; /* Prevent line breaks within the container */
|
||||
}
|
||||
|
||||
.loader {
|
||||
border: 8px solid #f3f3f3; /* Light grey */
|
||||
border-top: 8px solid #3498db; /* Blue */
|
||||
border-radius: 50%;
|
||||
width: 30px;
|
||||
height: 30px;
|
||||
animation: spin 2s linear infinite;
|
||||
}
|
||||
|
||||
@keyframes spin {
|
||||
0% { transform: rotate(0deg); }
|
||||
100% { transform: rotate(360deg); }
|
||||
}
|
||||
|
||||
/* Style the progress bar */
|
||||
progress {
|
||||
appearance: none; /* Remove default styling */
|
||||
height: 20px; /* Set the height of the progress bar */
|
||||
border-radius: 5px; /* Round the corners of the progress bar */
|
||||
background-color: #f3f3f3; /* Light grey background */
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
/* Style the progress bar container */
|
||||
.progress-container {
|
||||
margin-left: 20px;
|
||||
margin-right: 20px;
|
||||
flex-grow: 1; /* Allow the progress container to take up remaining space */
|
||||
}
|
||||
|
||||
/* Set the color of the progress bar fill */
|
||||
progress::-webkit-progress-value {
|
||||
background-color: #3498db; /* Blue color for the fill */
|
||||
}
|
||||
|
||||
progress::-moz-progress-bar {
|
||||
background-color: #3498db; /* Blue color for the fill in Firefox */
|
||||
}
|
||||
|
||||
/* Style the text on the progress bar */
|
||||
progress::after {
|
||||
content: attr(value '%'); /* Display the progress value followed by '%' */
|
||||
position: absolute;
|
||||
top: 50%;
|
||||
left: 50%;
|
||||
transform: translate(-50%, -50%);
|
||||
color: white; /* Set text color */
|
||||
font-size: 14px; /* Set font size */
|
||||
}
|
||||
|
||||
/* Style other texts */
|
||||
.loader-container > span {
|
||||
margin-left: 5px; /* Add spacing between the progress bar and the text */
|
||||
}
|
||||
|
||||
.progress-bar > .generating {
|
||||
display: none !important;
|
||||
}
|
||||
|
||||
.progress-bar{
|
||||
height: 30px !important;
|
||||
}
|
||||
|
||||
.type_row{
|
||||
height: 80px !important;
|
||||
}
|
||||
|
||||
.type_row_half{
|
||||
height: 32px !important;
|
||||
}
|
||||
|
||||
.scroll-hide{
|
||||
resize: none !important;
|
||||
}
|
||||
|
||||
.refresh_button{
|
||||
border: none !important;
|
||||
background: none !important;
|
||||
font-size: none !important;
|
||||
box-shadow: none !important;
|
||||
}
|
||||
|
||||
.advanced_check_row{
|
||||
width: 250px !important;
|
||||
}
|
||||
|
||||
.min_check{
|
||||
min-width: min(1px, 100%) !important;
|
||||
}
|
||||
|
||||
.resizable_area {
|
||||
resize: vertical;
|
||||
overflow: auto !important;
|
||||
}
|
||||
|
||||
.aspect_ratios label {
|
||||
width: 140px !important;
|
||||
}
|
||||
|
||||
.aspect_ratios label span {
|
||||
white-space: nowrap !important;
|
||||
}
|
||||
|
||||
.aspect_ratios label input {
|
||||
margin-left: -5px !important;
|
||||
}
|
||||
|
||||
'''
|
||||
progress_html = '''
|
||||
<div class="loader-container">
|
||||
<div class="loader"></div>
|
||||
|
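The hunk is truncated after the spinner div. For context, a template like this is typically filled by plain placeholder substitution; the helper below is an illustrative sketch, not necessarily the exact Fooocus code:

```python
progress_html = '''
<div class="loader-container">
  <div class="loader"></div>
  <progress value="*number*" max="100"></progress>
  <span>*text*</span>
</div>
'''

def make_progress_html(number: int, text: str) -> str:
    # Substitute the current progress value and caption into the template.
    return progress_html.replace('*number*', str(number)).replace('*text*', text)
```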
@@ -4,7 +4,6 @@ import numpy as np
from PIL import Image, ImageFilter
from modules.util import resample_image, set_image_shape_ceil, get_image_shape_ceil
from modules.upscaler import perform_upscale
import cv2


inpaint_head_model = None
@@ -29,25 +28,19 @@ def box_blur(x, k):
    return np.array(x)


def max_filter_opencv(x, ksize=3):
    # Use OpenCV maximum filter
    # Make sure the input type is int16
    return cv2.dilate(x, np.ones((ksize, ksize), dtype=np.int16))
def max33(x):
    x = Image.fromarray(x)
    x = x.filter(ImageFilter.MaxFilter(3))
    return np.array(x)


def morphological_open(x):
    # Convert array to int16 type via threshold operation
    x_int16 = np.zeros_like(x, dtype=np.int16)
    x_int16[x > 127] = 256

    for i in range(32):
        # Use int16 type to avoid overflow
        maxed = max_filter_opencv(x_int16, ksize=3) - 8
        x_int16 = np.maximum(maxed, x_int16)

    # Clip negative values to 0 and convert back to uint8 type
    x_uint8 = np.clip(x_int16, 0, 255).astype(np.uint8)
    return x_uint8
    x_int32 = np.zeros_like(x).astype(np.int32)
    x_int32[x > 127] = 256
    for _ in range(32):
        maxed = max33(x_int32) - 8
        x_int32 = np.maximum(maxed, x_int32)
    return x_int32.clip(0, 255).astype(np.uint8)


def up255(x, t=0):
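Both variants of morphological_open above implement the same idea: repeatedly dilate a thresholded mask while subtracting a constant, so the mask grows outward with a soft falloff of 8 grey levels per pass. A small self-contained sketch of that mechanic (names are illustrative):

```python
import numpy as np
import cv2

def soft_grow(mask_u8: np.ndarray, iterations: int = 32, decay: int = 8) -> np.ndarray:
    # Threshold into a wide-range int16 image so the subtraction cannot wrap.
    x = np.zeros_like(mask_u8, dtype=np.int16)
    x[mask_u8 > 127] = 256
    kernel = np.ones((3, 3), dtype=np.uint8)
    for _ in range(iterations):
        # Each pass: 3x3 max filter (dilation), then fade by `decay` levels.
        grown = cv2.dilate(x, kernel) - decay
        x = np.maximum(grown, x)
    return np.clip(x, 0, 255).astype(np.uint8)

demo = np.zeros((9, 9), dtype=np.uint8)
demo[4, 4] = 255
print(soft_grow(demo))  # values ramp down from 255 away from the seed pixel
```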
@@ -1,19 +1,16 @@
import os
import importlib
import importlib.util
import shutil
import subprocess
import sys
import re
import logging
import importlib.metadata
import packaging.version
from packaging.requirements import Requirement


logging.getLogger("torch.distributed.nn").setLevel(logging.ERROR)  # sshh...
logging.getLogger("xformers").addFilter(lambda record: 'A matching Triton is not available' not in record.getMessage())

re_requirement = re.compile(r"\s*([-\w]+)\s*(?:==\s*([-+.\w]+))?\s*")
re_requirement = re.compile(r"\s*([-_a-zA-Z0-9]+)\s*(?:==\s*([-+_.a-zA-Z0-9]+))?\s*")

python = sys.executable
default_command_live = (os.environ.get('LAUNCH_LIVE_OUTPUT') == "1")
@@ -76,42 +73,35 @@ def run_pip(command, desc=None, live=default_command_live):


def requirements_met(requirements_file):
    """
    Does a simple parse of a requirements.txt file to determine if all requirements in it
    are already installed. Returns True if so, False if not installed or parsing fails.
    """

    import importlib.metadata
    import packaging.version

    with open(requirements_file, "r", encoding="utf8") as file:
        for line in file:
            line = line.strip()
            if line == "" or line.startswith('#'):
            if line.strip() == "":
                continue

            requirement = Requirement(line)
            package = requirement.name
            m = re.match(re_requirement, line)
            if m is None:
                return False

            package = m.group(1).strip()
            version_required = (m.group(2) or "").strip()

            if version_required == "":
                continue

            try:
                version_installed = importlib.metadata.version(package)
                installed_version = packaging.version.parse(version_installed)
            except Exception:
                return False

            # Check if the installed version satisfies the requirement
            if installed_version not in requirement.specifier:
                print(f"Version mismatch for {package}: Installed version {version_installed} does not meet requirement {requirement}")
                return False
            except Exception as e:
                print(f"Error checking version for {package}: {e}")
            if packaging.version.parse(version_required) != packaging.version.parse(version_installed):
                return False

    return True


def delete_folder_content(folder, prefix=None):
    result = True

    for filename in os.listdir(folder):
        file_path = os.path.join(folder, filename)
        try:
            if os.path.isfile(file_path) or os.path.islink(file_path):
                os.unlink(file_path)
            elif os.path.isdir(file_path):
                shutil.rmtree(file_path)
        except Exception as e:
            print(f'{prefix}Failed to delete {file_path}. Reason: {e}')
            result = False

    return result
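The newer branch of requirements_met delegates both parsing and comparison to the packaging library, which understands full PEP 508 specifiers (==, >=, extras) rather than only exact pins. A minimal sketch of that check:

```python
import importlib.metadata
import packaging.version
from packaging.requirements import Requirement

def is_met(line: str) -> bool:
    # e.g. line = "numpy>=1.24" or "torch==2.1.0"
    requirement = Requirement(line)
    try:
        installed = packaging.version.parse(importlib.metadata.version(requirement.name))
    except importlib.metadata.PackageNotFoundError:
        return False
    # An empty specifier set accepts any installed version.
    return installed in requirement.specifier

print(is_met("packaging>=20.0"))  # True wherever packaging itself is installed
```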
@@ -1,609 +0,0 @@
import json
import re
from abc import ABC, abstractmethod
from pathlib import Path

import gradio as gr
from PIL import Image

import fooocus_version
import modules.config
import modules.sdxl_styles
from modules.flags import MetadataScheme, Performance, Steps
from modules.flags import SAMPLERS, CIVITAI_NO_KARRAS
from modules.util import quote, unquote, extract_styles_from_prompt, is_json, get_file_from_folder_list, sha256

re_param_code = r'\s*(\w[\w \-/]+):\s*("(?:\\.|[^\\"])+"|[^,]*)(?:,|$)'
re_param = re.compile(re_param_code)
re_imagesize = re.compile(r"^(\d+)x(\d+)$")

hash_cache = {}


def load_parameter_button_click(raw_metadata: dict | str, is_generating: bool):
    loaded_parameter_dict = raw_metadata
    if isinstance(raw_metadata, str):
        loaded_parameter_dict = json.loads(raw_metadata)
    assert isinstance(loaded_parameter_dict, dict)

    results = [len(loaded_parameter_dict) > 0]

    get_image_number('image_number', 'Image Number', loaded_parameter_dict, results)
    get_str('prompt', 'Prompt', loaded_parameter_dict, results)
    get_str('negative_prompt', 'Negative Prompt', loaded_parameter_dict, results)
    get_list('styles', 'Styles', loaded_parameter_dict, results)
    get_str('performance', 'Performance', loaded_parameter_dict, results)
    get_steps('steps', 'Steps', loaded_parameter_dict, results)
    get_float('overwrite_switch', 'Overwrite Switch', loaded_parameter_dict, results)
    get_resolution('resolution', 'Resolution', loaded_parameter_dict, results)
    get_float('guidance_scale', 'Guidance Scale', loaded_parameter_dict, results)
    get_float('sharpness', 'Sharpness', loaded_parameter_dict, results)
    get_adm_guidance('adm_guidance', 'ADM Guidance', loaded_parameter_dict, results)
    get_str('refiner_swap_method', 'Refiner Swap Method', loaded_parameter_dict, results)
    get_float('adaptive_cfg', 'CFG Mimicking from TSNR', loaded_parameter_dict, results)
    get_str('base_model', 'Base Model', loaded_parameter_dict, results)
    get_str('refiner_model', 'Refiner Model', loaded_parameter_dict, results)
    get_float('refiner_switch', 'Refiner Switch', loaded_parameter_dict, results)
    get_str('sampler', 'Sampler', loaded_parameter_dict, results)
    get_str('scheduler', 'Scheduler', loaded_parameter_dict, results)
    get_seed('seed', 'Seed', loaded_parameter_dict, results)

    if is_generating:
        results.append(gr.update())
    else:
        results.append(gr.update(visible=True))

    results.append(gr.update(visible=False))

    get_freeu('freeu', 'FreeU', loaded_parameter_dict, results)

    for i in range(modules.config.default_max_lora_number):
        get_lora(f'lora_combined_{i + 1}', f'LoRA {i + 1}', loaded_parameter_dict, results)

    return results
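re_param above tokenises the `key: value` pairs on the last line of an A1111-style parameter block, accepting quoted values with escapes. A short illustration (the input line is invented for the demo):

```python
import re

re_param = re.compile(r'\s*(\w[\w \-/]+):\s*("(?:\\.|[^\\"])+"|[^,]*)(?:,|$)')

last_line = 'Steps: 30, Sampler: DPM++ 2M SDE, CFG scale: 4.0, Size: 1152x896'
print(re_param.findall(last_line))
# [('Steps', '30'), ('Sampler', 'DPM++ 2M SDE'), ('CFG scale', '4.0'), ('Size', '1152x896')]
```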
def get_str(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = source_dict.get(key, source_dict.get(fallback, default))
        assert isinstance(h, str)
        results.append(h)
    except:
        results.append(gr.update())


def get_list(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = source_dict.get(key, source_dict.get(fallback, default))
        h = eval(h)
        assert isinstance(h, list)
        results.append(h)
    except:
        results.append(gr.update())


def get_float(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = source_dict.get(key, source_dict.get(fallback, default))
        assert h is not None
        h = float(h)
        results.append(h)
    except:
        results.append(gr.update())


def get_image_number(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = source_dict.get(key, source_dict.get(fallback, default))
        assert h is not None
        h = int(h)
        h = min(h, modules.config.default_max_image_number)
        results.append(h)
    except:
        results.append(1)


def get_steps(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = source_dict.get(key, source_dict.get(fallback, default))
        assert h is not None
        h = int(h)
        # if not in steps or in steps and performance is not the same
        if h not in iter(Steps) or Steps(h).name.casefold() != source_dict.get('performance', '').replace(' ', '_').casefold():
            results.append(h)
            return
        results.append(-1)
    except:
        results.append(-1)


def get_resolution(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = source_dict.get(key, source_dict.get(fallback, default))
        width, height = eval(h)
        formatted = modules.config.add_ratio(f'{width}*{height}')
        if formatted in modules.config.available_aspect_ratios:
            results.append(formatted)
            results.append(-1)
            results.append(-1)
        else:
            results.append(gr.update())
            results.append(int(width))
            results.append(int(height))
    except:
        results.append(gr.update())
        results.append(gr.update())
        results.append(gr.update())


def get_seed(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = source_dict.get(key, source_dict.get(fallback, default))
        assert h is not None
        h = int(h)
        results.append(False)
        results.append(h)
    except:
        results.append(gr.update())
        results.append(gr.update())


def get_adm_guidance(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = source_dict.get(key, source_dict.get(fallback, default))
        p, n, e = eval(h)
        results.append(float(p))
        results.append(float(n))
        results.append(float(e))
    except:
        results.append(gr.update())
        results.append(gr.update())
        results.append(gr.update())


def get_freeu(key: str, fallback: str | None, source_dict: dict, results: list, default=None):
    try:
        h = source_dict.get(key, source_dict.get(fallback, default))
        b1, b2, s1, s2 = eval(h)
        results.append(True)
        results.append(float(b1))
        results.append(float(b2))
        results.append(float(s1))
        results.append(float(s2))
    except:
        results.append(False)
        results.append(gr.update())
        results.append(gr.update())
        results.append(gr.update())
        results.append(gr.update())


def get_lora(key: str, fallback: str | None, source_dict: dict, results: list):
    try:
        split_data = source_dict.get(key, source_dict.get(fallback)).split(' : ')
        enabled = True
        name = split_data[0]
        weight = split_data[1]

        if len(split_data) == 3:
            enabled = split_data[0] == 'True'
            name = split_data[1]
            weight = split_data[2]

        weight = float(weight)
        results.append(enabled)
        results.append(name)
        results.append(weight)
    except:
        results.append(True)
        results.append('None')
        results.append(1)


def get_sha256(filepath):
    global hash_cache
    if filepath not in hash_cache:
        # is_safetensors = os.path.splitext(filepath)[1].lower() == '.safetensors'
        hash_cache[filepath] = sha256(filepath)

    return hash_cache[filepath]
def parse_meta_from_preset(preset_content):
    assert isinstance(preset_content, dict)
    preset_prepared = {}
    items = preset_content

    for settings_key, meta_key in modules.config.possible_preset_keys.items():
        if settings_key == "default_loras":
            loras = getattr(modules.config, settings_key)
            if settings_key in items:
                loras = items[settings_key]
            for index, lora in enumerate(loras[:5]):
                preset_prepared[f'lora_combined_{index + 1}'] = ' : '.join(map(str, lora))
        elif settings_key == "default_aspect_ratio":
            if settings_key in items and items[settings_key] is not None:
                default_aspect_ratio = items[settings_key]
                width, height = default_aspect_ratio.split('*')
            else:
                default_aspect_ratio = getattr(modules.config, settings_key)
                width, height = default_aspect_ratio.split('×')
                height = height[:height.index(" ")]
            preset_prepared[meta_key] = (width, height)
        else:
            preset_prepared[meta_key] = items[settings_key] if settings_key in items and items[settings_key] is not None else getattr(modules.config, settings_key)

        if settings_key == "default_styles" or settings_key == "default_aspect_ratio":
            preset_prepared[meta_key] = str(preset_prepared[meta_key])

    return preset_prepared
class MetadataParser(ABC):
    def __init__(self):
        self.raw_prompt: str = ''
        self.full_prompt: str = ''
        self.raw_negative_prompt: str = ''
        self.full_negative_prompt: str = ''
        self.steps: int = 30
        self.base_model_name: str = ''
        self.base_model_hash: str = ''
        self.refiner_model_name: str = ''
        self.refiner_model_hash: str = ''
        self.loras: list = []

    @abstractmethod
    def get_scheme(self) -> MetadataScheme:
        raise NotImplementedError

    @abstractmethod
    def parse_json(self, metadata: dict | str) -> dict:
        raise NotImplementedError

    @abstractmethod
    def parse_string(self, metadata: dict) -> str:
        raise NotImplementedError

    def set_data(self, raw_prompt, full_prompt, raw_negative_prompt, full_negative_prompt, steps, base_model_name,
                 refiner_model_name, loras):
        self.raw_prompt = raw_prompt
        self.full_prompt = full_prompt
        self.raw_negative_prompt = raw_negative_prompt
        self.full_negative_prompt = full_negative_prompt
        self.steps = steps
        self.base_model_name = Path(base_model_name).stem

        base_model_path = get_file_from_folder_list(base_model_name, modules.config.paths_checkpoints)
        self.base_model_hash = get_sha256(base_model_path)

        if refiner_model_name not in ['', 'None']:
            self.refiner_model_name = Path(refiner_model_name).stem
            refiner_model_path = get_file_from_folder_list(refiner_model_name, modules.config.paths_checkpoints)
            self.refiner_model_hash = get_sha256(refiner_model_path)

        self.loras = []
        for (lora_name, lora_weight) in loras:
            if lora_name != 'None':
                lora_path = get_file_from_folder_list(lora_name, modules.config.paths_loras)
                lora_hash = get_sha256(lora_path)
                self.loras.append((Path(lora_name).stem, lora_weight, lora_hash))

    @staticmethod
    def remove_special_loras(lora_filenames):
        for lora_to_remove in modules.config.loras_metadata_remove:
            if lora_to_remove in lora_filenames:
                lora_filenames.remove(lora_to_remove)
class A1111MetadataParser(MetadataParser):
    def get_scheme(self) -> MetadataScheme:
        return MetadataScheme.A1111

    fooocus_to_a1111 = {
        'raw_prompt': 'Raw prompt',
        'raw_negative_prompt': 'Raw negative prompt',
        'negative_prompt': 'Negative prompt',
        'styles': 'Styles',
        'performance': 'Performance',
        'steps': 'Steps',
        'sampler': 'Sampler',
        'scheduler': 'Scheduler',
        'guidance_scale': 'CFG scale',
        'seed': 'Seed',
        'resolution': 'Size',
        'sharpness': 'Sharpness',
        'adm_guidance': 'ADM Guidance',
        'refiner_swap_method': 'Refiner Swap Method',
        'adaptive_cfg': 'Adaptive CFG',
        'overwrite_switch': 'Overwrite Switch',
        'freeu': 'FreeU',
        'base_model': 'Model',
        'base_model_hash': 'Model hash',
        'refiner_model': 'Refiner',
        'refiner_model_hash': 'Refiner hash',
        'lora_hashes': 'Lora hashes',
        'lora_weights': 'Lora weights',
        'created_by': 'User',
        'version': 'Version'
    }

    def parse_json(self, metadata: str) -> dict:
        metadata_prompt = ''
        metadata_negative_prompt = ''

        done_with_prompt = False

        *lines, lastline = metadata.strip().split("\n")
        if len(re_param.findall(lastline)) < 3:
            lines.append(lastline)
            lastline = ''

        for line in lines:
            line = line.strip()
            if line.startswith(f"{self.fooocus_to_a1111['negative_prompt']}:"):
                done_with_prompt = True
                line = line[len(f"{self.fooocus_to_a1111['negative_prompt']}:"):].strip()
            if done_with_prompt:
                metadata_negative_prompt += ('' if metadata_negative_prompt == '' else "\n") + line
            else:
                metadata_prompt += ('' if metadata_prompt == '' else "\n") + line

        found_styles, prompt, negative_prompt = extract_styles_from_prompt(metadata_prompt, metadata_negative_prompt)

        data = {
            'prompt': prompt,
            'negative_prompt': negative_prompt
        }

        for k, v in re_param.findall(lastline):
            try:
                if v != '' and v[0] == '"' and v[-1] == '"':
                    v = unquote(v)

                m = re_imagesize.match(v)
                if m is not None:
                    data['resolution'] = str((m.group(1), m.group(2)))
                else:
                    data[list(self.fooocus_to_a1111.keys())[list(self.fooocus_to_a1111.values()).index(k)]] = v
            except Exception:
                print(f"Error parsing \"{k}: {v}\"")

        # workaround for multiline prompts
        if 'raw_prompt' in data:
            data['prompt'] = data['raw_prompt']
            raw_prompt = data['raw_prompt'].replace("\n", ', ')
            if metadata_prompt != raw_prompt and modules.sdxl_styles.fooocus_expansion not in found_styles:
                found_styles.append(modules.sdxl_styles.fooocus_expansion)

        if 'raw_negative_prompt' in data:
            data['negative_prompt'] = data['raw_negative_prompt']

        data['styles'] = str(found_styles)

        # try to load performance based on steps, fallback for direct A1111 imports
        if 'steps' in data and 'performance' not in data:
            try:
                data['performance'] = Performance[Steps(int(data['steps'])).name].value
            except (ValueError, KeyError):
                pass

        if 'sampler' in data:
            data['sampler'] = data['sampler'].replace(' Karras', '')
            # get key
            for k, v in SAMPLERS.items():
                if v == data['sampler']:
                    data['sampler'] = k
                    break

        for key in ['base_model', 'refiner_model']:
            if key in data:
                for filename in modules.config.model_filenames:
                    path = Path(filename)
                    if data[key] == path.stem:
                        data[key] = filename
                        break

        lora_data = ''
        if 'lora_weights' in data and data['lora_weights'] != '':
            lora_data = data['lora_weights']
        elif 'lora_hashes' in data and data['lora_hashes'] != '' and data['lora_hashes'].split(', ')[0].count(':') == 2:
            lora_data = data['lora_hashes']

        if lora_data != '':
            lora_filenames = modules.config.lora_filenames.copy()
            self.remove_special_loras(lora_filenames)
            for li, lora in enumerate(lora_data.split(', ')):
                lora_split = lora.split(': ')
                lora_name = lora_split[0]
                lora_weight = lora_split[2] if len(lora_split) == 3 else lora_split[1]
                for filename in lora_filenames:
                    path = Path(filename)
                    if lora_name == path.stem:
                        data[f'lora_combined_{li + 1}'] = f'{filename} : {lora_weight}'
                        break

        return data
    def parse_string(self, metadata: dict) -> str:
        data = {k: v for _, k, v in metadata}

        width, height = eval(data['resolution'])

        sampler = data['sampler']
        scheduler = data['scheduler']
        if sampler in SAMPLERS and SAMPLERS[sampler] != '':
            sampler = SAMPLERS[sampler]
            if sampler not in CIVITAI_NO_KARRAS and scheduler == 'karras':
                sampler += f' Karras'

        generation_params = {
            self.fooocus_to_a1111['steps']: self.steps,
            self.fooocus_to_a1111['sampler']: sampler,
            self.fooocus_to_a1111['seed']: data['seed'],
            self.fooocus_to_a1111['resolution']: f'{width}x{height}',
            self.fooocus_to_a1111['guidance_scale']: data['guidance_scale'],
            self.fooocus_to_a1111['sharpness']: data['sharpness'],
            self.fooocus_to_a1111['adm_guidance']: data['adm_guidance'],
            self.fooocus_to_a1111['base_model']: Path(data['base_model']).stem,
            self.fooocus_to_a1111['base_model_hash']: self.base_model_hash,

            self.fooocus_to_a1111['performance']: data['performance'],
            self.fooocus_to_a1111['scheduler']: scheduler,
            # workaround for multiline prompts
            self.fooocus_to_a1111['raw_prompt']: self.raw_prompt,
            self.fooocus_to_a1111['raw_negative_prompt']: self.raw_negative_prompt,
        }

        if self.refiner_model_name not in ['', 'None']:
            generation_params |= {
                self.fooocus_to_a1111['refiner_model']: self.refiner_model_name,
                self.fooocus_to_a1111['refiner_model_hash']: self.refiner_model_hash
            }

        for key in ['adaptive_cfg', 'overwrite_switch', 'refiner_swap_method', 'freeu']:
            if key in data:
                generation_params[self.fooocus_to_a1111[key]] = data[key]

        if len(self.loras) > 0:
            lora_hashes = []
            lora_weights = []
            for index, (lora_name, lora_weight, lora_hash) in enumerate(self.loras):
                # workaround for Fooocus not knowing LoRA name in LoRA metadata
                lora_hashes.append(f'{lora_name}: {lora_hash}')
                lora_weights.append(f'{lora_name}: {lora_weight}')
            lora_hashes_string = ', '.join(lora_hashes)
            lora_weights_string = ', '.join(lora_weights)
            generation_params[self.fooocus_to_a1111['lora_hashes']] = lora_hashes_string
            generation_params[self.fooocus_to_a1111['lora_weights']] = lora_weights_string

        generation_params[self.fooocus_to_a1111['version']] = data['version']

        if modules.config.metadata_created_by != '':
            generation_params[self.fooocus_to_a1111['created_by']] = modules.config.metadata_created_by

        generation_params_text = ", ".join([k if k == v else f'{k}: {quote(v)}' for k, v in generation_params.items() if v is not None])
        positive_prompt_resolved = ', '.join(self.full_prompt)
        negative_prompt_resolved = ', '.join(self.full_negative_prompt)
        negative_prompt_text = f"\nNegative prompt: {negative_prompt_resolved}" if negative_prompt_resolved else ""
        return f"{positive_prompt_resolved}{negative_prompt_text}\n{generation_params_text}".strip()
class FooocusMetadataParser(MetadataParser):
    def get_scheme(self) -> MetadataScheme:
        return MetadataScheme.FOOOCUS

    def parse_json(self, metadata: dict) -> dict:
        model_filenames = modules.config.model_filenames.copy()
        lora_filenames = modules.config.lora_filenames.copy()
        self.remove_special_loras(lora_filenames)
        for key, value in metadata.items():
            if value in ['', 'None']:
                continue
            if key in ['base_model', 'refiner_model']:
                metadata[key] = self.replace_value_with_filename(key, value, model_filenames)
            elif key.startswith('lora_combined_'):
                metadata[key] = self.replace_value_with_filename(key, value, lora_filenames)
            else:
                continue

        return metadata

    def parse_string(self, metadata: list) -> str:
        for li, (label, key, value) in enumerate(metadata):
            # remove model folder paths from metadata
            if key.startswith('lora_combined_'):
                name, weight = value.split(' : ')
                name = Path(name).stem
                value = f'{name} : {weight}'
                metadata[li] = (label, key, value)

        res = {k: v for _, k, v in metadata}

        res['full_prompt'] = self.full_prompt
        res['full_negative_prompt'] = self.full_negative_prompt
        res['steps'] = self.steps
        res['base_model'] = self.base_model_name
        res['base_model_hash'] = self.base_model_hash

        if self.refiner_model_name not in ['', 'None']:
            res['refiner_model'] = self.refiner_model_name
            res['refiner_model_hash'] = self.refiner_model_hash

        res['loras'] = self.loras

        if modules.config.metadata_created_by != '':
            res['created_by'] = modules.config.metadata_created_by

        return json.dumps(dict(sorted(res.items())))

    @staticmethod
    def replace_value_with_filename(key, value, filenames):
        for filename in filenames:
            path = Path(filename)
            if key.startswith('lora_combined_'):
                name, weight = value.split(' : ')
                if name == path.stem:
                    return f'{filename} : {weight}'
            elif value == path.stem:
                return filename
def get_metadata_parser(metadata_scheme: MetadataScheme) -> MetadataParser:
    match metadata_scheme:
        case MetadataScheme.FOOOCUS:
            return FooocusMetadataParser()
        case MetadataScheme.A1111:
            return A1111MetadataParser()
        case _:
            raise NotImplementedError


def read_info_from_image(filepath) -> tuple[str | None, MetadataScheme | None]:
    with Image.open(filepath) as image:
        items = (image.info or {}).copy()

    parameters = items.pop('parameters', None)
    metadata_scheme = items.pop('fooocus_scheme', None)
    exif = items.pop('exif', None)

    if parameters is not None and is_json(parameters):
        parameters = json.loads(parameters)
    elif exif is not None:
        exif = image.getexif()
        # 0x9286 = UserComment
        parameters = exif.get(0x9286, None)
        # 0x927C = MakerNote
        metadata_scheme = exif.get(0x927C, None)

        if is_json(parameters):
            parameters = json.loads(parameters)

    try:
        metadata_scheme = MetadataScheme(metadata_scheme)
    except ValueError:
        metadata_scheme = None

        # broad fallback
        if isinstance(parameters, dict):
            metadata_scheme = MetadataScheme.FOOOCUS

        if isinstance(parameters, str):
            metadata_scheme = MetadataScheme.A1111

    return parameters, metadata_scheme


def get_exif(metadata: str | None, metadata_scheme: str):
    exif = Image.Exif()
    # tags: see https://github.com/python-pillow/Pillow/blob/9.2.x/src/PIL/ExifTags.py
    # 0x9286 = UserComment
    exif[0x9286] = metadata
    # 0x0131 = Software
    exif[0x0131] = 'Fooocus v' + fooocus_version.version
    # 0x927C = MakerNote
    exif[0x927C] = metadata_scheme
    return exif
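get_exif above stores the parameter string in EXIF UserComment (0x9286) and the scheme name in MakerNote (0x927C); read_info_from_image reads them back with the same tag IDs. A round-trip sketch with Pillow (file name and values are illustrative):

```python
from PIL import Image

exif = Image.Exif()
exif[0x9286] = '{"prompt": "a cat"}'   # UserComment: serialized parameters
exif[0x927C] = 'fooocus'               # MakerNote: metadata scheme tag

image = Image.new('RGB', (64, 64))
image.save('demo.jpg', exif=exif)

with Image.open('demo.jpg') as im:
    stored = im.getexif()
    print(stored.get(0x9286), stored.get(0x927C))
```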
@@ -1,19 +0,0 @@
import torch
import contextlib


@contextlib.contextmanager
def use_patched_ops(operations):
    op_names = ['Linear', 'Conv2d', 'Conv3d', 'GroupNorm', 'LayerNorm']
    backups = {op_name: getattr(torch.nn, op_name) for op_name in op_names}

    try:
        for op_name in op_names:
            setattr(torch.nn, op_name, getattr(operations, op_name))

        yield

    finally:
        for op_name in op_names:
            setattr(torch.nn, op_name, backups[op_name])
        return
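use_patched_ops temporarily swaps the torch.nn layer classes for patched implementations and restores them even on error. The same save/patch/restore shape works for any module or namespace attributes; a generic self-contained sketch:

```python
import contextlib
import types

@contextlib.contextmanager
def swap_attrs(target, **replacements):
    # Save originals, install replacements, and always restore on exit.
    backups = {name: getattr(target, name) for name in replacements}
    try:
        for name, value in replacements.items():
            setattr(target, name, value)
        yield
    finally:
        for name, value in backups.items():
            setattr(target, name, value)

ns = types.SimpleNamespace(Linear='original')
with swap_attrs(ns, Linear='patched'):
    print(ns.Linear)  # patched
print(ns.Linear)      # original
```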
@@ -17,6 +17,7 @@ import ldm_patched.controlnet.cldm
import ldm_patched.modules.model_patcher
import ldm_patched.modules.samplers
import ldm_patched.modules.args_parser
import modules.advanced_parameters as advanced_parameters
import warnings
import safetensors.torch
import modules.constants as constants
@@ -28,25 +29,15 @@ from modules.patch_precision import patch_all_precision
from modules.patch_clip import patch_all_clip


class PatchSettings:
    def __init__(self,
                 sharpness=2.0,
                 adm_scaler_end=0.3,
                 positive_adm_scale=1.5,
                 negative_adm_scale=0.8,
                 controlnet_softness=0.25,
                 adaptive_cfg=7.0):
        self.sharpness = sharpness
        self.adm_scaler_end = adm_scaler_end
        self.positive_adm_scale = positive_adm_scale
        self.negative_adm_scale = negative_adm_scale
        self.controlnet_softness = controlnet_softness
        self.adaptive_cfg = adaptive_cfg
        self.global_diffusion_progress = 0
        self.eps_record = None
sharpness = 2.0

adm_scaler_end = 0.3
positive_adm_scale = 1.5
negative_adm_scale = 0.8

patch_settings = {}
adaptive_cfg = 7.0
global_diffusion_progress = 0
eps_record = None
def calculate_weight_patched(self, patches, weight, key):
@@ -210,13 +201,14 @@ class BrownianTreeNoiseSamplerPatched:


def compute_cfg(uncond, cond, cfg_scale, t):
    pid = os.getpid()
    mimic_cfg = float(patch_settings[pid].adaptive_cfg)
    global adaptive_cfg

    mimic_cfg = float(adaptive_cfg)
    real_cfg = float(cfg_scale)

    real_eps = uncond + real_cfg * (cond - uncond)

    if cfg_scale > patch_settings[pid].adaptive_cfg:
    if cfg_scale > adaptive_cfg:
        mimicked_eps = uncond + mimic_cfg * (cond - uncond)
        return real_eps * t + mimicked_eps * (1 - t)
    else:
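compute_cfg blends the true classifier-free-guidance prediction with one computed at the milder "mimicked" scale, weighting by diffusion progress t, so very high CFG values keep their effect early without over-saturating late steps. A NumPy sketch of just the blend:

```python
import numpy as np

def compute_cfg(uncond, cond, cfg_scale, t, adaptive_cfg=7.0):
    real_eps = uncond + cfg_scale * (cond - uncond)
    if cfg_scale > adaptive_cfg:
        # Blend toward the milder prediction as progress t decreases.
        mimicked_eps = uncond + adaptive_cfg * (cond - uncond)
        return real_eps * t + mimicked_eps * (1 - t)
    return real_eps

uncond = np.zeros(4)
cond = np.ones(4)
print(compute_cfg(uncond, cond, cfg_scale=12.0, t=0.25))  # 0.25*12 + 0.75*7 = 8.25
```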
@@ -224,13 +216,13 @@ def compute_cfg(uncond, cond, cfg_scale, t):


def patched_sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options=None, seed=None):
    pid = os.getpid()
    global eps_record

    if math.isclose(cond_scale, 1.0) and not model_options.get("disable_cfg1_optimization", False):
    if math.isclose(cond_scale, 1.0):
        final_x0 = calc_cond_uncond_batch(model, cond, None, x, timestep, model_options)[0]

        if patch_settings[pid].eps_record is not None:
            patch_settings[pid].eps_record = ((x - final_x0) / timestep).cpu()
        if eps_record is not None:
            eps_record = ((x - final_x0) / timestep).cpu()

        return final_x0

@@ -239,16 +231,16 @@ def patched_sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options=None, seed=None):
    positive_eps = x - positive_x0
    negative_eps = x - negative_x0

    alpha = 0.001 * patch_settings[pid].sharpness * patch_settings[pid].global_diffusion_progress
    alpha = 0.001 * sharpness * global_diffusion_progress

    positive_eps_degraded = anisotropic.adaptive_anisotropic_filter(x=positive_eps, g=positive_x0)
    positive_eps_degraded_weighted = positive_eps_degraded * alpha + positive_eps * (1.0 - alpha)

    final_eps = compute_cfg(uncond=negative_eps, cond=positive_eps_degraded_weighted,
                            cfg_scale=cond_scale, t=patch_settings[pid].global_diffusion_progress)
                            cfg_scale=cond_scale, t=global_diffusion_progress)

    if patch_settings[pid].eps_record is not None:
        patch_settings[pid].eps_record = (final_eps / timestep).cpu()
    if eps_record is not None:
        eps_record = (final_eps / timestep).cpu()

    return x - final_eps
@@ -263,19 +255,20 @@ def round_to_64(x):


def sdxl_encode_adm_patched(self, **kwargs):
    global positive_adm_scale, negative_adm_scale

    clip_pooled = ldm_patched.modules.model_base.sdxl_pooled(kwargs, self.noise_augmentor)
    width = kwargs.get("width", 1024)
    height = kwargs.get("height", 1024)
    target_width = width
    target_height = height
    pid = os.getpid()

    if kwargs.get("prompt_type", "") == "negative":
        width = float(width) * patch_settings[pid].negative_adm_scale
        height = float(height) * patch_settings[pid].negative_adm_scale
        width = float(width) * negative_adm_scale
        height = float(height) * negative_adm_scale
    elif kwargs.get("prompt_type", "") == "positive":
        width = float(width) * patch_settings[pid].positive_adm_scale
        height = float(height) * patch_settings[pid].positive_adm_scale
        width = float(width) * positive_adm_scale
        height = float(height) * positive_adm_scale

    def embedder(number_list):
        h = self.embedder(torch.tensor(number_list, dtype=torch.float32))
@@ -329,7 +322,7 @@ def patched_KSamplerX0Inpaint_forward(self, x, sigma, uncond, cond, cond_scale,

def timed_adm(y, timesteps):
    if isinstance(y, torch.Tensor) and int(y.dim()) == 2 and int(y.shape[1]) == 5632:
        y_mask = (timesteps > 999.0 * (1.0 - float(patch_settings[os.getpid()].adm_scaler_end))).to(y)[..., None]
        y_mask = (timesteps > 999.0 * (1.0 - float(adm_scaler_end))).to(y)[..., None]
        y_with_adm = y[..., :2816].clone()
        y_without_adm = y[..., 2816:].clone()
        return y_with_adm * y_mask + y_without_adm * (1.0 - y_mask)
@@ -339,7 +332,6 @@ def timed_adm(y, timesteps):

def patched_cldm_forward(self, x, hint, timesteps, context, y=None, **kwargs):
    t_emb = ldm_patched.ldm.modules.diffusionmodules.openaimodel.timestep_embedding(timesteps, self.model_channels, repeat_only=False).to(x.dtype)
    emb = self.time_embed(t_emb)
    pid = os.getpid()

    guided_hint = self.input_hint_block(hint, emb, context)

@@ -365,17 +357,19 @@ def patched_cldm_forward(self, x, hint, timesteps, context, y=None, **kwargs):
    h = self.middle_block(h, emb, context)
    outs.append(self.middle_block_out(h, emb, context))

    if patch_settings[pid].controlnet_softness > 0:
    if advanced_parameters.controlnet_softness > 0:
        for i in range(10):
            k = 1.0 - float(i) / 9.0
            outs[i] = outs[i] * (1.0 - patch_settings[pid].controlnet_softness * k)
            outs[i] = outs[i] * (1.0 - advanced_parameters.controlnet_softness * k)

    return outs
def patched_unet_forward(self, x, timesteps=None, context=None, y=None, control=None, transformer_options={}, **kwargs):
    global global_diffusion_progress

    self.current_step = 1.0 - timesteps.to(x) / 999.0
    patch_settings[os.getpid()].global_diffusion_progress = float(self.current_step.detach().cpu().numpy().tolist()[0])
    global_diffusion_progress = float(self.current_step.detach().cpu().numpy().tolist()[0])

    y = timed_adm(y, timesteps)

@@ -486,10 +480,6 @@ def build_loaded(module, loader_name):


def patch_all():
    if ldm_patched.modules.model_management.directml_enabled:
        ldm_patched.modules.model_management.lowvram_available = True
        ldm_patched.modules.model_management.OOM_EXCEPTION = Exception

    patch_all_precision()
    patch_all_clip()
@@ -16,12 +16,30 @@ import ldm_patched.modules.samplers
import ldm_patched.modules.sd
import ldm_patched.modules.sd1_clip
import ldm_patched.modules.clip_vision
import ldm_patched.modules.model_management as model_management
import ldm_patched.modules.ops as ops
import contextlib

from modules.ops import use_patched_ops
from transformers import CLIPTextModel, CLIPTextConfig, modeling_utils, CLIPVisionConfig, CLIPVisionModelWithProjection


@contextlib.contextmanager
def use_patched_ops(operations):
    op_names = ['Linear', 'Conv2d', 'Conv3d', 'GroupNorm', 'LayerNorm']
    backups = {op_name: getattr(torch.nn, op_name) for op_name in op_names}

    try:
        for op_name in op_names:
            setattr(torch.nn, op_name, getattr(operations, op_name))

        yield

    finally:
        for op_name in op_names:
            setattr(torch.nn, op_name, backups[op_name])
        return


def patched_encode_token_weights(self, token_weight_pairs):
    to_encode = list()
    max_token_len = 0
@@ -1,131 +1,60 @@
import os
import args_manager
import modules.config
import json
import urllib.parse

from PIL import Image
from PIL.PngImagePlugin import PngInfo
from modules.flags import OutputFormat
from modules.meta_parser import MetadataParser, get_exif
from modules.util import generate_temp_filename


log_cache = {}


def get_current_html_path(output_format=None):
    output_format = output_format if output_format else modules.config.default_output_format
def get_current_html_path():
    date_string, local_temp_filename, only_name = generate_temp_filename(folder=modules.config.path_outputs,
                                                                         extension=output_format)
                                                                         extension='png')
    html_name = os.path.join(os.path.dirname(local_temp_filename), 'log.html')
    return html_name


def log(img, metadata, metadata_parser: MetadataParser | None = None, output_format=None) -> str:
    path_outputs = modules.config.temp_path if args_manager.args.disable_image_log else modules.config.path_outputs
    output_format = output_format if output_format else modules.config.default_output_format
    date_string, local_temp_filename, only_name = generate_temp_filename(folder=path_outputs, extension=output_format)
    os.makedirs(os.path.dirname(local_temp_filename), exist_ok=True)

    parsed_parameters = metadata_parser.parse_string(metadata.copy()) if metadata_parser is not None else ''
    image = Image.fromarray(img)

    if output_format == OutputFormat.PNG.value:
        if parsed_parameters != '':
            pnginfo = PngInfo()
            pnginfo.add_text('parameters', parsed_parameters)
            pnginfo.add_text('fooocus_scheme', metadata_parser.get_scheme().value)
        else:
            pnginfo = None
        image.save(local_temp_filename, pnginfo=pnginfo)
    elif output_format == OutputFormat.JPEG.value:
        image.save(local_temp_filename, quality=95, optimize=True, progressive=True, exif=get_exif(parsed_parameters, metadata_parser.get_scheme().value) if metadata_parser else Image.Exif())
    elif output_format == OutputFormat.WEBP.value:
        image.save(local_temp_filename, quality=95, lossless=False, exif=get_exif(parsed_parameters, metadata_parser.get_scheme().value) if metadata_parser else Image.Exif())
    else:
        image.save(local_temp_filename)

def log(img, dic, single_line_number=3):
    if args_manager.args.disable_image_log:
        return local_temp_filename
        return

    date_string, local_temp_filename, only_name = generate_temp_filename(folder=modules.config.path_outputs, extension='png')
    os.makedirs(os.path.dirname(local_temp_filename), exist_ok=True)
    Image.fromarray(img).save(local_temp_filename)
    html_name = os.path.join(os.path.dirname(local_temp_filename), 'log.html')

    css_styles = (
        "<style>"
        "body { background-color: #121212; color: #E0E0E0; } "
        "a { color: #BB86FC; } "
        ".metadata { border-collapse: collapse; width: 100%; } "
        ".metadata .label { width: 15%; } "
        ".metadata .value { width: 85%; font-weight: bold; } "
        ".metadata th, .metadata td { border: 1px solid #4d4d4d; padding: 4px; } "
        ".image-container img { height: auto; max-width: 512px; display: block; padding-right:10px; } "
        ".image-container div { text-align: center; padding: 4px; } "
        "hr { border-color: gray; } "
        "button { background-color: black; color: white; border: 1px solid grey; border-radius: 5px; padding: 5px 10px; text-align: center; display: inline-block; font-size: 16px; cursor: pointer; }"
        "button:hover {background-color: grey; color: black;}"
        "</style>"
    )
    existing_log = log_cache.get(html_name, None)

    js = (
        """<script>
        function to_clipboard(txt) {
        txt = decodeURIComponent(txt);
        if (navigator.clipboard && navigator.permissions) {
            navigator.clipboard.writeText(txt)
        } else {
            const textArea = document.createElement('textArea')
            textArea.value = txt
            textArea.style.width = 0
            textArea.style.position = 'fixed'
            textArea.style.left = '-999px'
            textArea.style.top = '10px'
            textArea.setAttribute('readonly', 'readonly')
            document.body.appendChild(textArea)

            textArea.select()
            document.execCommand('copy')
            document.body.removeChild(textArea)
        }
        alert('Copied to Clipboard!\\nPaste to prompt area to load parameters.\\nCurrent clipboard content is:\\n\\n' + txt);
        }
        </script>"""
    )

    begin_part = f"<!DOCTYPE html><html><head><title>Fooocus Log {date_string}</title>{css_styles}</head><body>{js}<p>Fooocus Log {date_string} (private)</p>\n<p>Metadata is embedded if enabled in the config or developer debug mode. You can find the information for each image in line Metadata Scheme.</p><!--fooocus-log-split-->\n\n"
    end_part = f'\n<!--fooocus-log-split--></body></html>'

    middle_part = log_cache.get(html_name, "")

    if middle_part == "":
    if existing_log is None:
        if os.path.exists(html_name):
            existing_split = open(html_name, 'r', encoding='utf-8').read().split('<!--fooocus-log-split-->')
            if len(existing_split) == 3:
                middle_part = existing_split[1]
            else:
                middle_part = existing_split[0]
            existing_log = open(html_name, encoding='utf-8').read()
        else:
            existing_log = f'<p>Fooocus Log {date_string} (private)</p>\n<p>All images do not contain any hidden data.</p>'

    div_name = only_name.replace('.', '_')
    item = f"<div id=\"{div_name}\" class=\"image-container\"><hr><table><tr>\n"
    item += f"<td><a href=\"{only_name}\" target=\"_blank\"><img src='{only_name}' onerror=\"this.closest('.image-container').style.display='none';\" loading='lazy'/></a><div>{only_name}</div></td>"
    item += "<td><table class='metadata'>"
    for label, key, value in metadata:
        value_txt = str(value).replace('\n', ' </br> ')
        item += f"<tr><td class='label'>{label}</td><td class='value'>{value_txt}</td></tr>\n"
    item += "</table>"

    js_txt = urllib.parse.quote(json.dumps({k: v for _, k, v in metadata}, indent=0), safe='')
    item += f"</br><button onclick=\"to_clipboard('{js_txt}')\">Copy to Clipboard</button>"

    item = f'<div id="{div_name}">\n'
    item += "<table><tr>"
    item += f"<td><img src=\"{only_name}\" width=auto height=100% loading=lazy style=\"height:auto;max-width:512px\" onerror=\"document.getElementById('{div_name}').style.display = 'none';\"></img></p></td>"
    item += f"<td style=\"padding-left:10px;\"><p>{only_name}</p>\n"
    for i, (k, v) in enumerate(dic):
        if i < single_line_number:
            item += f"<p>{k}: <b>{v}</b></p>\n"
        else:
            if (i - single_line_number) % 2 == 0:
                item += f"<p>{k}: <b>{v}</b>, "
            else:
                item += f"{k}: <b>{v}</b></p>\n"
    item += "</td>"
    item += "</tr></table></div>\n\n"

    middle_part = item + middle_part
    item += "</tr></table><hr></div>\n"
    existing_log = item + existing_log

    with open(html_name, 'w', encoding='utf-8') as f:
        f.write(begin_part + middle_part + end_part)
        f.write(existing_log)

    print(f'Image generated with private log at: {html_name}')

    log_cache[html_name] = middle_part
    log_cache[html_name] = existing_log

    return local_temp_filename
    return
@@ -99,13 +99,6 @@ def sample_hacked(model, noise, positive, negative, cfg, device, sampler, sigmas
    calculate_start_end_timesteps(model, negative)
    calculate_start_end_timesteps(model, positive)

    if latent_image is not None:
        latent_image = model.process_latent_in(latent_image)

    if hasattr(model, 'extra_conds'):
        positive = encode_model_conds(model.extra_conds, positive, noise, device, "positive", latent_image=latent_image, denoise_mask=denoise_mask)
        negative = encode_model_conds(model.extra_conds, negative, noise, device, "negative", latent_image=latent_image, denoise_mask=denoise_mask)

    #make sure each cond area has an opposite one with the same area
    for c in positive:
        create_cond_with_same_area_if_none(negative, c)
@@ -118,6 +111,13 @@ def sample_hacked(model, noise, positive, negative, cfg, device, sampler, sigmas
    apply_empty_x_to_equal_area(list(filter(lambda c: c.get('control_apply_to_uncond', False) == True, positive)), negative, 'control', lambda cond_cnets, x: cond_cnets[x])
    apply_empty_x_to_equal_area(positive, negative, 'gligen', lambda cond_cnets, x: cond_cnets[x])

    if latent_image is not None:
        latent_image = model.process_latent_in(latent_image)

    if hasattr(model, 'extra_conds'):
        positive = encode_model_conds(model.extra_conds, positive, noise, device, "positive", latent_image=latent_image, denoise_mask=denoise_mask)
        negative = encode_model_conds(model.extra_conds, negative, noise, device, "negative", latent_image=latent_image, denoise_mask=denoise_mask)

    extra_args = {"cond":positive, "uncond":negative, "cond_scale": cfg, "model_options": model_options, "seed":seed}

    if current_refiner is not None and hasattr(current_refiner.model, 'extra_conds'):
@@ -174,7 +174,7 @@ def calculate_sigmas_scheduler_hacked(model, scheduler_name, steps):
    elif scheduler_name == "sgm_uniform":
        sigmas = normal_scheduler(model, steps, sgm=True)
    elif scheduler_name == "turbo":
        sigmas = SDTurboScheduler().get_sigmas(namedtuple('Patcher', ['model'])(model=model), steps=steps, denoise=1.0)[0]
        sigmas = SDTurboScheduler().get_sigmas(namedtuple('Patcher', ['model'])(model=model), steps)[0]
    else:
        raise TypeError("error invalid scheduler")
    return sigmas
@@ -1,13 +1,13 @@
import os
import re
import json
import math
import modules.config

from modules.util import get_files_from_folder


# cannot use modules.config - validators causing circular imports
styles_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '../sdxl_styles/'))
wildcards_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '../wildcards/'))
wildcards_max_bfs_depth = 64


@@ -31,8 +31,7 @@ for x in ['sdxl_styles_fooocus.json',
          'sdxl_styles_sai.json',
          'sdxl_styles_mre.json',
          'sdxl_styles_twri.json',
          'sdxl_styles_diva.json',
          'sdxl_styles_marc_k3nt3l.json']:
          'sdxl_styles_diva.json']:
    if x in styles_files:
        styles_files.remove(x)
        styles_files.append(x)
@@ -59,7 +58,7 @@ def apply_style(style, positive):
    return p.replace('{prompt}', positive).splitlines(), n.splitlines()


def apply_wildcards(wildcard_text, rng, i, read_wildcards_in_order):
def apply_wildcards(wildcard_text, rng, directory=wildcards_path):
    for _ in range(wildcards_max_bfs_depth):
        placeholders = re.findall(r'__([\w-]+)__', wildcard_text)
        if len(placeholders) == 0:
@@ -68,14 +67,10 @@ def apply_wildcards(wildcard_text, rng, i, read_wildcards_in_order):
        print(f'[Wildcards] processing: {wildcard_text}')
        for placeholder in placeholders:
            try:
                matches = [x for x in modules.config.wildcard_filenames if os.path.splitext(os.path.basename(x))[0] == placeholder]
                words = open(os.path.join(modules.config.path_wildcards, matches[0]), encoding='utf-8').read().splitlines()
                words = open(os.path.join(directory, f'{placeholder}.txt'), encoding='utf-8').read().splitlines()
                words = [x for x in words if x != '']
                assert len(words) > 0
                if read_wildcards_in_order:
                    wildcard_text = wildcard_text.replace(f'__{placeholder}__', words[i % len(words)], 1)
                else:
                    wildcard_text = wildcard_text.replace(f'__{placeholder}__', rng.choice(words), 1)
                wildcard_text = wildcard_text.replace(f'__{placeholder}__', rng.choice(words), 1)
            except:
                print(f'[Wildcards] Warning: {placeholder}.txt missing or empty. '
                      f'Using "{placeholder}" as a normal word.')
@@ -84,38 +79,3 @@ def apply_wildcards(wildcard_text, rng, i, read_wildcards_in_order):

    print(f'[Wildcards] BFS stack overflow. Current text: {wildcard_text}')
    return wildcard_text


def get_words(arrays, totalMult, index):
    if len(arrays) == 1:
        return [arrays[0].split(',')[index]]
    else:
        words = arrays[0].split(',')
        word = words[index % len(words)]
        index -= index % len(words)
        index /= len(words)
        index = math.floor(index)
        return [word] + get_words(arrays[1:], math.floor(totalMult/len(words)), index)


def apply_arrays(text, index):
    arrays = re.findall(r'\[\[(.*?)\]\]', text)
    if len(arrays) == 0:
        return text

    print(f'[Arrays] processing: {text}')
    mult = 1
    for arr in arrays:
        words = arr.split(',')
        mult *= len(words)

    index %= mult
    chosen_words = get_words(arrays, mult, index)

    i = 0
    for arr in arrays:
        text = text.replace(f'[[{arr}]]', chosen_words[i], 1)
        i = i+1

    return text
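get_words/apply_arrays above decode a single integer index into one choice per [[...]] group, treating the groups as digits of a mixed-radix number so every combination is reachable exactly once. A compact sketch of the same decoding:

```python
def decode_index(index, sizes):
    # sizes: number of options per group; returns one choice index per group.
    choices = []
    for size in sizes:
        choices.append(index % size)
        index //= size
    return choices

# Three groups with 2, 3 and 2 options give 12 combinations in total.
for i in range(4):
    print(i, decode_index(i, [2, 3, 2]))
# 0 [0, 0, 0]
# 1 [1, 0, 0]
# 2 [0, 1, 0]
# 3 [1, 1, 0]
```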
@@ -15,14 +15,11 @@ def try_load_sorted_styles(style_names, default_selected):
    try:
        if os.path.exists('sorted_styles.json'):
            with open('sorted_styles.json', 'rt', encoding='utf-8') as fp:
                sorted_styles = []
                for x in json.load(fp):
                    if x in all_styles:
                        sorted_styles.append(x)
                for x in all_styles:
                    if x not in sorted_styles:
                        sorted_styles.append(x)
                all_styles = sorted_styles
                sorted_styles = json.load(fp)
                if len(sorted_styles) == len(all_styles):
                    if all(x in all_styles for x in sorted_styles):
                        if all(x in sorted_styles for x in all_styles):
                            all_styles = sorted_styles
    except Exception as e:
        print('Load style sorting failed.')
        print(e)
@@ -30,7 +30,6 @@ def javascript_html():
    edit_attention_js_path = webpath('javascript/edit-attention.js')
    viewer_js_path = webpath('javascript/viewer.js')
    image_viewer_js_path = webpath('javascript/imageviewer.js')
    samples_path = webpath(os.path.abspath('./sdxl_styles/samples/fooocus_v2.jpg'))
    head = f'<script type="text/javascript">{localization_js(args_manager.args.language)}</script>\n'
    head += f'<script type="text/javascript" src="{script_js_path}"></script>\n'
    head += f'<script type="text/javascript" src="{context_menus_js_path}"></script>\n'
@@ -39,7 +38,6 @@ def javascript_html():
    head += f'<script type="text/javascript" src="{edit_attention_js_path}"></script>\n'
    head += f'<script type="text/javascript" src="{viewer_js_path}"></script>\n'
    head += f'<script type="text/javascript" src="{image_viewer_js_path}"></script>\n'
    head += f'<meta name="samples-path" content="{samples_path}"></meta>\n'

    if args_manager.args.theme:
        head += f'<script type="text/javascript">set_theme(\"{args_manager.args.theme}\");</script>\n'
239 modules/util.py
@@ -1,28 +1,13 @@
import typing

import numpy as np
import datetime
import random
import math
import os
import cv2
import json
import hashlib

from PIL import Image

import modules.sdxl_styles

LANCZOS = (Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.LANCZOS)
HASH_SHA256_LENGTH = 10

def erode_or_dilate(x, k):
    k = int(k)
    if k > 0:
        return cv2.dilate(x, kernel=np.ones(shape=(3, 3), dtype=np.uint8), iterations=k)
    if k < 0:
        return cv2.erode(x, kernel=np.ones(shape=(3, 3), dtype=np.uint8), iterations=-k)
    return x


def resample_image(im, width, height):
@@ -160,235 +145,23 @@ def generate_temp_filename(folder='./outputs/', extension='png'):
    random_number = random.randint(1000, 9999)
    filename = f"{time_string}_{random_number}.{extension}"
    result = os.path.join(folder, date_string, filename)
    return date_string, os.path.abspath(result), filename
    return date_string, os.path.abspath(os.path.realpath(result)), filename


def get_files_from_folder(folder_path, extensions=None, name_filter=None):
def get_files_from_folder(folder_path, exensions=None, name_filter=None):
    if not os.path.isdir(folder_path):
        raise ValueError("Folder path is not a valid directory.")

    filenames = []

    for root, dirs, files in os.walk(folder_path, topdown=False):
    for root, dirs, files in os.walk(folder_path):
        relative_path = os.path.relpath(root, folder_path)
        if relative_path == ".":
            relative_path = ""
        for filename in sorted(files, key=lambda s: s.casefold()):
        for filename in files:
            _, file_extension = os.path.splitext(filename)
            if (extensions is None or file_extension.lower() in extensions) and (name_filter is None or name_filter in _):
            if (exensions == None or file_extension.lower() in exensions) and (name_filter == None or name_filter in _):
                path = os.path.join(relative_path, filename)
                filenames.append(path)

    return filenames
def sha256(filename, use_addnet_hash=False, length=HASH_SHA256_LENGTH):
    print(f"Calculating sha256 for {filename}: ", end='')
    if use_addnet_hash:
        with open(filename, "rb") as file:
            sha256_value = addnet_hash_safetensors(file)
    else:
        sha256_value = calculate_sha256(filename)
    print(f"{sha256_value}")

    return sha256_value[:length] if length is not None else sha256_value


def addnet_hash_safetensors(b):
    """kohya-ss hash for safetensors from https://github.com/kohya-ss/sd-scripts/blob/main/library/train_util.py"""
    hash_sha256 = hashlib.sha256()
    blksize = 1024 * 1024

    b.seek(0)
    header = b.read(8)
    n = int.from_bytes(header, "little")

    offset = n + 8
    b.seek(offset)
    for chunk in iter(lambda: b.read(blksize), b""):
        hash_sha256.update(chunk)

    return hash_sha256.hexdigest()


def calculate_sha256(filename) -> str:
    hash_sha256 = hashlib.sha256()
    blksize = 1024 * 1024

    with open(filename, "rb") as f:
        for chunk in iter(lambda: f.read(blksize), b""):
            hash_sha256.update(chunk)

    return hash_sha256.hexdigest()
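
The three helpers fit together like this: calculate_sha256 hashes the whole file, while addnet_hash_safetensors skips the safetensors JSON header (the first 8 bytes encode the header length n, little-endian, so the tensor payload starts at byte n + 8) and hashes only the payload. A sketch with a placeholder path:

from modules.util import sha256

short_hash = sha256('model.safetensors')                    # first 10 hex chars (HASH_SHA256_LENGTH)
addnet = sha256('model.safetensors', use_addnet_hash=True)  # kohya-ss variant: header skipped
full_hash = sha256('model.safetensors', length=None)        # full 64-character digest
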
def quote(text):
    if ',' not in str(text) and '\n' not in str(text) and ':' not in str(text):
        return text

    return json.dumps(text, ensure_ascii=False)


def unquote(text):
    if len(text) == 0 or text[0] != '"' or text[-1] != '"':
        return text

    try:
        return json.loads(text)
    except Exception:
        return text
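
quote and unquote make metadata values safe inside comma-separated parameter strings: only values containing a comma, colon, or newline get JSON-encoded, and unquote is a best-effort inverse. A small round trip:

from modules.util import quote, unquote

quote('simple value')    # -> 'simple value' (no comma/colon/newline, returned unchanged)
quote('a, b: c')         # -> '"a, b: c"' (JSON-encoded so it survives the separator)
unquote('"a, b: c"')     # -> 'a, b: c'
unquote('not quoted')    # -> 'not quoted' (no surrounding quotes, returned as-is)
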
def unwrap_style_text_from_prompt(style_text, prompt):
    """
    Checks the prompt to see if the style text is wrapped around it. If so,
    returns True plus the prompt text without the style text. Otherwise, returns
    False with the original prompt.

    Note that the "cleaned" version of the style text is only used for matching
    purposes here. It isn't returned; the original style text is not modified.
    """
    stripped_prompt = prompt
    stripped_style_text = style_text
    if "{prompt}" in stripped_style_text:
        # Work out whether the prompt is wrapped in the style text. If so, we
        # return True and the "inner" prompt text that isn't part of the style.
        try:
            left, right = stripped_style_text.split("{prompt}", 2)
        except ValueError as e:
            # If the style text has multiple "{prompt}"s, we can't split it into
            # two parts. This is an error, but we can't do anything about it.
            print(f"Unable to compare style text to prompt:\n{style_text}")
            print(f"Error: {e}")
            return False, prompt, ''

        left_pos = stripped_prompt.find(left)
        right_pos = stripped_prompt.find(right)
        if 0 <= left_pos < right_pos:
            real_prompt = stripped_prompt[left_pos + len(left):right_pos]
            prompt = stripped_prompt.replace(left + real_prompt + right, '', 1)
            if prompt.startswith(", "):
                prompt = prompt[2:]
            if prompt.endswith(", "):
                prompt = prompt[:-2]
            return True, prompt, real_prompt
    else:
        # Work out whether the given prompt starts with the style text. If so, we
        # return True and the prompt text up to where the style text starts.
        if stripped_prompt.endswith(stripped_style_text):
            prompt = stripped_prompt[: len(stripped_prompt) - len(stripped_style_text)]
            if prompt.endswith(", "):
                prompt = prompt[:-2]
            return True, prompt, prompt

    return False, prompt, ''
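
For example, with a hypothetical style template wrapped around a prompt, the function peels the wrapper off and recovers the inner text:

from modules.util import unwrap_style_text_from_prompt

style = 'cinematic photo of {prompt}, dramatic lighting'   # hypothetical template

ok, rest, inner = unwrap_style_text_from_prompt(style, 'cinematic photo of a cat, dramatic lighting')
# ok == True, rest == '', inner == 'a cat' -- the style wrapper was stripped

ok, rest, inner = unwrap_style_text_from_prompt(style, 'a dog on a beach')
# ok == False, rest == 'a dog on a beach', inner == '' -- prompt does not match
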
def extract_original_prompts(style, prompt, negative_prompt):
    """
    Takes a style and compares it to the prompt and negative prompt. If the style
    matches, returns True plus the prompt and negative prompt with the style text
    removed. Otherwise, returns False with the original prompt and negative prompt.
    """
    if not style.prompt and not style.negative_prompt:
        return False, prompt, negative_prompt

    match_positive, extracted_positive, real_prompt = unwrap_style_text_from_prompt(
        style.prompt, prompt
    )
    if not match_positive:
        return False, prompt, negative_prompt, ''

    match_negative, extracted_negative, _ = unwrap_style_text_from_prompt(
        style.negative_prompt, negative_prompt
    )
    if not match_negative:
        return False, prompt, negative_prompt, ''

    return True, extracted_positive, extracted_negative, real_prompt
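
Building on the unwrap helper, this checks both sides of a style at once. A sketch with a hypothetical style tuple (PromptStyle is the NamedTuple defined further down this file):

from modules.util import PromptStyle, extract_original_prompts

style = PromptStyle(name='Cinematic',                       # hypothetical style
                    prompt='cinematic photo of {prompt}, dramatic lighting',
                    negative_prompt='blurry, low quality')

match, pos, neg, inner = extract_original_prompts(
    style, 'cinematic photo of a cat, dramatic lighting', 'cartoon, blurry, low quality')
# match == True, pos == '', neg == 'cartoon', inner == 'a cat'
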
def extract_styles_from_prompt(prompt, negative_prompt):
    extracted = []
    applicable_styles = []

    for style_name, (style_prompt, style_negative_prompt) in modules.sdxl_styles.styles.items():
        applicable_styles.append(PromptStyle(name=style_name, prompt=style_prompt, negative_prompt=style_negative_prompt))

    real_prompt = ''

    while True:
        found_style = None

        for style in applicable_styles:
            is_match, new_prompt, new_neg_prompt, new_real_prompt = extract_original_prompts(
                style, prompt, negative_prompt
            )
            if is_match:
                found_style = style
                prompt = new_prompt
                negative_prompt = new_neg_prompt
                if real_prompt == '' and new_real_prompt != '' and new_real_prompt != prompt:
                    real_prompt = new_real_prompt
                break

        if not found_style:
            break

        applicable_styles.remove(found_style)
        extracted.append(found_style.name)

    # add prompt expansion if not all styles could be resolved
    if prompt != '':
        if real_prompt != '':
            extracted.append(modules.sdxl_styles.fooocus_expansion)
        else:
            # find real_prompt when only prompt expansion is selected
            first_word = prompt.split(', ')[0]
            first_word_positions = [i for i in range(len(prompt)) if prompt.startswith(first_word, i)]
            if len(first_word_positions) > 1:
                real_prompt = prompt[:first_word_positions[-1]]
                extracted.append(modules.sdxl_styles.fooocus_expansion)
                if real_prompt.endswith(', '):
                    real_prompt = real_prompt[:-2]

    return list(reversed(extracted)), real_prompt, negative_prompt
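
End to end, the loop peels matching styles off the prompt pair one at a time until none apply, then falls back to detecting Fooocus's prompt-expansion suffix. A hedged sketch; the exact names returned depend on what modules.sdxl_styles.styles contains:

from modules.util import extract_styles_from_prompt

styles, real_prompt, negative = extract_styles_from_prompt(
    'cinematic photo of a cat, dramatic lighting', 'blurry, low quality')
# styles      -> names of every style whose template wraps the prompts
# real_prompt -> the user's bare prompt with all style wrappers removed
# negative    -> negative prompt text the matched styles did not account for
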
class PromptStyle(typing.NamedTuple):
    name: str
    prompt: str
    negative_prompt: str


def is_json(data: str) -> bool:
    try:
        loaded_json = json.loads(data)
        assert isinstance(loaded_json, dict)
    except (ValueError, AssertionError):
        return False
    return True
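
is_json accepts only JSON objects, not bare arrays or scalars:

from modules.util import is_json

is_json('{"steps": 30}')   # True  -- parses to a dict
is_json('[1, 2, 3]')       # False -- valid JSON, but not an object
is_json('not json')        # False -- fails to parse at all
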
def get_file_from_folder_list(name, folders):
    for folder in folders:
        filename = os.path.abspath(os.path.realpath(os.path.join(folder, name)))
        if os.path.isfile(filename):
            return filename

    return os.path.abspath(os.path.realpath(os.path.join(folders[0], name)))
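
get_file_from_folder_list searches an ordered list of folders and, when nothing matches, still returns a path inside the first folder, so callers get a deterministic download or creation target. Sketch with hypothetical folders and filename:

from modules.util import get_file_from_folder_list

checkpoint = get_file_from_folder_list('juggernaut.safetensors',
                                       ['./models/checkpoints', './extra/checkpoints'])
# Returns the first existing match; if none exists, returns the (not yet
# existing) path under './models/checkpoints' as a fallback.
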
def ordinal_suffix(number: int) -> str:
    return 'th' if 10 <= number % 100 <= 20 else {1: 'st', 2: 'nd', 3: 'rd'}.get(number % 10, 'th')
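
ordinal_suffix handles the 11th-13th exception (via number % 100) before applying the usual 1st/2nd/3rd rule:

from modules.util import ordinal_suffix

[f'{n}{ordinal_suffix(n)}' for n in (1, 2, 3, 4, 11, 12, 13, 21, 112)]
# -> ['1st', '2nd', '3rd', '4th', '11th', '12th', '13th', '21st', '112th']
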
def makedirs_with_log(path):
    try:
        os.makedirs(path, exist_ok=True)
    except OSError as error:
        print(f'Directory {path} could not be created, reason: {error}')


def get_enabled_loras(loras: list) -> list:
    return [[lora[1], lora[2]] for lora in loras if lora[0]]
    return sorted(filenames, key=lambda x: -1 if os.sep in x else 1)
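
get_enabled_loras drops disabled rows from the UI's LoRA table and keeps only [filename, weight] pairs (the filenames below are hypothetical):

from modules.util import get_enabled_loras

loras = [
    [True,  'detail_tweaker.safetensors', 0.6],   # enabled -> kept
    [False, 'old_style.safetensors',      1.0],   # disabled -> dropped
    [True,  'film_grain.safetensors',     0.4],   # enabled -> kept
]
get_enabled_loras(loras)
# -> [['detail_tweaker.safetensors', 0.6], ['film_grain.safetensors', 0.4]]
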
6
presets/.gitignore
vendored
@@ -1,6 +0,0 @@
*.json
!anime.json
!default.json
!lcm.json
!realistic.json
!sai.json
Some files were not shown because too many files have changed in this diff.