* Fix a potential freeze after model mismatch
* Fix a crash when cfg=1 with the anime preset
* Added guidelines for troubleshooting the "CUDA kernel errors asynchronously" problem
lllyasviel 2023-12-14 13:55:49 -08:00 committed by GitHub
parent bac5c882ba
commit 323af5667a
4 changed files with 18 additions and 14 deletions


@@ -1 +1 @@
-version = '2.1.839'
+version = '2.1.840'


@@ -801,12 +801,12 @@ def worker():
         task = async_tasks.pop(0)
         try:
             handler(task)
+        except:
+            traceback.print_exc()
+        finally:
             build_image_wall(task)
             task.yields.append(['finish', task.results])
             pipeline.prepare_text_encoder(async_call=True)
-        except:
-            traceback.print_exc()
-            task.yields.append(['finish', task.results])
         pass
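The point of this change is that the image wall, the 'finish' message, and `prepare_text_encoder` now run in a `finally` block, so the frontend is notified even when `handler(task)` raises (for example after a model mismatch) and the queue no longer appears frozen. A minimal, self-contained sketch of the pattern; `Task`, `handle`, and `run_one` are illustrative stand-ins, not the actual Fooocus worker:

```python
import traceback

class Task:
    def __init__(self):
        self.results = []
        self.yields = []

def handle(task):
    # Stand-in for the real handler; may raise, e.g. after a model mismatch.
    raise RuntimeError("model mismatch")

def run_one(task):
    try:
        handle(task)
    except Exception:
        # Log the failure but keep the worker loop alive.
        traceback.print_exc()
    finally:
        # Runs whether handle() succeeded or raised, so the frontend
        # always receives a 'finish' message and never waits forever.
        task.yields.append(['finish', task.results])

t = Task()
run_one(t)
print(t.yields)  # [['finish', []]] even though handle() raised
```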


@@ -214,16 +214,20 @@ def compute_cfg(uncond, cond, cfg_scale, t):
 def patched_sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options=None, seed=None):
-    if math.isclose(cond_scale, 1.0):
-        return calc_cond_uncond_batch(model, cond, None, x, timestep, model_options)[0]
     global eps_record
 
+    if math.isclose(cond_scale, 1.0):
+        final_x0 = calc_cond_uncond_batch(model, cond, None, x, timestep, model_options)[0]
+        if eps_record is not None:
+            eps_record = ((x - final_x0) / timestep).cpu()
+        return final_x0
+
     positive_x0, negative_x0 = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
     positive_eps = x - positive_x0
     negative_eps = x - negative_x0
 
     sigma = timestep
     alpha = 0.001 * sharpness * global_diffusion_progress
@@ -234,7 +238,7 @@ def patched_sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options=None, seed=None):
                             cfg_scale=cond_scale, t=global_diffusion_progress)
 
     if eps_record is not None:
-        eps_record = (final_eps / sigma).cpu()
+        eps_record = (final_eps / timestep).cpu()
 
     return x - final_eps
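For context, the code above implements classifier-free guidance in epsilon (noise) space; the fix makes the cfg=1 shortcut keep `eps_record` updated instead of returning before the `global eps_record` declaration, and normalizes by `timestep` consistently in both paths. A minimal sketch of the epsilon-space CFG mix, assuming x0-style predictions like those returned by `calc_cond_uncond_batch` (the helper name and tensor shapes below are illustrative):

```python
import torch

def cfg_eps(x, positive_x0, negative_x0, cond_scale):
    # Convert x0 (denoised) predictions into eps (noise) predictions.
    positive_eps = x - positive_x0
    negative_eps = x - negative_x0
    # Classifier-free guidance: push the unconditional prediction toward
    # the conditional one, scaled by cond_scale.
    final_eps = negative_eps + cond_scale * (positive_eps - negative_eps)
    return x - final_eps  # back to an x0-style output

x = torch.randn(1, 4, 8, 8)
pos_x0, neg_x0 = torch.randn_like(x), torch.randn_like(x)
out = cfg_eps(x, pos_x0, neg_x0, cond_scale=7.0)
print(out.shape)  # torch.Size([1, 4, 8, 8])
```

The diff above layers sharpness-dependent mixing (the `alpha = 0.001 * sharpness * ...` term feeding `compute_cfg`) on top of this basic formula.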


@@ -118,12 +118,12 @@ If you get this error elsewhere in the world, then you may need to look at [this
 ### CUDA kernel errors might be asynchronously reported at some other API call
 
-This problem was fixed two months ago. Please make sure that you are using the latest version of Fooocus (try a fresh install).
-If it still does not work, try to upgrade your Nvidia driver.
-If it still does not work, open an issue with the full log, and we will take a look.
+A very small number of devices do have this problem. The cause can be complicated, but it can usually be resolved with the following steps (a quick environment check follows this list):
+1. Make sure that you are using the official, latest version installed from [here](https://github.com/lllyasviel/Fooocus#download). (Some forks and other versions are more likely to cause this problem.)
+2. Upgrade your Nvidia driver to the latest version. (The driver version should usually be 53X, not 3XX or 4XX.)
+3. If things still do not work, the problem may be with CUDA 12. You can try CUDA 11 and Xformers instead. We have prepared all the files for you; please do NOT install CUDA or any other environment on your own. The only official way to do this is: (1) back up and delete your `python_embeded` folder (near the `run.bat`); (2) download "previous_old_xformers_env.7z" from the [release page](https://github.com/lllyasviel/Fooocus/releases/tag/release), decompress it, and put the newly extracted `python_embeded` folder near your `run.bat`; (3) run Fooocus.
+4. If it still does not work, please open an issue for us to take a look.
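As referenced above, a quick sanity check to go with steps 1-3: confirm which CUDA runtime and GPU the embedded Python actually sees. This assumes PyTorch is installed (as it is in the official package); it is a diagnostic sketch, not part of the repository:

```python
import torch

print("torch version:", torch.__version__)
print("built against CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    # The device name confirms the driver is exposing your GPU at all.
    print("GPU:", torch.cuda.get_device_name(0))
```

If this reports a CUDA 12.x build and your device is affected, the CUDA 11 + Xformers environment from step 3 is the suggested fallback.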
### Found no NVIDIA driver on your system