mirror of
https://github.com/comfyanonymous/ComfyUI.git
synced 2026-02-28 20:51:09 +01:00
Some checks failed
Pinned memory was converted back to pinning the CPU-side weight without any changes. Fix the pinner to use the CPU weight rather than the model-defined geometry; this either saves RAM or prevents buffer overruns when the dtypes mismatch. Fix the model-defined weight caster to use the [s.weight, s.bias] interpretation, since xfer_dest may now be the flattened pin. Fix the cast-needed detection so it is no longer conditional on !pin.
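The core of the fix is that the pin buffer must be sized from the CPU-side tensors as they are actually stored, not from the model-defined geometry, since the stored dtype can differ from the declared one. A minimal sketch of that sizing, assuming `vram_aligned_size` rounds each tensor's byte size up to an alignment boundary and sums the results (the alignment value and helper names here are hypothetical, not the comfy API):

```python
ALIGN = 256  # hypothetical alignment boundary, in bytes


def aligned_size(nbytes, align=ALIGN):
    # Round a byte count up to the next multiple of `align`.
    return (nbytes + align - 1) // align * align


def pin_buffer_bytes(tensors):
    # Size the flat pin buffer from (numel, element_size) pairs describing
    # the tensors as they actually exist on the CPU side. Using the
    # model-defined geometry instead would overestimate (wasting pinned
    # RAM) or underestimate (risking a buffer overrun) whenever the
    # stored dtype differs from the declared one.
    return sum(aligned_size(numel * elsize) for (numel, elsize) in tensors)


# e.g. a layer stored as fp16 on the CPU: 1024x1024 weight plus 1024 bias
FP16_BYTES = 2
print(pin_buffer_bytes([(1024 * 1024, FP16_BYTES), (1024, FP16_BYTES)]))
```

Sizing the same layer with a model-defined fp32 geometry would double the buffer; the fix keys the size off the real CPU tensors instead.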
30 lines
856 B
Python
import torch

import comfy.model_management
import comfy.memory_management

from comfy.cli_args import args


def get_pin(module):
    return getattr(module, "_pin", None)


def pin_memory(module):
    # Skip modules that already failed to pin, have pinning disabled,
    # or are already pinned.
    if module.pin_failed or args.disable_pinned_memory or get_pin(module) is not None:
        return
    #FIXME: This is a RAM cache trigger event
    # Size the pin from the actual CPU-side tensors, not the
    # model-defined geometry.
    size = comfy.memory_management.vram_aligned_size([module.weight, module.bias])
    pin = torch.empty((size,), dtype=torch.uint8)
    if comfy.model_management.pin_memory(pin):
        module._pin = pin
    else:
        module.pin_failed = True
        return False
    return True


def unpin_memory(module):
    if get_pin(module) is None:
        return 0
    size = module._pin.numel() * module._pin.element_size()
    comfy.model_management.unpin_memory(module._pin)
    del module._pin
    return size
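The pin/unpin pair above follows a simple lifecycle: allocate a flat buffer, ask the backend to pin it, stash it on the module, and on unpin report the bytes released. A torch-free sketch of that lifecycle, with a stand-in pinning backend (every name here is hypothetical, not the comfy API):

```python
from types import SimpleNamespace


def fake_pin(buf):
    # Stand-in for comfy.model_management.pin_memory: pretend the OS
    # granted the pinned pages.
    return True


def pin(module, nbytes):
    # Mirror pin_memory(): skip if already pinned, record failure
    # so the module is never retried.
    if getattr(module, "_pin", None) is not None:
        return True
    pin_buf = bytearray(nbytes)  # stand-in for the flat uint8 tensor
    if fake_pin(pin_buf):
        module._pin = pin_buf
        return True
    module.pin_failed = True
    return False


def unpin(module):
    # Mirror unpin_memory(): report how many bytes were released.
    if getattr(module, "_pin", None) is None:
        return 0
    size = len(module._pin)
    del module._pin
    return size


m = SimpleNamespace(pin_failed=False)
pin(m, 856)
print(unpin(m))  # prints 856: bytes released
```

A second `unpin` on the same module returns 0, which is what lets callers sum released bytes safely without tracking pin state themselves.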