rattus
0bfb936ab4
comfy-aimdo 0.2 - Improved pytorch allocator integration (#12557)
...
Integrate comfy-aimdo 0.2, which takes a different approach to
installing the memory allocator hook. Instead of using the complicated
and buggy pytorch MemPool+CUDAPluggableAllocator path, cuda is hooked
directly, making the process much more transparent to both comfy and
pytorch. As far as pytorch is concerned, aimdo no longer exists and
just operates behind the scenes.
Remove all the mempool setup code for dynamic_vram and bump the
comfy-aimdo version. Remove the allocator object from memory_management
and demote its use as an enablement check to a boolean flag.
Comfy-aimdo 0.2 also supports the pytorch cuda async allocator, so
remove the dynamic_vram based forced disabling of cuda_malloc and
just go back to the old behavior of selecting allocators based on
command line input.
2026-02-21 10:52:57 -08:00
rattus
f8acd9c402
Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading (#11845)
2026-02-01 01:01:11 -05:00
comfyanonymous
d7a0aef650
Set OCL_SET_SVM_SIZE on AMD. (#11139)
2025-12-06 00:15:21 -05:00
comfyanonymous
5b80addafd
Turn off cuda malloc by default when --fast autotune is turned on. (#10393)
2025-10-18 22:35:46 -04:00
comfyanonymous
e78d230496
Only enable cuda malloc on cuda torch. (#9031)
2025-07-23 19:37:43 -04:00
comfyanonymous
f1d6cef71c
Revert "Disable cuda malloc by default."
...
This reverts commit 50bf66e5c44fe3637f29999034c10a0c083c7600.
2024-08-14 08:38:07 -04:00
comfyanonymous
50bf66e5c4
Disable cuda malloc by default.
2024-08-14 02:49:25 -04:00
comfyanonymous
2f93b91646
Add Tesla GPUs to cuda malloc blacklist.
2024-03-26 23:09:28 -04:00
comfyanonymous
caddef8d88
Auto disable cuda malloc on unsupported GPUs on Linux.
2024-03-04 09:03:59 -05:00
comfyanonymous
192ca0676c
Add some more cards to the cuda malloc blacklist.
2023-08-13 16:08:11 -04:00
comfyanonymous
861fd58819
Add a warning if a card that doesn't support cuda malloc has it enabled.
2023-08-13 12:37:53 -04:00
comfyanonymous
fc71cf656e
Add some 800M gpus to cuda malloc blacklist.
2023-08-05 21:54:52 -04:00
comfyanonymous
5a90d3cea5
GeForce MX110 + MX130 are maxwell.
2023-08-04 21:44:37 -04:00
comfyanonymous
7c0a5a3e0e
Disable cuda malloc on a bunch of quadro cards.
2023-07-25 00:09:01 -04:00
comfyanonymous
30de083dd0
Disable cuda malloc on all the 9xx series.
2023-07-23 13:29:14 -04:00
comfyanonymous
85a8900a14
Disable cuda malloc on regular GTX 960.
2023-07-22 11:05:33 -04:00
comfyanonymous
39c58b227f
Disable cuda malloc on GTX 750 Ti.
2023-07-19 15:14:10 -04:00
comfyanonymous
799c08a4ce
Auto disable cuda malloc on some GPUs on windows.
2023-07-19 14:43:55 -04:00