Description
Hi there, thanks for all the great work.
I have always been installing the PyTorch nightly builds. After updating this week, I noticed that logging is forcibly enabled in my app (reForge). The same thing also happens, for example, when using ComfyUI.
I'm installing with
```
pip install torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu129
```
and I get this log:
```
Loading model /naiXLVpred102d_colorized.safetensors [2a7fc06348] (1 of 1)
Loading weights [2a7fc06348] from /naiXLVpred102d_colorized.safetensors
CHv1.8.13: Set Proxy:
INFO:root:model weight dtype torch.float16, manual cast: None
INFO:root:model_type V_PREDICTION
INFO:root:Using pytorch attention in VAE
INFO:root:Using pytorch attention in VAE
INFO:root:VAE load device: cuda:0, offload device: cuda:0, dtype: torch.bfloat16
INFO:root:Conditional stage model device compatibility check is disabled.
INFO:root:Requested to load SDXLClipModel
INFO:root:loaded completely 9.5367431640625e+25 1749.853515625 True
INFO:root:CLIP/text encoder model load device: cuda:0, offload device: cuda:0, current: cuda:0, dtype: torch.float16
WARNING:root:clip missing: ['clip_l.text_projection', 'clip_l.logit_scale']
loaded diffusion model directly to GPU
INFO:root:Requested to load SDXL
INFO:root:loaded completely 9.5367431640625e+25 4897.0483474731445 True
Loading VAE weights specified in settings: /models/VAE/sdxlVaeAnimeTest_alpha67500.safetensors
Reloading VAE
Loading VAE weights specified in settings: /models/VAE/sdxlVaeAnimeTest_alpha67500.safetensors
INFO:root:Requested to load SDXLClipModel
INFO:root:loaded completely 9.5367431640625e+25 2521.853515625 True
Model /naiXLVpred102d_colorized.safetensors [2a7fc06348] loaded in 2.3s (memory cleanup: 0.5s, forge load real models: 1.3s, load VAE: 0.2s, calculate empty prompt: 0.1s).
Memory change: 0.00 MB (10.62 GB total)
2025-07-14 20:41:01,152 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: https://0.0.0.0:8280
```
Then, for every subsequent generation (or anything else that touches the application/API), more logging/debug output is emitted, e.g.:
```
INFO:httpx:HTTP Request: POST https://localhost:8280/api/predict "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://localhost:8280/api/predict "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://localhost:8280/api/predict "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://localhost:8280/api/predict "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://localhost:8280/api/predict "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://localhost:8280/reset "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://localhost:8280/reset "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://localhost:8280/reset "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://localhost:8280/reset "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://localhost:8280/reset "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://localhost:8280/api/predict "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://localhost:8280/api/predict "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://localhost:8280/reset "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://localhost:8280/reset "HTTP/1.1 200 OK"
```
But when I install with
```
pip install torch==2.9.0.dev20250629 torchvision --index-url https://download.pytorch.org/whl/nightly/cu129
```
logging/debugging is not force-enabled, and the output looks like this:
```
Loading model /naiXLVpred102d_colorized.safetensors [2a7fc06348] (1 of 1)
Loading weights [2a7fc06348] from /naiXLVpred102d_colorized.safetensors
CHv1.8.13: Set Proxy:
WARNING:root:clip missing: ['clip_l.text_projection', 'clip_l.logit_scale']
loaded diffusion model directly to GPU
Loading VAE weights specified in settings: /models/VAE/sdxlVaeAnimeTest_alpha67500.safetensors
Reloading VAE
Loading VAE weights specified in settings: /models/VAE/sdxlVaeAnimeTest_alpha67500.safetensors
Model /naiXLVpred102d_colorized.safetensors [2a7fc06348] loaded in 2.3s (memory cleanup: 0.5s, forge load real models: 1.4s, load VAE: 0.2s, calculate empty prompt: 0.1s).
Memory change: 0.00 MB (10.76 GB total)
2025-07-14 20:38:49,048 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: https://0.0.0.0:8280
```
This happens with either torch+cu128 or torch+cu129.
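To confirm it is the torch import itself that reconfigures logging (rather than the app), a minimal check like the one below should show it. It only inspects the root logger before and after the import, assuming the extra output comes from a basicConfig-style setup running at import time:

```python
import logging

root = logging.getLogger()
print("before import:", root.level, root.handlers)  # expect 30 (WARNING) and []

import torch  # noqa: E402

print("after import:", root.level, root.handlers)
# If the nightly attached a StreamHandler or lowered the level to INFO (20)
# here, that would explain the INFO:root:... lines in downstream apps.
```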
Is there maybe a setting or environment variable to disable this? Using the normal environment variables or commands to set the debug/log level to "None" doesn't work.
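In the meantime I'm working around it with something like the sketch below, run right after importing torch. This assumes the root cause is an unconditional handler/level change on the root logger; the httpx line just mutes the per-request spam separately:

```python
import logging
import torch  # import first, so any logging setup it triggers has already run

# Reset the root logger back to its defaults.
root = logging.getLogger()
for handler in list(root.handlers):
    root.removeHandler(handler)
root.setLevel(logging.WARNING)

# Silence the per-request INFO lines from httpx specifically.
logging.getLogger("httpx").setLevel(logging.WARNING)
```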
Thanks.