
How to load models from a local Windows path? #434


Description

@3dluvr

I am trying to load existing models I already have in a different folder (a portable ComfyUI installation) instead of the models/ folder local to DSS - how do I do that?

I provided the paths as:

import torch
from diffsynth import ModelManager

model_manager = ModelManager(device="cpu")
model_manager.load_models(
    [
        "G:\\ComfyUI_windows_portable\\ComfyUI\\models\\diffusion_models\\Wan2_1-T2V-1_3B_bf16.safetensors",
        "G:\\ComfyUI_windows_portable\\ComfyUI\\models\\clip\\umt5-xxl-enc-fp8_e4m3fn.safetensors",
        "G:\\ComfyUI_windows_portable\\ComfyUI\\models\\vae\\Wan2_1_VAE_fp32.safetensors",
    ],
    torch_dtype=torch.float8_e4m3fn,
)

...and by all accounts this should just work. At first it appears to load the models, but then it fails with an error:

Loading models from: G:\ComfyUI_windows_portable\ComfyUI\models\diffusion_models\Wan2_1-T2V-1_3B_bf16.safetensors
    model_name: wan_video_dit model_class: WanModel
        This model is initialized with extra kwargs: {'model_type': 't2v', 'patch_size': (1, 2, 2), 'text_len': 512, 'in_dim': 16, 'dim': 1536, 'ffn_dim': 8960, 'freq_dim': 256, 'text_dim': 4096, 'out_dim': 16, 'num_heads': 12, 'num_layers': 30, 'window_size': (-1, -1), 'qk_norm': True, 'cross_attn_norm': True, 'eps': 1e-06}
    The following models are loaded: ['wan_video_dit'].
Loading models from: G:\ComfyUI_windows_portable\ComfyUI\models\clip\umt5-xxl-enc-fp8_e4m3fn.safetensors
    model_name: wan_video_text_encoder model_class: WanTextEncoder
    The following models are loaded: ['wan_video_text_encoder'].
Loading models from: G:\ComfyUI_windows_portable\ComfyUI\models\vae\Wan2_1_VAE_fp32.safetensors
    model_name: wan_video_vae model_class: WanVideoVAE
    The following models are loaded: ['wan_video_vae'].
Using wan_video_text_encoder from G:\ComfyUI_windows_portable\ComfyUI\models\clip\umt5-xxl-enc-fp8_e4m3fn.safetensors.
Traceback (most recent call last):
  File "G:\FluxGym\Lib\site-packages\transformers\utils\hub.py", line 403, in cached_file
    resolved_file = hf_hub_download(
                    ^^^^^^^^^^^^^^^^
  File "G:\FluxGym\Lib\site-packages\huggingface_hub\utils\_validators.py", line 106, in _inner_fn
    validate_repo_id(arg_value)
  File "G:\FluxGym\Lib\site-packages\huggingface_hub\utils\_validators.py", line 160, in validate_repo_id
    raise HFValidationError(
huggingface_hub.errors.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'G:\ComfyUI_windows_portable\ComfyUI\models\clip\google/umt5-xxl'.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "G:\FluxGym\DiffSynth-Studio\examples\wanvideo\my_wan_1.3b_text_to_video.py", line 19, in <module>
    pipe = WanVideoPipeline.from_model_manager(model_manager, torch_dtype=torch.bfloat16, device="cuda")
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\FluxGym\Lib\site-packages\diffsynth\pipelines\wan_video.py", line 142, in from_model_manager
    pipe.fetch_models(model_manager)
  File "G:\FluxGym\Lib\site-packages\diffsynth\pipelines\wan_video.py", line 131, in fetch_models
    self.prompter.fetch_tokenizer(os.path.join(os.path.dirname(tokenizer_path), "google/umt5-xxl"))
  File "G:\FluxGym\Lib\site-packages\diffsynth\prompters\wan_prompter.py", line 94, in fetch_tokenizer
    self.tokenizer = HuggingfaceTokenizer(name=tokenizer_path, seq_len=self.text_len, clean='whitespace')
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\FluxGym\Lib\site-packages\diffsynth\prompters\wan_prompter.py", line 45, in __init__
    self.tokenizer = AutoTokenizer.from_pretrained(name, **kwargs)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\FluxGym\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 871, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\FluxGym\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 703, in get_tokenizer_config
    resolved_config_file = cached_file(
                           ^^^^^^^^^^^^
  File "G:\FluxGym\Lib\site-packages\transformers\utils\hub.py", line 469, in cached_file
    raise EnvironmentError(
OSError: Incorrect path_or_model_id: 'G:\ComfyUI_windows_portable\ComfyUI\models\clip\google/umt5-xxl'. Please provide either the path to a local folder or the repo_id of a model on the Hub.

Why is it trying to load the tokenizer from the HF Hub? Why is it going to the Hub at all when I provided local model paths?
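If I'm reading the traceback correctly, the pipeline doesn't take the tokenizer from the text encoder file at all: wan_video.py joins the directory of the clip checkpoint with the literal sub-path "google/umt5-xxl" and passes that to AutoTokenizer.from_pretrained. Since no such folder exists next to my ComfyUI clip file, transformers treats the whole string as a Hub repo id and the validation fails. A possible workaround (untested; the file patterns below are an assumption about which files AutoTokenizer needs) would be to pre-download just the tokenizer files into the folder the pipeline expects:

# Untested sketch: pre-populate the tokenizer folder the pipeline seems to expect,
# i.e. <dir of the clip checkpoint>/google/umt5-xxl.
import os
from huggingface_hub import snapshot_download

clip_path = r"G:\ComfyUI_windows_portable\ComfyUI\models\clip\umt5-xxl-enc-fp8_e4m3fn.safetensors"
tokenizer_dir = os.path.join(os.path.dirname(clip_path), "google", "umt5-xxl")

# Fetch only the small tokenizer/config files (pattern list is an assumption),
# not the multi-GB model weights.
snapshot_download(
    repo_id="google/umt5-xxl",
    local_dir=tokenizer_dir,
    allow_patterns=["*.json", "*.model", "*.txt"],
)

But even if that works, it seems like the tokenizer path should be configurable rather than derived from the clip file's location.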
