
Model downloaded from ModelScope doesn't work, files are missing #28

Closed
zhengyangyong opened this issue May 16, 2024 · 2 comments

Comments

@zhengyangyong

2024-05-16 03:24:16.542 | INFO     | hydit.inference:__init__:160 - Got text-to-image model root path: ckpts/t2i
2024-05-16 03:24:21.606 | INFO     | hydit.inference:__init__:172 - Loading CLIP Text Encoder...
2024-05-16 03:24:24.485 | INFO     | hydit.inference:__init__:175 - Loading CLIP Text Encoder finished
2024-05-16 03:24:24.485 | INFO     | hydit.inference:__init__:178 - Loading CLIP Tokenizer...
2024-05-16 03:24:24.675 | INFO     | hydit.inference:__init__:181 - Loading CLIP Tokenizer finished
2024-05-16 03:24:24.675 | INFO     | hydit.inference:__init__:184 - Loading T5 Text Encoder and T5 Tokenizer...
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
/usr/python/lib/python3.9/site-packages/transformers/convert_slow_tokenizer.py:515: UserWarning: The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers. In practice this means that the fast version of the tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these unknown tokens into a sequence of byte tokens matching the original piece of text.
  warnings.warn(
You are using a model of type mt5 to instantiate a model of type t5. This is not supported for all configurations of models and can yield errors.
2024-05-16 03:24:40.931 | INFO     | hydit.inference:__init__:188 - Loading t5_text_encoder and t5_tokenizer finished
2024-05-16 03:24:40.932 | INFO     | hydit.inference:__init__:191 - Loading VAE...
2024-05-16 03:24:41.118 | INFO     | hydit.inference:__init__:194 - Loading VAE finished
2024-05-16 03:24:41.118 | INFO     | hydit.inference:__init__:198 - Building HunYuan-DiT model...
2024-05-16 03:24:41.666 | INFO     | hydit.modules.models:__init__:229 -     Number of tokens: 4096
2024-05-16 03:24:57.888 | INFO     | hydit.inference:__init__:218 - Loading model checkpoint ckpts/t2i/model/pytorch_model_ema.pt...
2024-05-16 03:25:00.776 | INFO     | hydit.inference:__init__:229 - Loading inference pipeline...
2024-05-16 03:25:00.796 | INFO     | hydit.inference:__init__:231 - Loading pipeline finished
2024-05-16 03:25:00.797 | INFO     | hydit.inference:__init__:235 - ==================================================
2024-05-16 03:25:00.797 | INFO     | hydit.inference:__init__:236 -                 Model is ready.                  
2024-05-16 03:25:00.797 | INFO     | hydit.inference:__init__:237 - ==================================================
2024-05-16 03:25:00.823 | INFO     | sample_t2i:inferencer:21 - Loading DialogGen model (for prompt enhancement)...
/usr/python/lib/python3.9/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Traceback (most recent call last):
  File "/usr/python/lib/python3.9/site-packages/urllib3/connection.py", line 198, in _new_conn
    sock = connection.create_connection(
  File "/usr/python/lib/python3.9/site-packages/urllib3/util/connection.py", line 85, in create_connection
    raise err
  File "/usr/python/lib/python3.9/site-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
OSError: [Errno 101] Network is unreachable

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/python/lib/python3.9/site-packages/urllib3/connectionpool.py", line 793, in urlopen
    response = self._make_request(
  File "/usr/python/lib/python3.9/site-packages/urllib3/connectionpool.py", line 491, in _make_request
    raise new_e
  File "/usr/python/lib/python3.9/site-packages/urllib3/connectionpool.py", line 467, in _make_request
    self._validate_conn(conn)
  File "/usr/python/lib/python3.9/site-packages/urllib3/connectionpool.py", line 1099, in _validate_conn
    conn.connect()
  File "/usr/python/lib/python3.9/site-packages/urllib3/connection.py", line 616, in connect
    self.sock = sock = self._new_conn()
  File "/usr/python/lib/python3.9/site-packages/urllib3/connection.py", line 213, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7fbfbc7e47c0>: Failed to establish a new connection: [Errno 101] Network is unreachable

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/python/lib/python3.9/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/usr/python/lib/python3.9/site-packages/urllib3/connectionpool.py", line 847, in urlopen
    retries = retries.increment(
  File "/usr/python/lib/python3.9/site-packages/urllib3/util/retry.py", line 515, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14-336/resolve/main/preprocessor_config.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fbfbc7e47c0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/python/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1722, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(url=url, proxies=proxies, timeout=etag_timeout, headers=headers)
  File "/usr/python/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/python/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1645, in get_hf_file_metadata
    r = _request_wrapper(
  File "/usr/python/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 372, in _request_wrapper
    response = _request_wrapper(
  File "/usr/python/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 395, in _request_wrapper
    response = get_session().request(method=method, url=url, **params)
  File "/usr/python/lib/python3.9/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/python/lib/python3.9/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/usr/python/lib/python3.9/site-packages/huggingface_hub/utils/_http.py", line 66, in send
    return super().send(request, *args, **kwargs)
  File "/usr/python/lib/python3.9/site-packages/requests/adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14-336/resolve/main/preprocessor_config.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fbfbc7e47c0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))"), '(Request ID: 2150ae95-4931-46f6-85fd-b17bbbe82b7c)')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/python/lib/python3.9/site-packages/transformers/utils/hub.py", line 385, in cached_file
    resolved_file = hf_hub_download(
  File "/usr/python/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/python/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1221, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "/usr/python/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1325, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "/usr/python/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1826, in _raise_on_head_call_error
    raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/data/tti/HunyuanDiT/./app/hydit_api.py", line 34, in <module>
    args, gen, enhancer = inferencer()
  File "/data/tti/HunyuanDiT/sample_t2i.py", line 22, in inferencer
    enhancer = DialogGen(str(models_root_path / "dialoggen"))
  File "/data/tti/HunyuanDiT/dialoggen/dialoggen_demo.py", line 141, in __init__
    self.models = init_dialoggen_model(model_path)
  File "/data/tti/HunyuanDiT/dialoggen/dialoggen_demo.py", line 55, in init_dialoggen_model
    tokenizer, model, image_processor, context_len = load_pretrained_model(
  File "/data/tti/HunyuanDiT/dialoggen/llava/model/builder.py", line 142, in load_pretrained_model
    model = AutoModelForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, **kwargs)
  File "/usr/python/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/python/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3594, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/data/tti/HunyuanDiT/dialoggen/llava/model/language_model/llava_mistral.py", line 47, in __init__
    self.model = LlavaMistralModel(config)
  File "/data/tti/HunyuanDiT/dialoggen/llava/model/language_model/llava_mistral.py", line 39, in __init__
    super(LlavaMistralModel, self).__init__(config)
  File "/data/tti/HunyuanDiT/dialoggen/llava/model/llava_arch.py", line 35, in __init__
    self.vision_tower = build_vision_tower(config, delay_load=True)
  File "/data/tti/HunyuanDiT/dialoggen/llava/model/multimodal_encoder/builder.py", line 9, in build_vision_tower
    return CLIPVisionTower(vision_tower, args=vision_tower_cfg, **kwargs)
  File "/data/tti/HunyuanDiT/dialoggen/llava/model/multimodal_encoder/clip_encoder.py", line 20, in __init__
    self.load_model()
  File "/data/tti/HunyuanDiT/dialoggen/llava/model/multimodal_encoder/clip_encoder.py", line 29, in load_model
    self.image_processor = CLIPImageProcessor.from_pretrained(self.vision_tower_name)
  File "/usr/python/lib/python3.9/site-packages/transformers/image_processing_utils.py", line 206, in from_pretrained
    image_processor_dict, kwargs = cls.get_image_processor_dict(pretrained_model_name_or_path, **kwargs)
  File "/usr/python/lib/python3.9/site-packages/transformers/image_processing_utils.py", line 335, in get_image_processor_dict
    resolved_image_processor_file = cached_file(
  File "/usr/python/lib/python3.9/site-packages/transformers/utils/hub.py", line 425, in cached_file
    raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like openai/clip-vit-large-patch14-336 is not the path to a directory containing a file named preprocessor_config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.

It looks like openai/clip-vit-large-patch14-336 is required; the config file does indeed contain:

  "mm_vision_tower": "openai/clip-vit-large-patch14-336",

This feels a bit sloppy.

@YangKai0616

Same here — this error appears as soon as prompt enhancement is used.

@Jiangfeng-Xiong
Collaborator

This happens because some required weights need to be downloaded from Hugging Face, and your local network environment cannot reach it. You can manually download the files from "https://huggingface.co/openai/clip-vit-large-patch14-336" and change the configuration file to point at the local directory: https://huggingface.co/Tencent-Hunyuan/HunyuanDiT/blob/main/dialoggen/config.json#L50
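As a minimal sketch of the second step, the snippet below rewrites the `mm_vision_tower` key in a DialogGen `config.json` so it points at a locally downloaded copy of the CLIP model instead of the Hub ID. The function name and both paths are hypothetical — adjust them to wherever your checkpoints and the downloaded `clip-vit-large-patch14-336` folder actually live; the demo runs on a throwaway file so nothing real is modified.

```python
import json
import os
import tempfile

def point_vision_tower_to_local(config_path: str, local_clip_dir: str) -> None:
    """Rewrite mm_vision_tower in a DialogGen config.json to a local directory."""
    with open(config_path) as f:
        cfg = json.load(f)
    cfg["mm_vision_tower"] = local_clip_dir  # was e.g. "openai/clip-vit-large-patch14-336"
    with open(config_path, "w") as f:
        json.dump(cfg, f, indent=2)

# Demo on a throwaway config containing only the relevant key:
demo = os.path.join(tempfile.mkdtemp(), "config.json")
with open(demo, "w") as f:
    json.dump({"mm_vision_tower": "openai/clip-vit-large-patch14-336"}, f)

point_vision_tower_to_local(demo, "/data/models/clip-vit-large-patch14-336")
with open(demo) as f:
    print(json.load(f)["mm_vision_tower"])
```

With a recent `huggingface_hub`, the CLIP files themselves can be fetched on a machine that does have access (for example with `huggingface-cli download openai/clip-vit-large-patch14-336 --local-dir <dir>`) and then copied over.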

@yestinl closed this as completed May 22, 2024