convert-hf-to-gguf.py XVERSE-13B-256K error #6425

@edisonzf2020

Description

Please include information about your system, the steps to reproduce the bug, and the version of llama.cpp that you are using. If possible, please provide a minimal code example that reproduces the bug.

Model:
https://huggingface.co/xverse/XVERSE-13B-256K

Command:

```
python convert-hf-to-gguf.py /Volumes/FanData/models/XVERSE-13B-256K --outfile /Volumes/FanData/models/GGUF/xverse-13b-256k-f16.gguf --outtype f16
```

Output:

```
Loading model: XVERSE-13B-256K
gguf: This GGUF file is for Little Endian only
Set model parameters
Set model tokenizer
gguf: Setting special token type bos to 2
gguf: Setting special token type eos to 3
gguf: Setting special token type pad to 1
Exporting model to '/Volumes/FanData/models/GGUF/xverse-13b-256k-f16.gguf'
gguf: loading model part 'pytorch_model-00001-of-00015.bin'
Traceback (most recent call last):
  File "/Users/fanmac/AI/llama.cpp/convert-hf-to-gguf.py", line 2296, in <module>
    main()
  File "/Users/fanmac/AI/llama.cpp/convert-hf-to-gguf.py", line 2290, in main
    model_instance.write()
  File "/Users/fanmac/AI/llama.cpp/convert-hf-to-gguf.py", line 175, in write
    self.write_tensors()
  File "/Users/fanmac/AI/llama.cpp/convert-hf-to-gguf.py", line 858, in write_tensors
    model_kv = dict(self.get_tensors())
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/fanmac/AI/llama.cpp/convert-hf-to-gguf.py", line 83, in get_tensors
    ctx = contextlib.nullcontext(torch.load(str(self.dir_model / part_name), map_location="cpu", mmap=True, weights_only=True))
  File "/Users/fanmac/.miniconda3/envs/llamacpp/lib/python3.11/site-packages/torch/serialization.py", line 993, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "/Users/fanmac/.miniconda3/envs/llamacpp/lib/python3.11/site-packages/torch/serialization.py", line 447, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
```
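The `failed finding central directory` error from `PytorchStreamReader` typically means one of the checkpoint shards is a truncated or corrupted download, since modern PyTorch `.bin` checkpoints are zip archives. A minimal sketch for locating a bad shard before re-running the converter (`find_corrupt_checkpoints` is a hypothetical helper, not part of llama.cpp):

```python
import zipfile
from pathlib import Path

def find_corrupt_checkpoints(model_dir: str) -> list[str]:
    """Return the names of checkpoint shards that are not valid zip archives.

    PyTorch's zip-based checkpoint format requires an intact central
    directory at the end of the file; a shard that fails is_zipfile()
    is usually an incomplete download and should be re-fetched.
    """
    bad = []
    for part in sorted(Path(model_dir).glob("pytorch_model-*.bin")):
        if not zipfile.is_zipfile(part):
            bad.append(part.name)
    return bad
```

If this reports a shard, re-downloading it from the Hugging Face repo (or re-running the download with resume enabled) usually resolves the conversion error.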

If the bug concerns the server, please try to reproduce it first using the server test scenario framework.

Metadata

Assignees

No one assigned

    Labels

    invalid (This doesn't seem right)
