
Invalid: Protobuf parsing failed? #42

Closed

jryebread opened this issue Jun 17, 2024 · 4 comments
Hi, when I run it I get this error; any idea what could be wrong?

GPU: A10G 24GB VRAM

```
  return torch._C._cuda_getDeviceCount() > 0
Traceback (most recent call last):
  File "/home/ubuntu/hallo-webui/scripts/inference.py", line 424, in <module>
    inference_process(
  File "/home/ubuntu/hallo-webui/scripts/inference.py", line 181, in inference_process
    with ImageProcessor(img_size, face_analysis_model_path) as image_processor:
  File "/home/ubuntu/hallo-webui/hallo/datasets/image_processor.py", line 97, in __init__
    self.face_analysis = FaceAnalysis(
  File "/home/ubuntu/hallo-webui/venv/lib/python3.10/site-packages/insightface/app/face_analysis.py", line 31, in __init__
    model = model_zoo.get_model(onnx_file, **kwargs)
  File "/home/ubuntu/hallo-webui/venv/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 96, in get_model
    model = router.get_model(providers=providers, provider_options=provider_options)
  File "/home/ubuntu/hallo-webui/venv/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 40, in get_model
    session = PickableInferenceSession(self.onnx_file, **kwargs)
  File "/home/ubuntu/hallo-webui/venv/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 25, in __init__
    super().__init__(model_path, **kwargs)
  File "/home/ubuntu/hallo-webui/venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/ubuntu/hallo-webui/venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 472, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from ./pretrained_models/face_analysis/models/1k3d68.onnx failed:Protobuf parsing failed.
```
Contributor

xumingw commented Jun 18, 2024

Could you check your package versions and the sha256sums of your models?
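To compare local files against the checksums published on the model hub, a small sketch like this can be used (the helper name `sha256sum` is just illustrative, not part of the project):

```python
import hashlib


def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks
    so large model files don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Example: hash the model that failed to load (path from the traceback above).
# print(sha256sum("./pretrained_models/face_analysis/models/1k3d68.onnx"))
```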

@xingdi1990

same here


SystemR commented Jun 19, 2024

I hit this problem earlier and realized I didn't have git lfs installed on the machine I was trying this project on. Without git lfs, cloning the model repo downloads small text pointer files instead of the actual weights, which is why onnxruntime fails to parse them.

Browse the repo at https://huggingface.co/fudan-generative-ai/hallo/tree/main and check whether the sha256 checksums listed there match the files on your machine.
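A quick way to spot the failure mode described above: a git-lfs pointer file is a tiny text file beginning with the LFS spec line, while a real `.onnx` model is megabytes of protobuf. This sketch (the helper name `looks_like_lfs_pointer` is hypothetical) assumes pointers follow the git-lfs v1 spec:

```python
import os

# First line of every git-lfs v1 pointer file.
LFS_MAGIC = b"version https://git-lfs.github.com/spec/v1"


def looks_like_lfs_pointer(path: str) -> bool:
    """Heuristic check: a git-lfs pointer is a tiny text file that
    starts with the LFS spec line instead of real model bytes."""
    if os.path.getsize(path) > 1024:  # real model weights are far larger
        return False
    with open(path, "rb") as f:
        return f.read(len(LFS_MAGIC)) == LFS_MAGIC
```

If this returns True for a file under `pretrained_models/`, re-download after running `git lfs install`.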

AricGamma mentioned this issue Jun 20, 2024

@AricGamma
Member

Since there has been no further feedback, this issue will be closed. If needed, please reopen it.


5 participants